Blur is, perhaps surprisingly, one of the degradations we know best how to undo. It's been studied extensively because there are so many applications: microscopes, telescopes, digital cameras. The usual tricks revolve around inverting blur kernels and making educated guesses about what the blur kernel and the underlying image might look like. My advisors and I were even able to train deep neural networks using only blurry images, under a fairly mild assumption of approximate scale-invariance at the training-dataset level [1].
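For anyone curious what "inverting the blur kernel" looks like concretely, here's a minimal sketch of Wiener (regularized inverse) deconvolution in NumPy. It assumes the kernel is known and the blur is a circular convolution; the function names and the synthetic round-trip are just illustrative, not the method from [1].

```python
import numpy as np

def psf_to_otf(kernel, shape):
    """Pad the blur kernel to the image shape, center it at the origin,
    and take its FFT (the blur's frequency response)."""
    padded = np.zeros(shape)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(padded)

def wiener_deconvolve(blurred, kernel, noise_power=1e-3):
    """Regularized inverse filter: divide by the blur's frequency response,
    damping frequencies the kernel nearly wiped out instead of amplifying noise."""
    H = psf_to_otf(kernel, blurred.shape)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + noise_power) * G
    return np.real(np.fft.ifft2(F_hat))

# Round trip on a synthetic image with a known 5x5 box blur.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
kernel = np.ones((5, 5)) / 25.0
H = psf_to_otf(kernel, sharp.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * H))
restored = wiener_deconvolve(blurred, kernel, noise_power=1e-6)
# With no added noise, the restored image closely matches the sharp one.
print(np.abs(restored - sharp).max())
```

In real photos the kernel is usually unknown, which is where the "educated guesses" (blind deconvolution) come in.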
Just to add to this: intentional/digital blur is even easier to undo, since the source image's information is still mostly there. You just have to find the inverse transform.
This is how one of the more notorious pedophiles[1] was caught[2].
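On the "find the inverse transform" point, here's a toy sketch (assuming scikit-image is available, and using its swirl warp purely as a stand-in for the obfuscation): a known, purely geometric distortion can be undone almost exactly by applying the same warp with its strength negated.

```python
import numpy as np
from skimage import data
from skimage.transform import swirl

# Start from a sharp image and apply an intentional, reversible distortion.
original = data.camera() / 255.0
distorted = swirl(original, strength=10, radius=120)

# The swirl only rotates pixels about its center by an angle that depends on
# radius, so applying it again with negated strength undoes it
# (up to interpolation error near the edge of the swirled region).
recovered = swirl(distorted, strength=-10, radius=120)

print(np.abs(recovered - original).mean())  # small mean error
```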
I didn't learn about this trick (deconvolution) until grad school, and even then it seemed like a spooky mystery to me.
Isn't that roughly (ok, very roughly) how generative diffusion AIs work when you ask them to make an image?