ywenz said:
You can't argue with this.
Yes, you can.
What they're doing is putting an additional microlens array over a very-high-density pixel sensor (medium format size) so that each image pixel is subdivided into several sub-pixels, enabling them to record the distribution of light energy over the area of the overall image pixel.
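To make that concrete, here's a rough Python sketch of how that raw sensor readout gets reorganized. The grid sizes, the block layout, and the names (raw_sensor, light_field) are assumptions of mine for illustration, not details of the actual product:

    import numpy as np

    # Assumed geometry: a grid of microlenses, each covering an N_U x N_V
    # patch of sub-pixels that sample the angular spread of light arriving
    # at that image pixel. Real hardware may differ (e.g. hexagonal layouts).
    N_S, N_T = 300, 300   # microlens (image pixel) grid
    N_U, N_V = 10, 10     # sub-pixels (angular samples) per microlens

    # The flat high-density readout: each microlens occupies one
    # contiguous N_U x N_V block of the big 2D array.
    raw_sensor = np.random.rand(N_S * N_U, N_T * N_V)

    # Reorganize into a 4D "light field" L[s, t, u, v]:
    #   (s, t) = which image pixel (which microlens),
    #   (u, v) = direction of arrival within that pixel.
    light_field = (raw_sensor
                   .reshape(N_S, N_U, N_T, N_V)
                   .transpose(0, 2, 1, 3))

    # Summing over the angular axes throws the direction information away
    # and recovers the ordinary photograph the sensor would have produced.
    conventional_image = light_field.sum(axis=(2, 3))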
Then, using equations similar to those already in use for lens design and astronomical photography, they can interpret this energy distribution to calculate what the pixel would have looked like if the lens had been focused differently. (In principle, the same type of calculation could be used to eliminate lens aberrations, allowing the use of a lower-quality lens to produce high-quality photos.)
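The usual way that calculation is described in the light field literature is "shift and add": treat each sub-pixel position as a view through one small part of the lens aperture, slide those views against each other by an amount that depends on the chosen virtual focal plane, and average. A minimal sketch, reusing the 4D light_field array from the previous snippet and an illustrative refocus parameter alpha (the real product's parameterization isn't public here):

    import numpy as np

    def refocus(light_field, alpha):
        """Shift-and-add synthetic refocusing of a light field L[s, t, u, v].

        alpha selects the virtual focal plane: 0 reproduces the focus the
        lens actually had, other values move it nearer or farther.
        """
        n_s, n_t, n_u, n_v = light_field.shape
        out = np.zeros((n_s, n_t))
        u0, v0 = (n_u - 1) / 2.0, (n_v - 1) / 2.0   # centre of the aperture
        for u in range(n_u):
            for v in range(n_v):
                # Each (u, v) slice is a sub-aperture image: the scene as
                # seen through one small region of the main lens.
                sub = light_field[:, :, u, v]
                # Shift it in proportion to its angular offset. Integer
                # shifts via np.roll keep the sketch short; a real
                # implementation would interpolate sub-pixel shifts.
                ds = int(round(alpha * (u - u0)))
                dt = int(round(alpha * (v - v0)))
                out += np.roll(sub, shift=(ds, dt), axis=(0, 1))
        return out / (n_u * n_v)

    # Two different "after the fact" focus choices from the same exposure,
    # using the light_field array built in the previous sketch.
    nearer = refocus(light_field, alpha=1.5)
    farther = refocus(light_field, alpha=-1.5)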
Very, very clever, and scientifically brilliant. However, if you've ever looked into any of the articles available about this technology on astronomy sites, you'll have found that getting these computerized calculations to work involves plugging in some basic assumptions about what the image should look like. I found one site with an interesting comparison of the same astronomical image reprocessed with different assumptions -- the differences were fairly startling.
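For anyone who hasn't run into this, those "assumptions" show up as a prior or regularization term in the reconstruction math: the measurement alone doesn't pin down a unique answer, so you have to tell the solver what kind of image to prefer. A toy Python illustration (not the product's algorithm; deconvolve, psf, and smoothness are just my labels) of how the same data yields different "corrected" images under different assumptions:

    import numpy as np

    def deconvolve(blurred, psf, smoothness):
        """Tikhonov/Wiener-style deconvolution in the Fourier domain.

        `smoothness` encodes an assumption about what the true image
        should look like (how much fine detail to allow back in).
        """
        H = np.fft.fft2(psf, s=blurred.shape)
        B = np.fft.fft2(blurred)
        # Invert the blur where it is strong, damp the inversion where it
        # is weak; the damping is exactly where the assumption enters.
        X = np.conj(H) * B / (np.abs(H) ** 2 + smoothness)
        return np.real(np.fft.ifft2(X))

    # The same measurement, reconstructed under two different assumptions,
    # produces two noticeably different "corrected" images.
    rng = np.random.default_rng(0)
    blurred = rng.random((128, 128))        # stand-in for real data
    psf = np.ones((5, 5)) / 25.0            # assumed blur kernel
    detail_heavy = deconvolve(blurred, psf, smoothness=1e-3)
    conservative = deconvolve(blurred, psf, smoothness=1e-1)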
That's not a big problem in astronomy because astrophysics provides a pretty good theoretical basis for assuming what should be in the corrected image. It's going to be a somewhat bigger problem in terrestrial photography, with the much wider range of possible subjects encountered.
The most likely way that would manifest itself is that one image would look great, while the next might exhibit inexplicable artifacts or eliminate "problem" information.
I predict that this will someday be a great technology in, for example, camera phones -- it'll drastically increase the flexibility of the product without requiring additional mechanical complexity.
But for photography with a regular camera, simply focusing on the point you want sharp is still going to be a better overall solution.