NEX-7 Diffraction Limited for 8x10 Print @ Approx. F/4 .. ouch!

JS and John,

Don't worry. No one understands diffraction. A lot of people know how it works. No one knows why it's that way.

The observation of diffraction is the basis of the quantum-mechanical mystery of wave–particle duality. The camera sensor counts photons as if they are particles, and at the same time diffraction can occur, which can only be explained by the wave nature of light. Every time someone takes a digital photograph with diffraction present, they are duplicating an experiment that forced people to acknowledge that Newtonian physics alone cannot describe the measurable universe.

You raise an interesting point re duality. I hadn't thought of this before, but I wonder if Feynman's explanation of the double-slit experiment doesn't subsume diffraction. That is, if all paths to the screen are taken, with varying probabilities, does this explain the fuzzy edges that constitute diffraction, even through a single aperture? In short, is duality necessary to understand diffraction?

I'm not aware of this question surfacing in the past.

Harry

Edit: Sorry, that's a bit off topic.
 
...Oh, and one more thing: if you double the size of the sensor, you have to dissipate more heat (and heat increases read noise), and you need a bigger, heavier battery.

FF is a niche market. Admittedly, a prestigious niche market, but a niche nonetheless. And it will only become more niche over time.
 
In short, is duality necessary to understand diffraction?

Harry

The short answer: Yes!
 
While diffraction will progressively degrade the image as you stop down, it does so gradually, and it has never bothered me in real-world photos. If I need f/32 for DoF, I stop down to f/32. Even in 8x10 prints, other factors tend to degrade the image more than diffraction. YMMV.
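For anyone curious where a number like the f/4 in the thread title comes from, here's a rough back-of-the-envelope sketch in Python. The 300 dpi print criterion, the 0.55 µm wavelength, and the use of the full Airy-disk diameter are all my assumptions; stricter criteria push the f-number down, looser ones push it up.

# Rough estimate of the f-number at which diffraction alone would start to
# limit an 8x10" print from an APS-C sensor. All criteria are assumptions.

wavelength_um = 0.55          # mid-visible light, micrometres
sensor_width_mm = 23.5        # APS-C (e.g. NEX-7) sensor width
print_width_in = 10.0         # long edge of an 8x10" print
print_dpi = 300.0             # assumed "critically sharp" print resolution

# Smallest detail we want on the print, mapped back onto the sensor.
enlargement = (print_width_in * 25.4) / sensor_width_mm
detail_on_print_um = 25400.0 / print_dpi          # one dot at 300 dpi
detail_on_sensor_um = detail_on_print_um / enlargement

# Airy disk diameter (first zero) is about 2.44 * wavelength * f-number;
# diffraction starts to dominate once that exceeds the detail size.
f_number_limit = detail_on_sensor_um / (2.44 * wavelength_um)

print(f"Enlargement factor:     {enlargement:.1f}x")
print(f"Required sensor detail: {detail_on_sensor_um:.1f} um")
print(f"Diffraction matters around f/{f_number_limit:.1f}")

With those assumptions it comes out near f/6; tighten the detail criterion to the pixel level (the NEX-7's ~3.9 µm pitch) and it drops into the f/3–f/4 range that prompted the thread title. Either way the degradation is gradual, which is exactly the point above.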
 
Harry,

The signal from a digital camera sensor is electrons pushed out of the sensor wells by photons. The signal is completely explained (modeled) by photons that behave as particles.

But those very same photons can also exhibit diffraction effects, which can only be explained by assuming light has the properties of a wave, and this is where the probabilities come into play. Diffraction patterns can be described (and predicted) by probability density functions based on constructive and destructive wave interference.
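Here's a minimal numeric sketch of that idea for the simplest case, a single slit in the far field (the slit width and wavelength below are arbitrary choices). The same sinc² curve is both the classical interference intensity and the probability density for where any individual photon lands:

import numpy as np

# Fraunhofer (far-field) single-slit diffraction. The normalized intensity
# I(theta) = sinc^2(a * sin(theta) / lambda) is simultaneously the classical
# wave-interference result and the probability density for a single photon.

wavelength = 550e-9      # green light, metres (arbitrary choice)
slit_width = 10e-6       # 10 micrometre slit (arbitrary choice)

theta = np.linspace(-0.2, 0.2, 9)               # viewing angles, radians
u = slit_width * np.sin(theta) / wavelength     # dimensionless argument
intensity = np.sinc(u) ** 2                     # np.sinc(x) = sin(pi*x)/(pi*x)

for t, i in zip(theta, intensity):
    print(f"theta = {t:+.3f} rad   relative intensity = {i:.4f}")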

Duality and QM are not necessary to predict diffraction with great precision. Duality is simply present because the same phenomenon (light passing through a lens and interacting with the sensor) has to be explained using two very different models. There was no issue before Einstein et al published their work on the photoelectric effect.

"no-one has ever been able to define the difference between interference and diffraction satisfactorily. It is just a question of usage, and there is no specific, important physical difference between them."

Richard Feynman
 
The camera sensor counts photons as if they are particles

Strictly speaking, it's photoelectrons that are detected (even more specifically, "holes"), not photons. This results in charge accumulation at each photosite, but the sensor doesn't have the ability to count discrete events. Digitization occurs at an A/D converter on-chip, and the amount of noise means that no consumer or professional camera currently available is really a photon-counting device in the strictest sense.

Scientific cameras are available that can count photons (I have one attached to one of the microscopes in my laboratory), but these tend to be bulky and without exception must be operated at relatively low temperatures (often -40° to -80° C). They operate by generating many charges for each initial photoelectron, so that the A/D converter and associated noise filters can infer the detection of discrete photons above the noise floor.
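Here's a toy simulation of that point, with made-up numbers (a very dim 2-photon-per-pixel signal, and two read-noise levels standing in for an ordinary CMOS sensor versus an EM-gain style detector). With a few electrons of read noise, rounding the signal to the nearest electron mis-counts most pixels; with sub-electron effective noise, photon counting starts to work:

import numpy as np

# Toy illustration of why read noise keeps ordinary sensors from counting
# individual photons. All numbers are illustrative, not from any real camera.

rng = np.random.default_rng(0)
mean_photons = 2.0                       # very dim signal, photons per pixel
photons = rng.poisson(mean_photons, 100_000)

for read_noise_e in (3.0, 0.2):          # e- RMS: ordinary CMOS vs EM-gain style
    signal = photons + rng.normal(0.0, read_noise_e, photons.size)
    counted = np.round(signal).clip(min=0)        # naive "photon counting"
    miscount = np.mean(counted != photons)
    print(f"read noise {read_noise_e:.1f} e-:  mis-counted pixels = {miscount:.1%}")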
 
I do admire someone who is confident in his knowledge of quantum theory, even though Feynman said that "no one understands QT."

Harry

Feynman's point was that we don't need to "understand" to calculate and get the right answers. In his view "understanding" in the conventional sense was not a realistic goal, anyway – at least for QED.
 
Even in 8x10 prints, other factors tend to degrade the image more than diffraction.

This is absolutely a key point!

Camera movement and vibration, under real-world conditions, will limit the resolution of any camera to perhaps a couple of thousand lines (think 6 megapixels) unless you're on a big, stable support like a sandbagged tripod, or are using electronic flash (1/10,000 second or so). Even then, achieving higher resolution can be a real challenge.

One of the key advantages of digital sensors is that they allow fast shutter speeds, which goes a long way toward preserving resolution. Mirrorless systems have less vibration, and that helps, too.
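To put rough numbers on that, here's a quick sketch. The 1 deg/s shake rate, 50 mm focal length, and ~3.9 µm pixel pitch are all assumed, illustrative figures, not measurements:

import math

# Back-of-the-envelope: how far does hand shake smear the image during the
# exposure? The shake rate below is an assumed, illustrative figure.

shake_rate_deg_s = 1.0         # assumed angular shake while hand-holding
focal_length_mm = 50.0
pixel_pitch_um = 3.9           # roughly NEX-7-class APS-C pixel

for shutter_s in (1/60, 1/500, 1/4000):
    blur_angle_rad = math.radians(shake_rate_deg_s) * shutter_s
    blur_on_sensor_um = blur_angle_rad * focal_length_mm * 1000.0
    blur_px = blur_on_sensor_um / pixel_pitch_um
    print(f"1/{round(1 / shutter_s)} s: blur ~ {blur_on_sensor_um:.1f} um "
          f"(~{blur_px:.1f} px)")

At 1/60 s the smear is already a few pixels wide; at flash-like durations it essentially vanishes.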

The other thing that utterly destroys resolution is focus error. APS-C and 4/3 systems provide reasonable DoF at wider apertures, reducing the problem of focus error.
 
Bottom line for all of this is that for a handheld camera, APS-C is rapidly reaching the point where (for practical photography) the only thing that FF buys you is shallow DoF. Not light sensitivity, not resolution, not DR.

For a few people the cost of FF (or bigger) will be worthwhile. For most people, it won't.
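For what it's worth, the arithmetic behind that claim is the usual "equivalence" rule: to match framing, depth of field, and total light gathered, multiply focal length and f-number by the crop factor. A tiny sketch with illustrative lens choices (1.5x is the usual APS-C factor):

# Standard "equivalence" arithmetic: multiply focal length and f-number by
# the crop factor to get the full-frame combination with the same framing
# and depth of field. Lens choices below are just illustrative.

crop_factor = 1.5
aps_c_setups = [(35, 1.8), (50, 2.8), (85, 4.0)]   # (focal length mm, f-number)

for focal_mm, f_number in aps_c_setups:
    print(f"APS-C {focal_mm} mm f/{f_number}  ~  "
          f"FF {focal_mm * crop_factor:.0f} mm f/{f_number * crop_factor:.1f}")

So the only look you can't reproduce on APS-C is the one that needs a faster-than-available equivalent aperture, which is the shallow-DoF case above.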
 
I do admire someone who is confident in his knowledge of quantum theory, even though Feynman said that "no one understands QT."

Harry

Harry,

Having confidence in QM is no different from having confidence in gravity. Being competent at QM is profoundly boring. If you have the right model and can perform the calculations, you can predict every experimental result involving coherent phenomena to within the accuracy of the apparatus.

The problem is: Unlike gravity, QM is fundamentally non-intuitive. QM defies common sense. Human brains are not compatible with how QM works. This is what Feynman meant (besides the classic "how" vs. "why" cliche).
 
You never lose resolution, you just get muddy definition. Resolution is the amount of information, and it will always be the same at the sensor. However, different lenses will provide more information. Diffraction is incorrect information, so it appears like less information.

On a small print (5x7) you can't tell that the information density is low, but when it is big it will look poor. Ken Rockwell's tree-in-the-corner tests are a good example.

It is hard to understand QT when you consider that you are dealing with things that both are and are not matter! Do you know what it will do? Probably, or you will find out, and it repeats.

Technically speaking, Leica has lower resolution due to fewer megapixels. But we all know the truth is that the information is more accurate to begin with, so it circumvents the whole situation and you find the overall quality to be better than DSLRs. They have sharper lenses, and their sensors, despite being similar, appear to do more with less.
 
You never lose resolution, you just get muddy definition.

Resolution is a property of the entire imaging chain, from lens to sensor to print or display, as well as of the chain's components.

Since resolution is typically defined in terms of the MTF-50 [simple description of MTF], loss of resolution is *precisely* what happens as diffraction becomes more severe.

Even Walter Mandler and his successors are unable to void the laws of physics.

In my work as a microscopist, we are up against diffraction all the time. It doesn't matter whether you use Leica, Zeiss, Nikon, or Olympus microscope optics: the Abbe resolution limit for a 1.4 NA lens is ~180 nm with all of 'em.
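To put numbers on both points, here is a quick sketch. The 0.50 µm wavelength is my assumption; the formulas are the textbook incoherent circular-aperture MTF and Abbe's d = lambda / (2 NA):

import math

# 1) Diffraction-limited MTF of an ideal lens with a circular aperture:
#    MTF(v) = (2/pi) * (acos(x) - x*sqrt(1 - x^2)),  x = v / v_cutoff,
#    with v_cutoff = 1 / (wavelength * f_number). Find where MTF falls to 0.50.

wavelength_um = 0.50     # assumed blue-green light

def diffraction_mtf(x):
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

def mtf50_lp_per_mm(f_number):
    cutoff_lp_mm = 1.0 / (wavelength_um * 1e-3 * f_number)
    lo, hi = 0.0, 1.0
    for _ in range(60):                      # bisection: MTF is monotone in x
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if diffraction_mtf(mid) > 0.5 else (lo, mid)
    return 0.5 * (lo + hi) * cutoff_lp_mm

for n in (4, 8, 16, 32):
    print(f"f/{n}: diffraction-limited MTF50 ~ {mtf50_lp_per_mm(n):.0f} lp/mm")

# 2) Abbe resolution limit for a microscope objective: d = wavelength / (2 * NA).
na = 1.4
print(f"Abbe limit at NA {na}: {wavelength_um * 1000.0 / (2.0 * na):.0f} nm")

By f/32 the diffraction-limited MTF50 is down around 25 lp/mm, which is where it can start to show in a big print; and the Abbe number is the ~180 nm quoted above.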
 
The loss of resolution on film made sense, but on digital it becomes a misnomer, because the pixel count of the file will always be the same. With film it is impossible to "measure" this, but we call it a loss of resolution not because the film will lack information; it will just lack definition as things become indistinguishable from overlapping fractions of light.

It is just unfortunate that we have overlapping terminology. It is misleading to say print size will be limited by resolution with digital, implying the human eye will see pixelation, which it will not. We really need either better words, or to use simpler words like "it will look like crap because of diffraction."

To lose resolution capability you have to shrink the sensor, or use film that is smaller or incapable of holding as much information. How do you measure that, though? We should measure what percentage of the film or sensor a lens actually uses, like 95% before diffraction sets in under good conditions, with the other 5% indistinguishable from the surrounding pixels/area. Er, I am not sure how to measure it, but base it on the capacity to exhaust resolution capability.
 
Oops... Field and semilog, I think you are saying the same thing about what happens and what comes out, but you are using the word "resolution" very differently.

FWIW, semilog, your use of the word "resolution" is a better match for my understanding of the term.

It's quite nice to have some fellow physics people here!
 
Col, you know what they say, right?

• A Chemist is someone who believes the readings on gauges.
• A Physicist is someone who believes the labels on chemical bottles.
• A Biologist is someone who believes both.

I'm a biologist ;-)
 
semilog, you seem to know your physics as well!

Physics here... it's been a while, but light, diffraction, and optics haven't changed much.

I suggest Bob Atkins for excellent articles on modern optics.
 
"No one has ever been able to define the difference between interference and diffraction satisfactorily. It is just a question of usage, and there is no specific, important physical difference between them."

Richard Feynman

Feynman's statement is actually anticipated in the elementary text of Jenkins and White: "Hence it is proper to say that the whole pattern is an interference pattern. It is just as correct to refer to it as a diffraction pattern..."

And that dates to 1937!

"Duality and QM are not necessary to predict diffraction with great precision."

True enough. You could also say that (almost) all planetary motion can be predicted by Newtonian mechanics with great precision. But, obviously, general relativity does a better job.

I guess I'm suggesting that wave-particle duality may be a time-honored, useful construct, but if Feynman's many-path approach, or something similar, could be extended to include diffraction, we would have a unified, particle-only model of light without the need for "using two very different models." That would be elegant.

Look, this discussion is getting way off topic for most of the participants here. I would love to explore the subject further, but I suggest that best be done offline.

Harry
 