Diffraction and small aperture.

Hamster

I keep reading that small apertures (beyond f/16) reduce the optical performance of a lens.

Can someone explain to me how it works in simple terms?

What about lenses for large format? Are they not affected as much?
 
Basically you are correct. Diffraction is broadly a function of the absolute physical size of the "hole" (i.e. what we call the aperture). Whenever light passes through a lens's aperture, a certain proportion of that light strikes a glancing blow on the edge of the metal iris that forms the adjustable aperture, and this "bends" the path of that light wave so that it strikes the film in a location other than where it would have landed had it not grazed the iris. This is what we call diffraction.

When the aperture is small (f-number large), a higher proportion of the total light reaching the film plane / sensor has been diffracted in this way, so the effect is proportionately larger for physically smaller apertures than for larger ones, where more of the light passes through the middle of the "hole" and is unaffected.

Taking 35mm lenses as an example, f/16 in a wide-angle lens is physically much smaller than f/16 in a long focal length lens. (Take a look at, say, a 28mm lens and a 135mm lens and you will see what I mean.) So a longer lens should suffer less diffraction at its aperture than a wide-angle lens at any given f-stop, simply because the physical hole is bigger.
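The arithmetic behind that comparison is just focal length divided by f-number; a minimal sketch, using the 28mm and 135mm focal lengths mentioned above:

```python
# The physical aperture ("hole") diameter is focal length / f-number,
# so the same f-stop is a physically bigger opening on a longer lens.

def aperture_diameter_mm(focal_length_mm: float, f_number: float) -> float:
    """Physical diameter of the aperture in millimetres."""
    return focal_length_mm / f_number

wide = aperture_diameter_mm(28, 16)   # 28mm lens at f/16 -> 1.75 mm
tele = aperture_diameter_mm(135, 16)  # 135mm lens at f/16 -> ~8.44 mm
print(f"28mm @ f/16:  {wide:.2f} mm")
print(f"135mm @ f/16: {tele:.2f} mm")
```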

(In technical terms, the area of a circle is pi times the radius squared, while the circumference is two times pi times the radius. Thus a small increase in the physical size of the aperture creates a large increase in undiffracted light passing through the center of the "hole" compared with the light that hits its edge. Run an example through these formulas and you will see what I mean: the area of the hole increases with the square of the radius, while the circumference increases only linearly, and the same applies (inversely) to the diffracted light.)
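Running the numbers makes the area-versus-circumference point fall out immediately; a quick sketch:

```python
import math

def area(radius: float) -> float:
    return math.pi * radius ** 2

def circumference(radius: float) -> float:
    return 2 * math.pi * radius

# The area/circumference ratio is radius/2, so doubling the radius
# quadruples the area but only doubles the circumference: proportionately
# far more light passes through the middle than grazes the edge.
for r in (1.0, 2.0, 4.0):
    print(f"r={r}: area/circumference = {area(r) / circumference(r):.2f}")
```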

In larger formats the physical size of any given aperture is much larger than for, say, 35mm film, so it follows that these are less susceptible to diffraction - which in no small measure is why Ansel Adams could get away with using apertures like f/64 when he took photos with a large format camera. f/64 on a large format camera may be physically much bigger than, say, f/16 on a 35mm camera - so, less diffraction. And because he was using a large negative, the image gets blown up less too, so whatever diffraction (and other defects) are present in the negative may be less noticeable in the final printed image. No doubt that helps as well.
 
Last edited:
Hamster, lenses do not focus light to a perfect point but to a small disk known as an Airy disk. The size of this disk is proportional to the f-number: the smaller the aperture (the larger the f-number), the larger the Airy disk.

While an Airy disk is a particular size at a given aperture, images from different formats are not enlarged equally to reach a final print size. A 35mm frame is enlarged 10X for an 8x10 print, whereas a 4x5 inch sheet of film is enlarged only 2X. Because of this, larger formats can use much smaller apertures, as the Airy disks will be enlarged less.

So any lens has two problems: at large apertures there are lens defects like spherical aberration, and at small apertures diffraction becomes a factor. The "best" aperture is somewhere in between (a good guess would be two stops down from wide open).

Don't knock diffraction. Without it we would not have images in the first place.
 
It depends partly on the sensor or film diagonal, as well as on a constant known as the z-constant, which is defined differently by different vendors (Zeiss: z = 1500; Canon: z = 1443).

It also depends on the wavelength of the light. Many small-sensor cameras become diffraction limited in the reds at less than f/4.0.

Here's a good read: http://dpanswers.com/content/tech_crop.php#dif
 
Non-laser light (ie normal light) does not proceed in a straight line, but spreads out. Lenses are imperfect attempts to force the light into behaving as if it did go in a straight line for a short distance. However, you can only force it so far, and then it gets angry.
 
Non-laser light (ie normal light) does not proceed in a straight line, but spreads out. Lenses are imperfect attempts to force the light into behaving as if it did go in a straight line for a short distance. However, you can only force it so far, and then it gets angry.

Yer darn tootin it does.

Check out this (relating to interference rather than diffraction):

"In classical optics, light travels over all allowed paths and their interference results in Fermat's principle. Similarly, in QED, light (or any other particle like an electron or a proton) passes over every possible path allowed by apertures or lenses. The observer (at a particular location) simply detects the mathematical result of all wave functions added up, as a sum of all line integrals. For other interpretations, paths are viewed as non physical, mathematical constructs that are equivalent to other, possibly infinite, sets of mathematical expansions. Similar to the paths of nonrelativistic Quantum mechanics, the different configuration contributions to the evolution of the Quantum field describing light do not necessarily fulfill the classical equations of motion. So according to the path formalism of QED, one could say light can go slower or faster than c, but will travel at velocity c on average[4]."

A bit too deep for me though.

Maybe this


http://en.wikipedia.org/wiki/Diffraction
 
Yes, that's what I mean. Simple desktop experiments have shown that light passing by a solid object diffracts around it in surprising ways that you can see clearly if you let the light strike a surface behind the object. Explained by quantum mechanics and yet easily demonstrable, it is kind of fun. But saying that the light wavicles get angry about covers it. Funnier, too.
 
This is also the principle of the "pin-speck" camera, which uses a small, opaque speck, suspended within the center of a larger aperture, to diffract light to a common focus point. Like an opaque pinhole, of sorts.

*joe*
 
It's funny, diffraction has always been portrayed as the big bad bogeyman of photography. I do believe that large-format lenses are less susceptible to loss of resolution than smaller 35mm glass. In practical terms I've taken hundreds of shots at f/16-f/64 with excellent results. Sometimes, when shooting sunrises/sunsets, there is no other way to get the shot unless you have ND filters.



Taken with Nikkor 800 5.6 IF ED AIS and TC-301 on D3 @ f45 1/8000 ISO 100
 
It's a balance. While diffraction will definitely degrade images, the advantages of using a small aperture sometimes outweigh the disadvantages. There are a lot of variables in the imaging process. Limited DOF from using a middle aperture could degrade the image more than the diffraction from using a very small aperture, for example.
 
A shorter flange distance also helps with the diffraction problem by giving the light rays less distance over which to diverge.
 
Another nice photo of the Sun there.

It doesn't suck at all, but it is likely that the reds were diffraction limited. Red light (at the long end of the visible spectrum, ~700nm) reaches its diffraction limit at a much wider aperture than shorter wavelengths do.

This shot, technically, "should suck." It was shot on a Canon 1D Mark IIn at the extended ISO of 50 (nominal is 100, 50/3200 are specially activated via custom function), the aperture was f/10, and I combined a 2x TC with a fast zoom lens! I don't know, looks okay to me:
 
Wow, lots of deep stuff indeed! I like Dr. Quantum, but he left me wondering HOW observing the electrons affected their path. Does the tool used to 'observe' measure by sending out a 'signal' or wave? My brain hurts.

My brain hurts too. That's just the thing - no one knows, even the boffins!

Even more spooky, they did some experiments called the delayed choice experiment, where the same thing was done but, using some clever technology, the actual observation of each electron's path was not made until after it had passed through one or other of the slits - and this changed nothing. Observe, and it behaves like a particle. Choose not to observe, and it behaves like a wave. The experiment still worked. Weird stuff. So how the hell does the blasted thing work backwards in time?

And I understand that although the delay was only nanoseconds, there is no reason why it could not have been days or months or longer - in theory you could set up an experiment where the observation was made light-years away and it would still work. (Don't know how you would actually do that, of course.) And it does not much matter whether we are talking about electrons or photons (light); it all behaves the same.

God I love this stuff. Only wish I understood it. Oops there goes my brain hurting again.
 
Wow, lots of deep stuff indeed! I like Dr. Quantum, but he left me wondering HOW observing the electrons affected their path. Does the tool used to 'observe' measure by sending out a 'signal' or wave? My brain hurts.

It is weird, but it does seem to work. They have even shown that they can split a single photon into two, send one in one direction and the other in another. Then they observe one and take a measurement, and since observing affects that photon, it affects the other too. But here's the kicker - it seems to affect the other no matter how far away it is, and instantly: theoretically faster than light, which nothing should be able to exceed. It even has practical applications in secure communications - you can tell if someone listens in on a fiber-optic line, because having been 'observed', a duplicate will also be changed.

I do not claim to understand it or to be any kind of expert. I just like to read about it.

If you want some more fun, look up steganography. Hiding encrypted text messages inside graphic files. Fun for the whole family.
 