hlockwood
Well-known
Harry,
Having confidence in QM is no different from having confidence in gravity. Being competent at QM is profoundly boring. If you have the right model and can perform the calculations, you can predict every experimental result involving coherent phenomena to within the accuracy of the apparatus.
The problem is: unlike gravity, QM is fundamentally non-intuitive. QM defies common sense. Human brains are not compatible with how QM works. This is what Feynman meant (besides the classic "how" vs. "why" cliché).
A not-so-minor point: I said "confidence in his knowledge", not "confidence in QM". I have the latter but not the former.
Just for the hell of it, I'll throw in my model of QT knowledge.
First level, for the newly exposed: FUD - fear, uncertainty, doubt.
Second level: "I don't really understand it, but it works beautifully. So, I'm satisfied."
Third level: Arrogance. "I understand it."
Fourth level: The highest level. "I don't understand it, and no one else does either." This is where you'll find people like Feynman.
Beating of dead horse finished.
Harry
j j
Well-known
Doesn't all this stem from the usual bottom-of-the-internet misunderstanding that diffraction-limited means unusable? I have been hearing the same old guff about pixel density for years (diffraction... noise... dynamic range... the new sensors are going to be terrible... the new sensors are terrible...). There is a germ of truth in it somewhere, yet up to now more pixels have resulted in more detail, time after time. There is nothing to suggest that the new NEX will be any different.
kossi008
Photon Counter
I see that I am not the only one who is not comfortable with the way the diffraction limit for an 8x10 print was obtained.
Thanks for reducing the entropy on this one.
kossi008
Photon Counter
Here is my simple math: I preordered the NEX-7. When it comes in, if for some reason I don't like it, I have 14 days (or 30) to send it back for a 100% refund. So 14 or 30 = 100.
Limited brain capacity requires me to stick to simple math.
Right you are. And simple math is way better than faulty or (in this case) irrelevant math.
Pherdinand
the snow must go on
I do admire someone who is confident in his knowledge of quantum theory, even though Feynman said that "no one understands QT."
Harry
Uncertainty sits at the border between knowledge and the unknown. See the attached drawing for a simple illustration of what you are talking about.
PS: I think this comes from Archimedes or some other old Greek philosopher.
hlockwood
Well-known
Feynman: the final word on this topic!
http://www.amusingplanet.com/2009/12/sketches-and-paintings-by-richard.html
Stunning! Thanks for the link.
Did that guy do anything badly?
Harry
Pherdinand
the snow must go on
Strictly speaking, it's not holes but hole-electron pairs generated by the incoming photon. But you knew that.
Harry
Even more strictly speaking, electrons and holes are also not (only) particles, lol. But does this matter here? I guess it doesn't.
LeicaFoReVer
Addicted to Rangefinders
I see that I am not the only one who is not comfortable with the way the diffraction limit for an 8x10 print was obtained. So let me dig a bit into that.
To estimate the point at which diffraction becomes visible, I decided (based on THIS online calculator) that we hit the diffraction limit once the circle of confusion (CoC) reaches the size of two pixels (see the above link for details).
First of all, let's check at which f/stop the output from a given sensor will be diffraction limited. This is easy to follow, as the only variable here is the pixel size. As the comparison above was between the X100 and the NEX-7, let's have a look at these two:
X100:
pixel size: 5.9 micrometers (µm)
diffraction limited at: ~ f/8
CoC: ~ 12 µm
NEX-7:
pixel size: 4.2 µm
diffraction limited at: ~ f/4
CoC: ~ 8 µm
So far so good. NEX-7 has smaller pixels so it gets diffraction limited at larger apertures - no surprise here.
Now, however, we move to prints. We want to make an 8x10" print, say at 300 dpi. For that we need at least 7.2 Mpixels, and both cameras offer more than that. In other words, we need at least 7.2 effective Mpixels to get a sharp 8x10" print. So we can just compute the size of the circle of confusion for a diffraction-limited 7.2-Mpixel APS-C sensor. This is about 19 µm and is reached at about f/13.
And that is the f/stop at which ANY camera with an APS-C sensor of at least 7.2 Mpixels will be diffraction limited for an 8x10" print.
This limiting f/stop value would only change if:
- the sensor were larger than APS-C -> the limiting aperture would be smaller (a higher f-number)
- the sensor were smaller than APS-C -> the limiting aperture would be larger (a lower f-number)
- the sensor had fewer than 7.2 Mpixels -> the limiting aperture would be smaller (a higher f-number)
Of course, the computed f/stops above would shift if one considered a different definition of the diffraction limit (the limiting CoC size); see the sketch below.
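For anyone who wants to replay the arithmetic, here is a minimal Python sketch of the calculation above. It assumes λ = 0.55 µm and takes the Airy disk diameter (≈ 2.44·λ·N) as the diffraction spot; since the wavelength and the criterion are both choices, its f-numbers will differ somewhat from the online calculator's values.
Code:
# Sketch of the diffraction-limit estimate above. Assumptions: lambda = 0.55 um,
# diffraction spot = Airy disk diameter = 2.44 * lambda * N, and "diffraction
# limited" once that spot spans two pixels (the CoC criterion from the post).

WAVELENGTH_UM = 0.55  # assumed mid-visible wavelength

def limiting_f_number(pixel_pitch_um):
    """f-number at which the Airy disk diameter equals two pixel pitches."""
    coc_um = 2.0 * pixel_pitch_um            # CoC = two pixels
    return coc_um / (2.44 * WAVELENGTH_UM)   # solve 2.44 * lambda * N = CoC

# Pixels needed for a sharp 8x10" print at 300 dpi:
print((8 * 300) * (10 * 300) / 1e6, "Mpixels")  # -> 7.2

# Pixel pitch of a 7.2 Mpixel APS-C sensor (assuming 23.5 x 15.6 mm);
# dividing mm^2 by Mpixels and taking the square root gives the pitch in um:
apsc_pitch_um = (23.5 * 15.6 / 7.2) ** 0.5   # ~7.1 um

for name, pitch_um in [("X100", 5.9), ("NEX-7", 4.2), ("7.2 MP APS-C", apsc_pitch_um)]:
    print(f"{name}: diffraction limited near f/{limiting_f_number(pitch_um):.1f}")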
I absolutely agree with that. I used a Canon 20D (about 8 Mpixels) professionally to photograph horse shows, and the 8x10 prints I sold looked awesome...
semilog
curmudgeonly optimist
Strictly speaking, it's not holes but hole-electron pairs generated by the incoming photon. But you knew that.
Harry
I knew it a decade ago, when I was first trying to learn about solid-state sensors.
semilog
curmudgeonly optimist
Stunning! Thanks for the link.
Did that guy do anything badly?
"He is a second Dirac, only this time human."
semilog
curmudgeonly optimist
Doesn't all this stem from the usual bottom of the internet misunderstanding that diffraction limited means unusable? I have been hearing the same old guff about pixel density for years (diffraction... noise... dynamic range... the new sensors are going to be terrible... the new sensors are terrible...). There is a germ of truth in it somewhere, yet up to now more pixels has resulted in more detail time after time. There is nothing to suggest that the new NEX will be any different.
I agree with this. Most any FF or APS-C or 4/3 camera is going to do better in terms of resolution at f/5.6 than at f/22 – IF the stuff you want in focus is actually in focus. If you need greater DoF, you have bigger concerns than softening due to diffraction – you're going to need to stop down.
And as I've pointed out above, the typical calculations that you find on the internet underestimate how many pixels you need to accurately sample a typical high-end camera lens. The way forward is shown by the iPhone, with its 2µm pixel pitch. Expect DSLR pixel pitch to also fall to ~2µm, and pixel count to eventually increase to around 100 Mpix for APS-C and 200 Mpix for FF, before the megapixel race is well and truly over.
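To put a number on "how many pixels you need": by the Nyquist criterion, recording detail at R line pairs per mm needs a pixel pitch of at most 1/(2R) mm. A tiny sketch, where the 150 lp/mm figure is an illustrative assumption for a high-end lens rather than a measured value:
Code:
# Nyquist sketch for the sampling claim above. 150 lp/mm is an assumed,
# illustrative figure for a high-end lens, not a measurement.
def max_pitch_um(lp_per_mm):
    """Largest pixel pitch (um) that still samples lp_per_mm at Nyquist."""
    return 1000.0 / (2.0 * lp_per_mm)

print(max_pitch_um(150))  # ~3.3 um; finer detail pushes pitch toward ~2 um
And Bayer color sampling arguably needs a finer pitch still per color channel, which points in the direction of the ~2 µm figure.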
Meanwhile, I'll continue to be happy with the images from my Olympus E-500 (8 Mpix) and E-620 (12 Mpix).
Even with those cameras, image clarity and enlargement are usually not limited by sensor density, but rather by focus accuracy and camera movement.
hlockwood
Well-known
And as I've pointed out above, the typical calculations that you find on the internet underestimate how many pixels you need to accurately sample a typical high-end camera lens. The way forward is shown by the iPhone, with its 2µm pixel pitch. Expect DSLR pixel pitch to also fall to ~2µm, and pixel count to eventually increase to around 100 Mpix for APS-C and 200 Mpix for FF, before the megapixel race is well and truly over.
Such a small pitch implies small pixel dimension, no? If so, what happens to shot noise as pixels get smaller?
Harry
semilog
curmudgeonly optimist
Such a small pitch implies small pixel dimension, no? If so, what happens to shot noise as pixels get smaller?
Harry
As you know, all else being equal, the relative shot noise (counting noise: the noise on n detected photons is √n, so noise/signal = √n/n = 1/√n) increases as the photosite size shrinks.
But all else is not equal, so it's just not a simple problem. For example, the iPhone uses a back-thinned (a.k.a. back-illuminated) sensor with a substantially higher quantum efficiency than a conventional sensor with the same photosite size. Consequently, it has proportionally less shot noise.
No currently-available large-sensor camera (4/3 to medium format) uses a back-thinned sensor, so far as I know.
And there are ways to deal with noise.
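To make that scaling concrete, here is a small sketch; the photon flux and quantum efficiency (QE) numbers are illustrative assumptions, not measured values:
Code:
# Relative shot noise is 1/sqrt(n) for n detected photoelectrons, so it grows
# as the photosite shrinks; higher quantum efficiency (QE) claws some back.
import math

def relative_shot_noise(flux_per_um2, pitch_um, qe):
    """Noise/signal ratio for a square photosite of the given pitch."""
    n = flux_per_um2 * pitch_um ** 2 * qe  # detected photoelectrons
    return 1.0 / math.sqrt(n)

FLUX = 100.0  # assumed photons per square micrometer during the exposure
print(relative_shot_noise(FLUX, 5.9, 0.45))  # large pixel, conventional QE
print(relative_shot_noise(FLUX, 2.0, 0.80))  # 2 um pixel, back-thinned QE
The higher QE recovers part, but not all, of what the smaller photosite gives up - which is the "all else is not equal" point.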
uhoh7
Veteran
Though a silly premise, interesting discussion.
I'm doing a lot of infinity landscapes at the moment, and have noticed a lot of my glass is sharpest from f/4 to f/5.6.
e.g. Leica 50/2 and CV 35/2.5 - also Tele-Elmarit 90/2.8
I keep reaching for that good ol' f/8 - but maybe this is a good reason to stay faster than I otherwise might.
awaw
Member
Theory vs reality
Math is important, but the real image output is what counts. Let's wait and see what Sony has achieved. I am eagerly waiting for the test results of a production model with the final firmware. Have a nice weekend!
vidgamer
Established
I agree with Matus that 8x10 is just not large enough to worry about diffraction with APS-C. If you're going to fix the print size like that, then it doesn't matter how many pixels you pack in. When you view a 24mp photo at 100%, that's quite a lot of magnification, so of course you'll see diffraction sooner than with the 14mp or 16mp sensors, but that many pixels are wasted on 8x10! And in any case you're no worse off than with a 14mp sensor; you just might notice diffraction sooner.
Here's my experience with diffraction. I had read many complaints about diffraction causing problems for people using their DSC-V3 (which has a 1/1.7" sensor, IIRC), at least when the camera used f8. But why would the camera choose f8 on purpose if it were affected by diffraction? Reviewing photos from one trip where I had used P mode, sure enough, it occasionally used f8. Looking at the details (it's just a 7mp sensor), it looks like a "blur" has been applied to the photo, but only when you zoom in. Printed small, you'd never notice it, and the photo is not "ruined", as many claimed on the internet/dpr.
I dragged such a photo into an image processing program that has a "deconvolution" function, and it cleaned right up, as if there were no diffraction at all. So, I'm a big believer in deconvolution to fix certain problems -- it seems like magic, but sometimes it works wonders. You can do some complicated functions with deconvolution (to try to remove motion blur, for example), but for something with an even blur like this, I think a standard function would work fine.
So, I say, bring on the diffraction. If you accidentally get more blurring than you wanted, try deconvolution. But it is good to know the point at which the CoC starts to exceed your pixel size, so you can at least know whether, say, f8 might cause you some concern.
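For the curious, here is a minimal Richardson-Lucy deconvolution sketch, one standard algorithm for undoing this kind of even blur. The Gaussian PSF is a stand-in assumption for the true diffraction (Airy) pattern, and dedicated tools do this better:
Code:
# Minimal Richardson-Lucy deconvolution. Assumption: the blur is modeled as a
# Gaussian PSF (a stand-in for the true Airy pattern of diffraction).
import numpy as np
from scipy.ndimage import gaussian_filter

def richardson_lucy(blurred, sigma, iters=30):
    """Sharpen a float image in [0, 1], assuming a Gaussian PSF of width sigma."""
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(iters):
        reblurred = gaussian_filter(estimate, sigma)
        ratio = blurred / np.maximum(reblurred, 1e-12)
        # a Gaussian PSF is symmetric, so correlation equals convolution here
        estimate = estimate * gaussian_filter(ratio, sigma)
    return estimate

# usage: deblurred = richardson_lucy(blurred_float_image, sigma=1.5)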
About backlit sensors, I've read that with lower pixel density they don't make sense, as they don't buy you much. They help with small sensors because the pixel density is high enough that the circuitry takes up a significant amount of real estate. Maybe if APS-C sensors get dense enough it'll make sense, but I didn't think they were near that point yet. Although, another side effect might be that backlit sensors would handle the steep ray angles from rangefinder lenses better....
semilog
curmudgeonly optimist
About backlit sensors, I've read that with lower pixel density they don't make sense, as they don't buy you much. They help with small sensors because the pixel density is high enough that the circuitry takes up a significant amount of real estate.
That might be the claim, but in my real-world experience it is wrong, wrong, wrong.
The best high-sensitivity sensors currently available for optical microscopy are back-thinned EM-CCDs (electron-multiplying charge-coupled devices).
These sensors achieve better than 90% quantum efficiency with read noise of less than one photoelectron per pixel.
Photosite pitch? 16 x 16 (sixteen-by-sixteen) µm.
The back-thinned EMCCD sensors are considerably better than their non back-thinned equivalents. I know this because I've done extensive side-by-side testing, as have many of my colleagues. There's just no contest. This chip, made by Marconi and packaged into cameras by at least three different companies (I think Andor currently has the best implementation), is the only one seriously worth considering for single-molecule fluorescence experiments, and it's been that way for at least 6 years.
IMO, the real reason you're not seeing bigger back-thinned sensors in consumer cameras is that they are mechanically fragile and have high defect rates. Fine for yield if you have a lot of little sensors on the wafer; catastrophic if you only have a few.
Essentially all of the best cameras for astronomy and optical microscopy are back-thinned, and they generally have larger photosites. But the sensors are both mechanically fragile and rather expensive. The 512 x 512 (0.25 Mpix) camera linked to above is in the $30,000 range.
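As a rough illustration of why those specs matter in low light, the standard per-pixel SNR model (ignoring dark current) is SNR = QE·n / sqrt(QE·n + r²), where n is the incident photon count and r the read noise. The conventional-sensor numbers below are assumptions for comparison, not specs:
Code:
# Per-pixel SNR with shot noise plus read noise (dark current ignored).
# The conventional-sensor QE and read noise are illustrative assumptions.
import math

def snr(photons, qe, read_noise_e):
    signal = qe * photons                           # detected photoelectrons
    return signal / math.sqrt(signal + read_noise_e ** 2)

print(snr(20, 0.90, 0.5))  # back-thinned EMCCD: ~90% QE, <1 e- read noise
print(snr(20, 0.45, 6.0))  # assumed conventional sensor, same photon count
At 20 photons per pixel the back-thinned EMCCD comes out roughly three times ahead, which is why there's "just no contest" for single-molecule work.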
Gabriel M.A.
My Red Dot Glows For You
I am amazed - are these cameras even available? And people would make a decision of some sort on this type of thing?
I suspect the "who cares!"-ists would be the very first to not give any of this more than a few seconds of thought.
ampguy
Veteran
same here
Though a silly premise, interesting discussion.
I'm doing a lot of infinity landscapes at the moment, and have noticed a lot of my glass is sharpest from f/4 to f/5.6.
e.g. Leica 50/2 and CV 35/2.5 - also Tele-Elmarit 90/2.8
I keep reaching for that good ol' f/8 - but maybe this is a good reason to stay faster than I otherwise might.
Check old Puts and Reid reviews. Chances are good that those lenses you mention peak optically before f/8, regardless of the diffraction issues.
hlockwood
Well-known
As you know, all else being equal, the relative shot noise (counting noise: the noise on n detected photons is √n, so noise/signal = √n/n = 1/√n) increases as the photosite size shrinks.
But all else is not equal, so it's just not a simple problem. For example, the iPhone uses a back-thinned (a.k.a. back-illuminated) sensor with a substantially higher quantum efficiency than a conventional sensor with the same photosite size. Consequently, it has proportionally less shot noise.
No currently-available large-sensor camera (4/3 to medium format) uses a back-thinned sensor, so far as I know.
And there are ways to deal with noise.
I'm going to have to come back to this discussion when I return to civilization. For the next 10 days or so, I'll be on this dial-up connection; it is painfully slow.
Meanwhile: I think one should start with the simplifying assumption of unity quantum efficiency in the detector (sensor) and consider the noise problem from there.
Your example above appears to be more signal, not less noise - therefore a higher signal-to-noise ratio (SNR). Always good, of course.
As the photosite decreases in size, all else being equal, fewer electron excitations occur per unit time. In parallel, random events - shot noise - compete, i.e., add to the output, whatever that might be. The only way to get around the problem is to expose (count) for a longer time.
(For example, this is precisely the problem in e-beam lithography: long exposures to overcome shot noise mean low wafer throughput. Not acceptable to the semiconductor industry.)
In our photography world, the requisite longer exposure would translate to lowering the available ISO range.
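A sketch of that trade-off, with an assumed photon rate: against shot noise alone, SNR = √n, so hitting a target SNR requires n = SNR² detected photons, and the required exposure grows as the photosite shrinks.
Code:
# Exposure needed to reach a target SNR against shot noise alone:
# SNR = sqrt(n)  =>  n = SNR^2 photons must be collected.
def exposure_seconds(target_snr, photons_per_um2_per_s, pitch_um):
    rate = photons_per_um2_per_s * pitch_um ** 2  # photons per second
    return target_snr ** 2 / rate

RATE = 1000.0  # assumed photons per square micrometer per second
print(exposure_seconds(40, RATE, 5.9))  # larger photosite: ~0.05 s
print(exposure_seconds(40, RATE, 2.0))  # 2 um photosite: ~0.4 s, i.e. lower ISO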
One way to approach this problem is to take it to the extreme. Start with the constant (shot) noise current, with no signal. Now begin to add signal electrons to the total current (or power). If the pixel is made larger, more signal current will be available more quickly.
Yes, in some environments there are ways to deal with noise, but I thought we were talking about digital photography as it is implemented in cameras that are not strictly laboratory devices.
Harry