rxmd
May contain traces of nut
Here is an interesting piece which, among other things, points out the advantage (and, with wide-angle lenses, the necessity) of the CCD sensor in digital M Leicas as compared to the CMOS sensors in many DSLRs.
Firstly, I think a fair share of the light loss exhibited in the graphs can be attributed to manufacturers mislabeling their lenses - it's been known for ages that when you buy an f/1.2 lens, you may actually get a T/1.3 or even an f/1.3 lens. Lens tests have regularly shown this since the 1950s, and it's a bit laughable when people rediscover that an f/1.2 lens is a third of a stop slower, talk about "T-stop loss at the sensor" and never measure what T-stop the lens actually has. (DxO doesn't do that; all they do is look at RAW files.)
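To put numbers on the f-stop vs. T-stop distinction: the T-stop is just the f-number divided by the square root of the lens's transmittance. A minimal sketch in Python - the 85% transmittance figure is an illustrative assumption, not a measured value for any particular lens:

```python
import math

def t_stop(f_number: float, transmittance: float) -> float:
    """T-stop = f-number / sqrt(transmittance).

    A lens marked f/1.2 that only transmits ~85% of the light it
    gathers behaves photometrically like a T/1.3 lens - no sensor
    involved at all.
    """
    return f_number / math.sqrt(transmittance)

# Hypothetical 85% transmittance for a fast, many-element design:
print(round(t_stop(1.2, 0.85), 2))  # ≈ 1.3
```

Which is exactly the "f/1.2 lens that's really T/1.3" situation described above, before a single photon reaches the sensor.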
The article also fails to explain why T-stop losses at the sensor should be dependent on the aperture set in the lens at all. Why should there be more light loss at the sensor at f/1.4 than at f/5.6? And if light loss at the sensor is the same across apertures, his whole argument about fast wide lenses being unnecessary breaks down, because slow tele lenses have the same problem.
Also note that his argument about T-stops and depth of field is completely meaningless. Depth of field is determined by the ratio of focal length to the diameter of the projected aperture. You can have an f/1.2 lens with a T-stop of 8 and depth of field will still be that of an f/1.2 lens. The argument that depth of field is different because "marginal light rays don't hit the sensor" is comical at best, because that solely depends on sensor size, which he doesn't talk about at all.
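The point that transmission never enters the depth-of-field geometry can be made concrete with the standard hyperfocal formulas. A minimal sketch, using an assumed 0.03 mm circle of confusion for full-frame 35mm; note that the T-stop simply isn't a parameter:

```python
def hyperfocal(focal_mm: float, f_number: float, coc_mm: float = 0.03) -> float:
    """Hyperfocal distance in mm. Only the geometric f-number and the
    circle of confusion (i.e. sensor size / enlargement) appear here;
    lens transmission (T-stop) never enters the geometry."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

def dof_limits(focal_mm: float, f_number: float, subject_mm: float,
               coc_mm: float = 0.03):
    """Near and far limits of acceptable sharpness, in mm."""
    h = hyperfocal(focal_mm, f_number, coc_mm)
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = (subject_mm * (h - focal_mm) / (h - subject_mm)
           if subject_mm < h else float("inf"))
    return near, far

# 50mm at f/1.2, subject at 2 m: the DOF is identical whether the lens
# transmits like T/1.3 or T/8, because transmission is not an input.
near, far = dof_limits(50, 1.2, 2000)
print(round(near), round(far))
```

Run it with any "T-stop" you like: you can't, because there's nowhere to put it.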
Finally, the article says absolutely nothing about CCD vs. CMOS sensors. A fair share of the cameras in the comparison actually have CCD sensors in them. Nikon's DSLRs before the D300 mostly used CCD sensors, and the D300's CMOS sensor exhibits the same "light loss" as the D200's CCD sensor. In short, what the article describes is completely independent of sensor technology.
semilog
curmudgeonly optimist
It's pretty obvious that at least a major fraction of the T stop losses that DxO is seeing are indeed sensor-dependent and NOT (as you suppose) due to lens T stop values, because (1) presumably the same Canon f/1.2 lens is used across all the bodies tested, and they vary considerably in T stop at the sensor, and (2) there is a strong correlation with pixel size, with smaller pixels showing greater losses at wider apertures – precisely as we'd expect if angle-of-incidence-dependent shading is an issue.
Your point about CMOS vs. CCD is one that I made above, and is correct.
antiquark
Derek Ross
The article also fails to explain why T-stop losses at the sensor should be dependent on the aperture set in the lens at all. Why should there be more light loss at the sensor at f/1.4 than at f/5.6?
The idea is, with a larger aperture, more light is arriving at the sensor from the edges of the aperture at a glancing angle. This would cause a reduction in sensitivity because sensors need the light to be coming in perpendicular to the sensor.
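The geometry behind that idea is easy to quantify: for an on-axis point, the marginal rays arrive at a half-angle whose tangent is 1/(2N). A quick sketch, assuming a lens focused at infinity and ignoring pupil magnification:

```python
import math

def marginal_ray_angle(f_number: float) -> float:
    """Half-angle (in degrees) of the light cone reaching an on-axis
    point on the sensor: tan(theta) = 1 / (2 * N)."""
    return math.degrees(math.atan(1 / (2 * f_number)))

# At f/1.4 the marginal rays arrive almost 20 degrees off-perpendicular;
# at f/5.6 the entire cone stays within about 5 degrees.
print(round(marginal_ray_angle(1.4), 1))   # ≈ 19.7
print(round(marginal_ray_angle(5.6), 1))   # ≈ 5.1
```

So if a photosite's sensitivity drops off with angle of incidence, a fast lens wide open would indeed lose light that a slow lens never even sends its way.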
Other than that, I think the article is an attempt to make a mountain out of a molehill!
rxmd
May contain traces of nut
The idea is, with a larger aperture, more light is arriving at the sensor from the edges of the aperture at a glancing angle. This would cause a reduction in sensitivity because sensors need the light to be coming in perpendicular to the sensor.
That phenomenon is well-known as well. It's called "vignetting". (If anything, it should lead to better results from smaller sensors.)
semilog
curmudgeonly optimist
Other than that, I think the article is an attempt to make a mountain out of a molehill!
The difference in price (and size) between an f/1.4 lens and an f/1.2 lens can be large indeed. If, as seems possible, many of these sensors deliver (1) identical sensitivity at f/1.4 and f/1.2 [or worse] and (2) the same DOF at f/1.4 and f/1.2, while the cameras disguise this fact by upping the ISO and then modifying the RAW data so that it looks as though you're really getting an extra half stop when you're not... I'd say that's a significant issue.
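For reference, the nominal gap between f/1.4 and f/1.2 is a bit under half a stop, which is exactly the kind of margin a silent ISO bump could absorb. A quick check:

```python
import math

def stops_between(n1: float, n2: float) -> float:
    """Exposure difference in stops between two f-numbers:
    EV = log2((n2 / n1)^2), since light gathered scales with
    the square of the aperture ratio."""
    return math.log2((n2 / n1) ** 2)

# f/1.2 nominally gathers about 0.44 stop more light than f/1.4 --
# "half a stop" in marketing terms.
print(round(stops_between(1.2, 1.4), 2))  # ≈ 0.44
```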
semilog
curmudgeonly optimist
That phenomenon is well-known as well. It's called "vignetting". (If anything, it should lead to better results from smaller sensors.)
Wrong. What they are talking about here will be true at the center, as well as at the corners, though perhaps worse at the corners.
antiquark
Derek Ross
That phenomenon is well-known as well. It's called "vignetting". (If anything, it should lead to better results from smaller sensors.)
Isn't vignetting a lens effect? I.e., even if you're using film, some lenses will still produce vignetting? I think the article was talking about an effect that would be seen even if the lens was free of vignetting.
antiquark
Derek Ross
The difference in price (and size) between an f/1.4 lens and an f/1.2 lens can be large indeed. If, as seems possible, many of these sensors deliver (1) identical sensitivity at f/1.4 and f/1.2 [or worse] and (2) the same DOF at f/1.4 and f/1.2, while the cameras disguise this fact by upping the ISO and then modifying the RAW data so that it looks as though you're really getting an extra half stop when you're not... I'd say that's a significant issue.
Based on some calculations, I think you're still ahead if you choose a faster lens. That is, the "secret" ISO increase is still less than if you chose a slower lens.
Also, the DOF argument seems bogus to me. If an f/1.2 lens had the same DOF as an f/1.4 lens, then the bokeh blobs would be the same size from both lenses. That would be a really simple test to run. (I can't, because I don't have an f/1.2 lens.)
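That test has a clear prediction to compare against: out-of-focus highlight size scales with the entrance pupil diameter, which is just focal length over f-number. A sketch, using a hypothetical 50mm lens:

```python
def entrance_pupil_mm(focal_mm: float, f_number: float) -> float:
    """Entrance pupil diameter in mm. The size of out-of-focus
    highlights ('bokeh blobs') scales with this, so comparing blob
    sizes is a direct check on the effective aperture."""
    return focal_mm / f_number

# For a 50mm lens: if the sensor really treated f/1.2 as f/1.4,
# the blobs would shrink by the same ~17% that the pupils differ.
d12 = entrance_pupil_mm(50, 1.2)   # ≈ 41.7 mm
d14 = entrance_pupil_mm(50, 1.4)   # ≈ 35.7 mm
print(round(d12 / d14, 2))         # ≈ 1.17
```

If the blobs measure the same at both settings, the extra aperture isn't reaching the image; if they differ by that ratio, the DOF claim collapses.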
semilog
curmudgeonly optimist
You have it: a big problem. Some designers tried pointing the pixel sites off axis, at the cost of resolution. This critical-angle thing is the cause of the difference in DOF between FX sensors and film. Those little silver rocks accept photons over a greater range of angles, producing a greater illusion of depth.
I don't think that's right, PKR. Accepting photons from shallower angles of incidence should give you larger circles of confusion and less DOF. Excluding those photons should mimic a lens with a smaller exit pupil.
The article says that DxO is now doing critical focus measurements to test this hypothesis.
semilog
curmudgeonly optimist
Based on some calculations, I think you're still ahead if you choose a faster lens. That is, the "secret" ISO increase is still less than if you chose a slower lens.
How much would you pay for a quarter of a stop?
Also, the DOF argument seems bogus to me. If an f/1.2 lens had the same DOF as an f/1.4 lens, then the bokeh blobs would be the same size from both lenses. That would be a really simple test to run. (I can't, because I don't have an f/1.2 lens.)
As mentioned above, the article says that DxO are doing these tests in a serious way.
rxmd
May contain traces of nut
Your point about CMOS vs. CCD is one that I made above, and is correct.
Of course, you're right that they should all be affected the same. I just thought it was worth pointing out that there is a fair number of CCD sensors in the test (I didn't see you mentioning that; maybe I overlooked it). The test actually shows that there is no correlation at all between CCD vs. CMOS on the one hand and "T-stop" (rather: readout) losses on the other.
It's pretty obvious that at least a major fraction of the T stop losses that DxO is seeing are indeed sensor-dependent and NOT (as you suppose) due to lens T stop values, because (1) presumably the same Canon f/1.2 lens is used across all the bodies tested, and they vary considerably in T stop at the sensor, and (2) there is a strong correlation with pixel size, with smaller pixels showing greater losses at wider apertures – precisely as we'd expect if angle-of-incidence-dependent shading is an issue.
Firstly, I don't think we're contradicting each other that much. I said "a fair share" can be attributed to the lens; in his f/1.2 test, there is a solid baseline of -0.4 EV, and that's what I'm talking about. Of course I don't deny the variation in the rest.
Regarding your points:
(1) You're right. However, the article forgets to tell us which lenses they are looking at. Canon sells f/1.2 lenses in 35, 50 and 85mm, and we can only speculate. Basically, the f/1.2 test doesn't provide us with a strong argument that what we see depends on the angle of incidence or happens only with wide angles.
(2) I don't see why angle-of-incidence-dependent shading should depend on pixel size. As far as I can see, it depends purely on the cosine of the angle of incidence. The relative amount of shading of a 1 cm² area is the same as that of a 1 µm² area. (EDIT: if you write out the equation, the area cancels between numerator and denominator.)
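The area-cancellation argument can be written out in one line, assuming simple cosine (obliquity) falloff of the flux collected by a flat photosite:

```latex
% Flux collected by a photosite of area A from irradiance E arriving
% at angle \theta, relative to the flux at normal incidence:
\frac{\Phi(\theta)}{\Phi(0)}
  = \frac{E \, A \cos\theta}{E \, A}
  = \cos\theta
% The area A cancels, so pure cosine shading is identical for a
% 1\,\mathrm{cm}^2 photosite and a 1\,\mu\mathrm{m}^2 photosite.
```

Any pixel-size dependence therefore has to come from something else, such as the geometry of the photosite wells or the microlenses, not from the cosine term itself.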
Again, if angle of incidence were the main issue, I'd expect better results from cameras with smaller sensors. The oblique angles are (EDIT: predominantly) encountered at the edge of the frame, so smaller sensors should be less affected, the results in the DxO tests being averaged across the frame. However, what we see is the opposite: smaller sensors are more affected, so angle of incidence can't be the main issue.
I presume what plays a large role is the way sensor readout is optimized. In cheap consumer cameras, the sensor readout electronics are less sophisticated and less effort is invested into postprocessing. That's about it.
semilog
curmudgeonly optimist
In practice, an FX sensor (not a smaller APS-C one) exhibits about 1/3 less DOF than a piece of 35mm film.
Interesting. The only obvious (to me) explanation is that the light sensitive surface on a digital sensor is thinner than in a photosensitive silver emulsion.
But perhaps there's another explanation? I'm certainly open to ideas, here.
jrv
Member
As mentioned above, the article says that DxO are doing these tests in a serious way.
DxO is using RAW files, right? How much do RAW files really say about the sensor?
RAW files are heavily processed by the camera at high ISO settings: is it really safe to draw conclusions about sensor performance from a RAW file even at low ISO settings?
I always think of a RAW file as being "normalized" to whatever the camera maker thinks a RAW should be like. A camera RAW isn't the unprocessed output of the sensor A/D.
j j
Well-known
Did you read the citation in the first post?
"Bottom line: Due to the complexity of design and manufacture (let alone the high cost and weight) of large aperture lenses, one may actually end up with better results at virtually the same ISO and depth of field using lenses with more modest maximum apertures."
From:
http://www.luminous-landscape.com/essays/an_open_letter_to_the_major_camera_manufacturers.shtml
That's fairly simplified, but there are many other issues.
If you're truly interested, I suggest you do some reading.
p.
Oh dear. Yes, I read the article. It read like a load of hot air about nothing, and the quote you kindly included in place of an example of how this so-called issue is detrimental to actual photos shows that to be the case: the author does not say this is what happens, but what MAY happen.
antiquark
Derek Ross
It would be interesting to see the ray-tracing that goes on from entering the lens until being converted to charge by the detector. Somehow, I do not think they are going to provide that information.
Guess I could throw the 50/1.2 Canon onto the EP2 and then onto the M8, and crop the M8 photo to overlay with the EP2 image. That could possibly be more boring than using fourteen 50mm f/2 lenses on my M3, set up on a tripod looking at a tree branch, testing at f/2 and f/4. The 1955 J-8 was really good; it compared well with the Summicrons, Sonnars, and Nikkors. Come to think of it, I've added two Summars since that test and a coated wartime 5cm f/2 Sonnar T...
TTL? Ground the unused input pins; TTL is very robust. CMOS is less likely to be damaged by RF.
I never keep my cell phone (a four-band world phone) near my digital camera gear!
I had problems with damaged memory, traced to my assistant keeping my phone in the same bag as the DSLRs.
p.
TTL yesterday, 4000 series CMOS today. I don't worry about my cell phone damaging anything. It's in the car and turned off.
antiquark
Derek Ross
I don't know what a D40x is - is it an APS-C sensor? The FX sensor is a completely different issue.
Yes, it's APS-C.
It would be interesting to see what goes on with fast lenses on the Forscher Polaroid backs that use fiber-optic bundles as optical relays. The acceptance angle of the fiber is small compared with film. Use the same camera and lens, shoot negative film at the backplane, and then compare with the Polaroid image made through the fiber bundle. I do not have one.
semilog
curmudgeonly optimist
Thanks, Antiquark.
Here are Antiquark's COF images, as surface plots with intensity coded as height: f/1.4 on the left, f/1.8 on the right. What an interesting result: the COFs are asymmetric, and (apparently) more so at the wider aperture!
At f/1.4 we see very obvious horizontal bands of light falloff at the top and bottom, and a trace of this behavior is still visible at f/1.8.
This result is consistent with a photosite geometry in which light rays can be detected at shallower angles when coming from the left or right sides of the lens's exit pupil than when they are coming from the top or bottom of the exit pupil.
This behavior could be due to the design of the photosites, the microlenses, or both.
One question: how far from the center of the APS-C frame are we with these crops?