noimmunity
scratch my niche
http://www.dxomark.com/index.php/Cameras/Compare-Camera-Sensors/Compare-cameras-side-by-side/%28appareil1%29/843|0/%28brand%29/Leica/%28appareil2%29/640|0/%28brand2%29/Leica
Overall score is the same, but the individual scores are different.
Any reason why these scores should be different? I thought they had the same sensor?
v_roma
Well-known
I know nothing about the hardware of these two cameras or how DxO does their testing, but is it possible that the image processor is different in the two cameras?
albertospa
Established
The difference is so small that it falls within normal manufacturing tolerances.
noimmunity
scratch my niche
The difference is so small that it falls within normal manufacturing tolerances.
Manufacturing tolerances. Hmmm.
Is it really the case that DxO arrives at its measurements from a single sample rather than a data pool averaged from multiple samples? In other words, I would expect their test methods to account for variation due to manufacturing tolerances by enlarging the test group to begin with.
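For what it's worth, here's a quick Python sketch of the statistics behind that expectation (all the numbers are invented; this says nothing about DxO's actual protocol): averaging a score over a pool of N bodies shrinks the unit-to-unit spread by a factor of sqrt(N).

```python
# Illustrative sketch only: if unit-to-unit manufacturing tolerance adds
# noise to a "true" sensor score, averaging over a pool of bodies shrinks
# that noise by 1/sqrt(N). TRUE_SCORE and UNIT_SIGMA are made-up numbers.
import numpy as np

rng = np.random.default_rng(42)

TRUE_SCORE = 69.0   # hypothetical "true" sensor score (invented)
UNIT_SIGMA = 0.5    # hypothetical unit-to-unit spread (invented)

for n_bodies in (1, 4, 10):
    # 100,000 simulated test runs, each averaging the scores of n_bodies cameras
    runs = rng.normal(TRUE_SCORE, UNIT_SIGMA, size=(100_000, n_bodies))
    pooled = runs.mean(axis=1)
    print(f"{n_bodies:2d} bodies: spread of reported score = {pooled.std():.3f} "
          f"(theory: {UNIT_SIGMA / np.sqrt(n_bodies):.3f})")
```

With a single body, whatever spread the manufacturing tolerances produce goes straight into the reported score.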
Roger Hicks
Veteran
Manufacturing tolerances. Hmmm.
Is it really the case that DxO arrives at its measurements from a single sample rather than a data pool averaged from multiple samples? In other words, I would expect their test methods to account for variation due to manufacturing tolerances by enlarging the test group to begin with.
Hope, yes. Expect, no. Buying ten cameras to test...?
Cheers,
R.
noimmunity
scratch my niche
Hope, yes. Expect, no.
I enjoyed a good chuckle over that one.
If it is indeed true that DxO's testing does not include a sampling protocol, and is based on merely a single random sample, then why do people continually qualify it as objective and scientifically valid?
semilog
curmudgeonly optimist
With digital cameras, I expect that the sample variation is less than the measurement noise.
Carterofmars
Well-known
Smoke and mirrors.
noimmunity
scratch my niche
With digital cameras, I expect that the sample variation is less than the measurement noise.
I'm glad that you joined this thread.
Assuming the sensor in the two cameras really is the same*, there is apparently sample variation between the M9 and the M-E. So if the sensor is the same, then it is entirely feasible, given the test results, that a different combination of the values (pairing the best one from the M-E with the best one from the M9, or vice versa) could have resulted in a change in the final score (albeit only by 1 point, I suppose).
*It was during production of the M9 that Kodak sold off its sensor division to TrueSense.
semilog
curmudgeonly optimist
Sure. In any genuinely quantitative measurement we look for both real variation in a population of objects being measured, and noise inherent to the measurement itself. The sorts of tests that DxO does are non-trivial to do well.
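To make that concrete, here's a rough sketch of how repeated measurements let you separate the two sources of scatter (the sigmas are invented for illustration; this is generic statistics, not DxO's method):

```python
# Rough sketch: with repeated measurements per camera you can separate
# unit-to-unit variation from measurement noise. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)

N_CAMERAS, N_REPEATS = 8, 20
UNIT_SIGMA, MEAS_SIGMA = 0.1, 0.4   # invented: measurement noise dominates

# Each camera has a "true" score; each reading adds measurement noise.
true_scores = 69.0 + rng.normal(0.0, UNIT_SIGMA, N_CAMERAS)
readings = true_scores[:, None] + rng.normal(0.0, MEAS_SIGMA, (N_CAMERAS, N_REPEATS))

# Within-camera scatter estimates the measurement noise; the scatter of the
# per-camera means, after subtracting the noise contribution, estimates the
# real unit-to-unit variation (a standard components-of-variance estimate).
meas_var = readings.var(axis=1, ddof=1).mean()
between_var = readings.mean(axis=1).var(ddof=1) - meas_var / N_REPEATS

print(f"estimated measurement sigma:  {np.sqrt(meas_var):.2f}")
print(f"estimated unit-to-unit sigma: {np.sqrt(max(between_var, 0.0)):.2f}")
```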
I'd put it this way. The DxO-reported differences between the M9 and M220 are minuscule and operationally insignificant. That implies three things: (1) The sensors show little variation; (2) DxO is pretty good at replicating its measurement procedures for different cameras; and (3) DxO is being honest about reporting the numbers that they do get.
GaryLH
Veteran
Could the difference in score be due to purely pixel remapping of some dead pixels?
Gary
semilog
curmudgeonly optimist
Possible but not likely, I think.
But there is pixel-to-pixel variation, and sensors are measured at the foundry and binned, with the most homogeneous sensors commanding the highest price. Sensors for scientific use are Grade 0, with no pixels hot or cold beyond a predetermined specification. Expensive.
One thing that slowed adoption of CMOS sensors by the scientific community was greater pixel-to-pixel variation than with CCD sensors. Those problems are gradually being ironed out.
The better DxO ratings for CMOS sensors vs. CCD suggest that pixel-to-pixel variation is not a parameter that the DxO ratings give a lot of weight to.
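As a toy illustration of that binning idea, here's a sketch that grades a simulated sensor by counting out-of-spec pixels in a flat-field exposure (the tolerance and grade limits are made up; real foundry specs differ):

```python
# Toy version of foundry binning: count pixels outside a tolerance band
# around the mean response and assign a grade. All thresholds are invented.
import numpy as np

rng = np.random.default_rng(7)

def grade_sensor(flat_field, tolerance=0.05, limits=(0, 20, 200)):
    """Grade a sensor from a flat-field (uniformly lit) exposure.

    tolerance: fractional deviation from the mean that counts as hot/cold.
    limits:    max defect counts allowed for grades 0, 1, 2 (invented).
    """
    mean = flat_field.mean()
    n_defects = int((np.abs(flat_field - mean) > tolerance * mean).sum())
    for grade, limit in enumerate(limits):
        if n_defects <= limit:
            return grade, n_defects
    return len(limits), n_defects   # worse than the last listed grade

# Simulate a flat field with a dozen hot pixels sprinkled in.
sensor = rng.normal(1.0, 0.01, (1000, 1500))
sensor.flat[rng.integers(0, sensor.size, 12)] = 1.5

print(grade_sensor(sensor))   # the hot pixels push it out of Grade 0
```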
GaryLH
Veteran
Interesting. Thanks for the clarification.
Gary
willie_901
Veteran
The tests are technically competent because they publish exactly what they do and how the scores are computed.
Sample variation is an issue, as is the tests' reproducibility.
It is not obvious that the signal path for the two cameras is identical. There could be slight changes in the IR filters or in the electronic components downstream from the sensor. It is possible the batch of sensors for the M9 has slightly different characteristics than the batch for the ME-220. Was the ME-220 manufactured after Kodak sold the sensor business? If so, there could be slight changes made by the new ownership.
semilog
curmudgeonly optimist
The tests are technically competent because they publish exactly what they do and how the scores are computed.
Well, that's in principle. In practice you still have to make the measurements, and there are plenty of ways to screw up there. The relative consistency of the numbers between the M9 and 220 suggests that they are managing to avoid most such pitfalls.
Yogi Berra (from memory; forgive me if I err):
"In theory, theory and practice are the same. In practice, they're not."
For example: Measurement temperature or humidity (or even differences in heat dissipation through two different camera supports) could potentially account for the minuscule differences in the scores between the two cameras.