M-E 220 vs. M9, different DxO ratings: Huh?

noimmunity

http://www.dxomark.com/index.php/Cameras/Compare-Camera-Sensors/Compare-cameras-side-by-side/%28appareil1%29/843|0/%28brand%29/Leica/%28appareil2%29/640|0/%28brand2%29/Leica

Overall score is the same, but the individual scores are different.
Any reason why these scores should be different? I thought they had the same sensor?
 
I know nothing about hardware of these two cameras or how dxo does their testing, but is it possible that the image processor is different in the two cameras?
 
The difference is so small that it is within normal manufacturing tolerances.

Manufacturing tolerances. Hmmm.

Is it really the case that DxO arrives at its measurements from a single sample rather than from a data pool averaged over multiple samples? In other words, I would expect their test methods to already account for manufacturing tolerances by expanding the size of the sample test group to begin with.
 
Hope, yes. Expect, no. Buying ten cameras to test...?

Cheers,

R.
 
With digital cameras, I expect that the sample variation is less than the measurement noise.
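A quick sketch of why that matters: when measurement noise dominates unit-to-unit variation, testing one body tells you almost as much as averaging several. A minimal simulation (every number here is invented for illustration, not taken from DxO):

```python
import random

random.seed(42)

TRUE_SCORE = 68.0   # hypothetical "true" sensor score
SAMPLE_SD = 0.1     # unit-to-unit manufacturing variation (invented)
MEASURE_SD = 0.5    # noise of a single DxO-style measurement (invented)

def measure_one_camera():
    # Each camera body deviates slightly from the design target,
    # and each measurement adds its own noise on top of that.
    unit = random.gauss(TRUE_SCORE, SAMPLE_SD)
    return random.gauss(unit, MEASURE_SD)

one_body = measure_one_camera()
ten_bodies = sum(measure_one_camera() for _ in range(10)) / 10

print(f"single body:  {one_body:.2f}")
print(f"mean of ten:  {ten_bodies:.2f}")
# With MEASURE_SD much larger than SAMPLE_SD, most of the scatter in
# either figure comes from the measurement itself, not the cameras.
```

Swap the two standard deviations and averaging suddenly pays off; that is the regime where a single-sample test would mislead.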

I'm glad that you joined this thread.

Assuming the sensor in the two cameras really is the same*, there apparently is some sample variation between the M9 and the M-E. So if the sensor is the same, then it is entirely feasible, given the test results, that a different combination of the values (pairing the best one from the M-E with the best one from the M9, or vice versa) could have changed the final score (albeit only by a point, I suppose).

*It was during production of the M9 that Kodak sold off its sensor division to TrueSense.
 
Sure. In any genuinely quantitative measurement we look for both real variation in a population of objects being measured, and noise inherent to the measurement itself. The sorts of tests that DxO does are non-trivial to do well.

I'd put it this way. The DxO-reported differences between the M9 and M220 are minuscule and operationally insignificant. That implies three things: (1) the sensors show little variation; (2) DxO is pretty good at replicating its measurement procedures across different cameras; and (3) DxO is being honest about reporting the numbers that they do get.
 
Possible but not likely, I think.

But there is pixel-to-pixel variation, and sensors are measured at the foundry and binned, with the most homogeneous sensors commanding the highest price. Sensors for scientific use are Grade 0, with no pixels hot or cold beyond a predetermined specification. Expensive.
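The binning step can be sketched as a simple screen of a flat-field frame against a tolerance band. The grade labels and thresholds below are invented for illustration; real foundry specifications are far more detailed:

```python
# Toy sensor grading: count pixels whose flat-field response falls
# outside a tolerance band around the mean. Thresholds are invented.
def grade_sensor(flat_field, tolerance=0.05):
    mean = sum(flat_field) / len(flat_field)
    lo, hi = mean * (1 - tolerance), mean * (1 + tolerance)
    defects = sum(1 for p in flat_field if not (lo <= p <= hi))
    if defects == 0:
        return "Grade 0"   # "scientific": no out-of-spec pixels at all
    if defects <= len(flat_field) // 1000:
        return "Grade 1"   # a few hot/cold pixels tolerated
    return "Grade 2"

perfect = [100.0] * 1000
one_hot = [100.0] * 999 + [150.0]   # one hot pixel
print(grade_sensor(perfect))   # Grade 0
print(grade_sensor(one_hot))   # Grade 1
```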

One thing that slowed adoption of CMOS sensors by the scientific community was greater pixel-to-pixel variation than with CCD sensors. Those problems are gradually being ironed out.

The better DxO ratings for CMOS sensors vs. CCD suggest that pixel-to-pixel variation is not a parameter that the DxO ratings give a lot of weight to.
 
The tests are technically competent because they publish exactly what they do and how the scores are computed.

Sample variation is an issue, as is the tests' reproducibility.

It is not obvious that the signal path for the two cameras is identical. There could be slight changes in the IR filters or in the electronic components downstream from the sensor. It is possible the batch of sensors for the M9 has slightly different characteristics than the batch for the M-E 220. Was the M-E 220 manufactured after Kodak sold the sensor business? If so, there could be slight changes made by the new ownership.
 
Well, that's in principle. In practice you still have to make the measurements, and there are plenty of ways to screw up there. The relative consistency of the numbers between the M9 and 220 suggests that they are managing to avoid most such pitfalls.

Yogi Berra (from memory; forgive me if I err):
"In theory, theory and practice are the same. In practice, they're not."

For example: measurement temperature or humidity (or even differences in heat dissipation through two different camera supports) could potentially account for the minuscule differences in the scores between the two cameras.
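For scale: dark current in silicon sensors roughly doubles every 6 °C or so (a common rule of thumb; the exact figure varies by design), so even a modest temperature difference between two test sessions shifts the noise floor. A back-of-the-envelope sketch, with invented numbers:

```python
# Rule of thumb: dark current doubles roughly every 6 degrees C.
# All values below are illustrative, not measured from either Leica.
DOUBLING_DEG_C = 6.0

def dark_current(base_e_per_s, temp_c, ref_temp_c=20.0):
    """Scale a reference dark current to another temperature."""
    return base_e_per_s * 2 ** ((temp_c - ref_temp_c) / DOUBLING_DEG_C)

base = 0.5  # electrons/pixel/second at 20 C (invented)
for t in (18.0, 20.0, 23.0):
    print(f"{t:.0f} C: {dark_current(base, t):.3f} e-/px/s")
# A bench just 3 C warmer raises dark current by about 40%,
# which could plausibly nudge a low-light score by a fraction of a point.
```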
 