... And all the rest is interesting but can the image compare to the image of the M9? How do they compare side by side? Has anyone done any tests on this? I'd love to see some tests. Have they been done? Anyone??
Do you mean subjective image evaluation? If so, meaningful comparisons are complex and pointless because subjective image evaluation depends on a large number of variables and different viewers appreciate different characteristics. Everyone's subjective evaluation of perceived image quality is valid only for them. There are a large number of methods that attempt to objectively evaluate rendered image quality and none of them are simple. For example, there is an ISO psychophysical image quality measurement standard [1] that uses three different methods to estimate JNDs (just noticeable differences) for electronic still images.
Objective comparisons are useful. Subjective perceived image quality depends on objective image characteristics. For example, large differences in lens MTF50 estimates will be obvious to practically any viewer. Likewise, large differences in raw-data signal-to-noise ratios influence subjective image-quality evaluation. Objective measurements dependent on SNR are published on Bill Claff's website here. These data are objective because the parameter estimates are computed from statistical analyses of unrendered raw data. This eliminates differences in viewing conditions, demosaicking mathematical models, and image-rendering parameters (sharpness, noise filtering, color hue, luminance, and saturation, to name some). Claff's methods are transparent, and the author benefits from neither advertisements nor relationships with camera vendors.
It is important to note that these data are intended to compare camera noise characteristics. They do not speak to optical differences in the sensor cover glass, the micro-lens array, or the color-filter array. In principle those differences are mitigated by in-camera and/or third-party demosaicking algorithms. However, superior demosaicking becomes less important as raw-data SNR decreases: as SNR falls, the uncertainties in the rendered-image RGB pixel parameter estimates dominate the loss of perceived image quality.
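To make the SNR point concrete, here is a minimal sketch of how signal-to-noise ratio falls with exposure when photon shot noise and read noise combine. The full-well and read-noise figures are illustrative assumptions, not measured values for any camera discussed here; analyzing a real raw file would require a raw-decoding library.

```python
import numpy as np

# Assumed sensor parameters (illustrative only, not Leica measurements).
full_well_e = 60000.0   # full-well capacity, electrons
read_noise_e = 3.0      # read noise, electrons RMS

def snr_db(mean_signal_e):
    """SNR for a uniform patch: shot noise and read noise add in quadrature."""
    noise = np.sqrt(mean_signal_e + read_noise_e**2)
    return 20.0 * np.log10(mean_signal_e / noise)

# SNR at full exposure, -3.3 EV, and -6.6 EV (roughly):
for frac in (1.0, 0.1, 0.01):
    s = frac * full_well_e
    print(f"signal {s:8.0f} e-  SNR {snr_db(s):5.1f} dB")
```

The drop-off at low signal is why demosaicking quality matters less in deep shadows: the raw estimates themselves are already uncertain.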

Photographic dynamic range (PDR) is based on the definition of engineering dynamic range. PDR attempts to represent “the dynamic range you would expect in an 8x10" print viewed at a distance of about arm's length.” Typically PDR is lower than engineering dynamic range by a constant of about 2 EV. A PDR vs. ISO plot suggests how raw-file dynamic range can affect subjective impressions of rendered image quality. As PDR increases, the perceived quality of shadow-region rendering improves (i.e., shadow-region signal-to-noise ratio increases).
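As a rough illustration of the relationship described above, here is a sketch of engineering dynamic range in stops and the ~2 EV rule-of-thumb offset to PDR. The full-well and read-noise numbers are assumptions for illustration, not figures from Claff's measurements.

```python
import math

# Illustrative assumptions, not measured values for any specific camera.
full_well_e = 50000.0    # full-well capacity, electrons
read_noise_e = 2.5       # input-referred read noise, electrons

# Engineering DR in stops: range from read-noise floor to saturation.
edr_stops = math.log2(full_well_e / read_noise_e)

# PDR per the ~2 EV constant offset mentioned in the text.
pdr_stops = edr_stops - 2.0

print(f"EDR ~ {edr_stops:.2f} EV, PDR ~ {pdr_stops:.2f} EV")
```

The offset exists because PDR uses a stricter (print-viewing) SNR criterion for the noise floor than the SNR = 1 criterion of engineering dynamic range.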

Input-referred read noise is the output noise level divided by the electronic gain. It estimates the sensor photosites' time-dependent noise levels while the shutter is open. In general, differences in read-noise levels become more important as sensor exposure decreases. In terms of subjective impressions of rendered image quality, input-referred read-noise differences will be obvious only in shadow regions when the camera ISO setting is low. As the ISO setting increases (and sensor exposure decreases), input-referred read noise becomes obvious in all image luminance regions.
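The division by gain can be sketched in a few lines. The gain values below are hypothetical, chosen only to show how the same output-referred noise maps to fewer input-referred electrons as analog gain rises with ISO.

```python
# Hedged sketch: output-referred read noise (in ADU, as seen in the raw
# file) divided by conversion gain (ADU per electron) gives input-referred
# read noise in electrons. Gains here are illustrative assumptions.
def input_referred_read_noise(read_noise_adu, gain_adu_per_e):
    """Input-referred read noise (e-) = output-referred noise / gain."""
    return read_noise_adu / gain_adu_per_e

# Higher ISO -> higher analog gain -> lower input-referred read noise,
# for the same output-referred noise level.
for iso, gain in [(100, 0.8), (800, 6.4), (6400, 51.2)]:
    e = input_referred_read_noise(4.0, gain)
    print(f"ISO {iso:5d}: {e:.3f} e- input-referred")
```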

These measurements are used to compare time-independent (fixed-pattern) noise levels. The color differences between panels are not important; the colors are intended to reveal patterns. The black-frame and illuminated-frame measurements are made with no light falling on the sensor or with light falling on the sensor, respectively. Both Leicas perform well.
For reference, here are the M9 and Canon EOS 1D data. In this example the EOS 1D banding patterns are obvious.
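For readers curious how banding like this is detected statistically, here is a minimal sketch: average many black frames, then compare the variation of row means against a camera with purely random read noise. The frames are simulated; a real test would use actual black frames, and none of the numbers below correspond to the cameras above.

```python
import numpy as np

rng = np.random.default_rng(1)
h, w, n_frames = 200, 300, 16
read_noise = 2.0  # simulated read noise, arbitrary units

# Camera A: pure random read noise. Camera B: same noise plus a fixed
# row-banding pattern (one offset per row, repeated across the frame).
banding = np.repeat(rng.normal(0.0, 1.0, size=(h, 1)), w, axis=1)

def mean_black_frame(fixed_pattern=0.0):
    """Average n_frames simulated black frames; random noise averages down,
    fixed-pattern noise does not."""
    frames = rng.normal(0.0, read_noise, size=(n_frames, h, w)) + fixed_pattern
    return frames.mean(axis=0)

frame_a = mean_black_frame()
frame_b = mean_black_frame(banding)

# The standard deviation of row means flags horizontal banding.
row_sigma_a = frame_a.mean(axis=1).std()
row_sigma_b = frame_b.mean(axis=1).std()
print(f"row-mean sigma: A={row_sigma_a:.3f}  B={row_sigma_b:.3f}")
```

Averaging suppresses the time-dependent noise by roughly the square root of the frame count, so whatever survives in the row or column means is the fixed pattern the colored panels make visible.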
--------
[1] Brian W. Keelan (Eastman Kodak Company) and Hitoshi Urabe (Fuji Photo Film Company), "ISO 20462, a psychophysical image quality measurement standard", 2004.
ABSTRACT
"ISO 20462, a three-part standard entitled 'Psychophysical experimental methods to estimate image quality,' is being developed by WG18 (Electronic Still Picture Imaging) of TC42 (Photography). As of late 2003, all three parts were in the Draft International Standard (DIS) ballot stage, with publication likely during 2004. This standard describes two novel perceptual methods, the triplet comparison technique and the quality ruler, that yield results calibrated in just noticeable differences (JNDs). Part 1, “Overview of psychophysical elements,” discusses specifications regarding observers, test stimuli, instructions, viewing conditions, data analysis, and reporting of results. Part 2, “Triplet comparison method,” describes a technique involving simultaneous five-point scaling of sets of three stimuli at a time, arranged so that all possible pairs of stimuli are compared exactly once. Part 3, “Quality ruler method,” describes a real-time technique optimized for obtaining assessments over a wider range of image quality. A single ruler is a series of ordered reference stimuli depicting a common scene but differing in a single perceptual attribute. Methods for generating quality ruler stimuli of known JND separation through modulation transfer function (MTF) variation are provided. Part 3 also defines a unique absolute Standard Quality Scale (SQS) of quality with one unit equal to one JND. Standard Reference Stimuli (SRS) prints calibrated against this new scale will be made available through the International Imaging Industry Association."