Dante_Stella
Rex canum cattorumque
Are popular discussions of the M Monochrom technically misinformed? I think that it is fairly uncontroversial that losing the Bayer pattern filter will do several things:
1. Increase light sensitivity by 2-4x by removing the filter array, hence ISO 10,000 versus 2,500 (according to Kodak technical materials related to its RGBW patent, the efficiency of a typical Bayer sensor is about 1/3). A quick stop-count check appears after this list.
2. Eliminate the blue-channel-noise-in-incandescent-light problem by eliminating the blue filter.
3. Allow the use of standard contrast filters originally designed for black and white film (inasmuch as the M-M spectral response is similar to known films).
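For what it's worth, the arithmetic on point 1 hangs together. A minimal sketch (the 1/3 transmission figure is the Kodak number cited above; the 2,500 and 10,000 ceilings are the published maximum ISO settings of the M9 and M-M):

```python
import math

# Kodak's cited figure: a typical Bayer color filter array passes roughly 1/3 of the light.
cfa_transmission = 1 / 3
light_gain = 1 / cfa_transmission
print(f"light gained by dropping the CFA: {light_gain:.1f}x "
      f"({math.log2(light_gain):.1f} stops)")      # 3.0x, about 1.6 stops

# Published ISO ceilings: 2,500 on the M9 versus 10,000 on the M Monochrom.
iso_gain = 10000 / 2500
print(f"claimed ISO headroom gain: {iso_gain:.0f}x "
      f"({math.log2(iso_gain):.0f} stops)")        # 4x, 2 stops: within the 2-4x range
```

Three times the light is about 1.6 stops, so a two-stop bump in the ISO ceiling sits at the optimistic end of the 2-4x range but is not out of line with it.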
I am frankly suspicious of any claim that a monochrome sensor has twice the resolution of a Bayer one (or anything near that big a difference). I don't think anyone is going to try to argue against how resolution is measured: so many cycles at so much contrast. But my suspicions are based on the following:
1. I have not seen any organized literature that chalks up differences in resolving power to different demosaicing algorithms. The discussion is primarily of aliasing and artifacts, which are problems that occur when adjacent colors in an image interact badly with color reconstruction. Maybe I have not read enough patents and white papers. Does anything actually discuss resolving power?
2. If Bayer decoding led to any significant loss in resolving power on the sensor, we would have seen a test by now comparing aerial resolution of lenses, the theoretical resolution of a sensor, and the resulting output.
3. But Dr. Evil, that has come to pass! According to Erwin Puts' tests, even including a lens (always a drag on an optical system), the M9 resolves 60 lp/mm, about 82% of its Nyquist limit of 73 lp/mm. Because the M-M has the same sensor at the same pixel density, its limit is the same. Even if the M-M reached its Nyquist limit, it would still be only about 20% better than the color model (a back-of-the-envelope check appears after this list).
3a. Even absent the above, it is well documented (and formulaic) that doubling the resolution of one component of an optical system does not double the resolution of the system as a whole (also sketched after this list).
4. In the interpolation fury, there seems to be a considerable amount of confusion between an on-chip pixel and the resulting effective photosite. A simple way to understand a photosite is as a virtual pixel that overlaps a four-square R-G-G-B matrix of photodiodes. Except at the edges, the diodes themselves belong to more than one photosite (hence the two pixel counts, actual and effective, you see in specifications). Real interpolation neighborhoods, depending on the algorithm used, can draw on many more than four photodiodes.
5. The real math is a lot more complex, and getting ticked off about "interpolation" because you picture simple averaging is simplistic. It is not averaging, and the mathematical model can be very accurate; an interpolated result is not necessarily a bad one. At a very basic level, if you have 5, then an unknown value, then 10, using the mean for the middle value is exactly right if the three are points on a straight line. Even if they are not, formulae can still approximate the correct values very closely (witness JPEG compression, which reconstructs smooth color gradients from a handful of coefficients). If your method of interpolation is appropriate to the task, the errors are quite small, and cameras use multiple algorithms simultaneously. The performance of the M9's color sensor and processing appears to be 82% as good as a theoretically perfect sensor of its size coupled with an APO lens of unreal resolution, perfectly focused. Not so bad. A toy demosaicing example follows the list.
6. Resolving power is based on the ability to detect changes from photosite to photosite. On a mono sensor, the actual photosites and the virtual ones are perfectly aligned. On a color sensor they are not, and it seems at least possible that a color sensor will catch transitions a mono sensor misses, because two subjects of the same brightness (or responding similarly on the camera's curve) but of different colors register as the same tone in black and white (see the last sketch after the list).
7. Which brings us to contrast filters. One of the reasons the sample pictures posted on various blogs look so bad is poor [color] contrast control (poor control of scene lighting, overall exposure, and composition are also at play). People who grew up in the old days know that panchromatic film can lead to very blah results (particularly skin tones in artificial light). You basically get one shot shooting on b/w film or a b/w sensor: unless you have an intuitive idea of what you are trying to do with color accentuation and suppression, you're going to get stuck. And unless you are using APO lenses, some of those contrast-control tools, especially orange, deep orange, and red filters, will put a serious hurt on sharpness on a thin, mono-sensitive surface. I would speculate that a color array, channel-mixed afterward, gives you better sharpness, since a non-APO lens can get at most two of the three colors (red, green, or blue) focused at the same place at the same time. Simulating filters that way may not cut haze and may add noise, though. The last sketch after the list includes a crude channel-mix example.
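A back-of-the-envelope check of the numbers in point 3. The roughly 6.8 micron pixel pitch of the M9/M-M sensor is my assumption, not something taken from Puts; the 60 lp/mm figure is his measurement:

```python
# Assumed: ~6.8 micron pitch for the 18 MP sensor shared by the M9 and M-M.
pixel_pitch_mm = 0.0068
nyquist = 1 / (2 * pixel_pitch_mm)   # one line pair needs at least two rows of photosites
measured = 60.0                      # M9 system resolution per Puts, lp/mm

print(f"Nyquist limit : {nyquist:.1f} lp/mm")   # ~73.5, matching the ~73 lp/mm above
print(f"M9 measured   : {measured:.0f} lp/mm = {measured / nyquist:.0%} of Nyquist")
print(f"Best-case gain: {nyquist / measured - 1:.0%} even if the M-M hit the limit")
```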
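And on point 3a, the usual rule of thumb is that component resolutions combine roughly reciprocally (1/R_system = 1/R_lens + 1/R_sensor). It is a convention, not gospel, but it shows why doubling one component never doubles the system. The 100 lp/mm lens here is hypothetical:

```python
def system_resolution(r_lens, r_sensor):
    # Rough reciprocal-addition rule: 1/R_sys = 1/R_lens + 1/R_sensor
    return 1 / (1 / r_lens + 1 / r_sensor)

print(round(system_resolution(100, 73)))      # ~42 lp/mm with the Bayer-limited sensor
print(round(system_resolution(100, 2 * 73)))  # ~59 lp/mm: doubling the sensor buys ~40%, not 2x
```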
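On points 4 and 5, a toy demosaicing sketch. It fills in only the green plane and uses plain bilinear interpolation, which is far cruder than anything a real camera does, but it shows that interpolation means estimating a missing sample from its recorded neighbors, not smearing the image:

```python
import numpy as np

# A tiny 4x4 luminance ramp as the "scene", sampled at the green sites of an
# RGGB Bayer layout, then filled back in by plain bilinear interpolation.
scene = np.linspace(0.0, 15.0, 16).reshape(4, 4)

green_mask = np.zeros((4, 4), dtype=bool)
green_mask[0::2, 1::2] = True   # G sites in the R-G rows
green_mask[1::2, 0::2] = True   # G sites in the G-B rows

sampled = np.where(green_mask, scene, np.nan)   # what the sensor actually recorded

filled = sampled.copy()
for r in range(4):
    for c in range(4):
        if np.isnan(filled[r, c]):
            # Each missing green value becomes the mean of its recorded neighbors.
            neighbors = [sampled[r + dr, c + dc]
                         for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                         if 0 <= r + dr < 4 and 0 <= c + dc < 4]
            filled[r, c] = np.nanmean(neighbors)

# On this smooth ramp the interior estimates are exact; only the border pixels,
# which have fewer recorded neighbors, miss.
print(np.round(filled - scene, 2))
```

Real demosaicers add edge-aware and cross-channel refinements precisely to handle the places where a naive version like this one would fail.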
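Finally, a crude illustration of points 6 and 7. The Rec. 601 luma weights stand in for whatever response curve a camera or raw converter actually applies, and the swatch values are invented for the example:

```python
# Two quite different colors with nearly identical luma land on the same gray tone,
# which is the transition point 6 says a mono sensor can miss.
def luma(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

red_swatch = (200, 0, 0)      # made-up values chosen to have matching luma
green_swatch = (0, 102, 0)
print(round(luma(*red_swatch)), round(luma(*green_swatch)))   # 60 and 60: same gray

# Point 7: simulating a red contrast filter in post is just a channel mix weighted
# toward red, which is only possible if color was captured in the first place.
def red_filter_gray(r, g, b, weights=(0.8, 0.2, 0.0)):
    wr, wg, wb = weights
    return wr * r + wg * g + wb * b

print(round(red_filter_gray(*red_swatch)), round(red_filter_gray(*green_swatch)))  # 160 vs 20
```

Two very different colors land on the same gray; weight the mix toward red, as a red contrast filter would, and they separate cleanly, but only because the color information was there to mix.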
None of this is to say the M-M is a bad camera; the sensitivity and low noise make it compelling as a concept. But the idea that it's somehow massively better in resolving power than the M9 just doesn't seem to wash.
Granted, I am not an optical engineer - and my last training in combinatorics and matrix theory was contemporaneous with the QuickTake 100. But I have this nagging feeling that the people spouting conventional wisdom about the M-M are confusing Foveon propaganda about Bayer sensors with misinformation from videography discussion forums.
Thoughts?
Dante