Would you buy a B&W-only 16-bit M9?

  • Yes, absolutely. (71 votes, 14.3%)
  • Yes, but only if it performs like B&W film. (58 votes, 11.7%)
  • Yes, but only if it costs 15-20% less than the standard M9. (60 votes, 12.1%)
  • No. (306 votes, 61.8%)

  Total voters: 495
Brian,

So, now we are down to the nuts and bolts of this idea. It would be nice to have an idea of how much the sensor would cost. Would you be interested in making an inquiry with Kodak about production? It would be good to know where a price break comes into play at production numbers of 50/100/250/500 units. Would you expect Leica to receive a better price given their relationship with Kodak? How is the sensor delivered, and what is involved in replacing the existing one? It would be important to have an idea of the cost and labour before approaching Leica, don't you agree? Also, if I were to try to raise capital I would need some specifics.

Roger Hicks,

Who would we approach at Leica about a limited-run project? I don't know if this will get past the drawing board, but I believe Brian has made a strong argument for what could be gained. We are talking about a very small market, and a difficult time to raise capital for such a project. But stranger things have happened. I think the only way this is going to fly is if enough people are willing to commit to orders monetarily.

Kindest regards to both of you,

Dunno, but I know who to talk to to find out. Because of the delicacy of the inquiry, I'd rather leave it until we have some hard numbers from the Great Yellow Father; I'd rather not take up Leica's time before we have a high degree of commitment.

Cheers,

R.
 
?? "DOF" is an imprecise concept at best, and without knowing other variables you cannot begin to claim that some particular rangefinder "is always within the native DOF of the sensor".
It is not imprecise. The native DOF of a sensor has the pixel size of that sensor as its COC, and is thus an exactly defined value. For film it is somewhat more woolly, I'll grant you that, but it is certainly larger than a sensor's, unless we get into very slow film, say below ISO 25.
 
If I understood correctly, a monochrome version would mean a quantum leap in resolution and a gain in low-light performance compared to a color sensor with the same number of photosensitive elements.

As the hardware platform already exists, with a well-proven body design, and Kodak is already in the monochrome sensor business, I guess it should not be too costly for Kodak and Leitz to assemble a small test batch.
 
I would not describe it as a "quantum leap", as the interpolation used for Bayer pattern filters "usually" makes good use of the information in adjacent pixels. It's the worst case that gets you. If you look at the sensor, 1/4th of the elements are blue, 1/2 are green, and 1/4th are red. So if you are photographing red or blue line pairs, you get 1/4th the resolution. You pick up one f-stop by getting rid of the Bayer filter. It's like taking a color correction filter off of your film camera. Not a quantum leap, but it doubles the ISO rating. NOW: the quantum leap for infrared users is getting rid of the "damned" IR-absorbing glass. That picks up 8 or more stops of sensitivity.
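
To put rough numbers on that worst case, here is a minimal Python sketch; the photosite shares follow from the mosaic itself, while the base ISO is purely illustrative.

```python
# Back-of-the-envelope sketch (illustrative numbers, nothing official):
# photosite shares in a Bayer mosaic, and the ISO doubling implied by
# a one-stop gain from removing the color filter array.
photosite_share = {"red": 1 / 4, "green": 1 / 2, "blue": 1 / 4}
print(photosite_share)               # worst case: only 1/4 of sites see R or B

gain_stops = 1.0                     # roughly one f-stop without the CFA
base_iso = 160                       # illustrative starting point
print("monochrome base ISO ~", round(base_iso * 2 ** gain_stops))  # -> 320
```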

For scientific work, where you want to know how much energy is hitting a pixel, and in which spectral region, the interpolation scheme is not useful. It's also nice to have full sensitivity in infrared for scientific/technical work and to filter when you need to. So, my interest is in a hand-held 18-MPixel camera with visible and near-infrared response. The interest was high enough 17 years ago to call Kodak and have an infrared version of the DCS200 made. Storing 16-bit sampled data is also very useful.

So, it does not hurt to ask. Kodak makes other CCDs that are monochrome and visible+infrared. A number of companies that cater to the scientific market use them. They make an M9 look cheap.
 
Respectfully, it is imprecise. You can talk about a "standard" COC for certain formats and sensors. But your statement was a comparison of specific margins, so exact measurement is required to make any valid assertion. The calculated COC of the sensor does not equate to a set-in-stone DOF, because DOF (as you know) depends on other factors as well, including lens geometry, print size, and viewing distance.

I'm not saying you're wrong, but I'm saying that it's unlikely you truly have the data to make it a flat, across-the-board statement of fact. That's all.
You're wrong. You are talking about the DOF of a print; I am talking about the native DOF of a sensor, which is determined by the pixel size. Two totally different things. Lens geometry has nothing to do with either, by the way. I suggest you read the following page, which gives the mathematical foundation for DOF calculations:

http://en.wikipedia.org/wiki/Depth_of_field#Moderate-to-large_distances_2
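
To make the claim concrete, here is a minimal sketch using the moderate-distance approximation from that page, DOF ≈ 2·N·c·s²/f², with the 6.8-micron pixel pitch of the M8/M9 standing in as the circle of confusion; the lens and distance are illustrative.

```python
# Native DOF sketch: use one pixel pitch (6.8 um) as the circle of
# confusion in the moderate-distance approximation DOF ~ 2*N*c*s^2 / f^2.
# The lens and distance below are illustrative, not a measured claim.
def dof_moderate(f_mm: float, N: float, s_mm: float, c_mm: float) -> float:
    """Total depth of field (mm) for subject distances well short of the
    hyperfocal distance."""
    return 2.0 * N * c_mm * s_mm ** 2 / f_mm ** 2

pixel_pitch_mm = 0.0068  # 6.8 um pixel as CoC
print(dof_moderate(f_mm=50, N=1.0, s_mm=2000, c_mm=pixel_pitch_mm))
# -> ~21.8 mm total DOF for a 50 mm f/1.0 lens at 2 m with a one-pixel CoC
```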
 
Leica "as a brand", and however they've split into business units, maintains a presence in the scientific and technical market with Microscopes. Leica Microsystems still offers monochrome cameras for microscopes.

http://www.leica-microsystems.com/p...uorescence/details/product/leica-dfc345-fx-1/

Of course I'd love to see an M9 with a full-spectral-range sensor mounted on a Leica microscope with a filter wheel. I'll bet it would be cheaper than some of the microscope cameras being offered. They tend not to be cheap.
 
I would not describe it as a "quantum leap", as the interpolation used for Bayer pattern filters "usually" makes good use of the information in adjacent pixels. It's the worst case that gets you. If you look at the sensor, 1/4th of the elements are blue, 1/2 are green, and 1/4th are red. So if you are photographing red or blue line pairs, you get 1/4th the resolution. You pick up one f-stop by getting rid of the Bayer filter. It's like taking a color correction filter off of your film camera. Not a quantum leap, but it doubles the ISO rating. NOW: the quantum leap for infrared users is getting rid of the "damned" IR-absorbing glass. That picks up 8 or more stops of sensitivity.

For scientific work, where you want to know how much energy is hitting a pixel, and in which spectral region, the interpolation scheme is not useful. It's also nice to have full sensitivity in infrared for scientific/technical work and to filter when you need to. So, my interest is in a hand-held 18-MPixel camera with visible and near-infrared response. The interest was high enough 17 years ago to call Kodak and have an infrared version of the DCS200 made. Storing 16-bit sampled data is also very useful.

So, it does not hurt to ask. Kodak makes other CCDs that are monochrome and visible+infrared. A number of companies that cater to the scientific market use them. They make an M9 look cheap.
On the M9, Kodak has shifted the relationship between red and green to reduce sensor noise.
 
What is the new configuration? This is a break from Dr. Bayer's pattern used since the DCS200.


RGRGRGRGRG
GBGBGBGBGB
RGRGRGRGRG
GBGBGBGBGB
RGRGRGRGRG
GBGBGBGBGB
RGRGRGRGRG
GBGBGBGBGB

The blue spectral response used to be very noisy, but Kodak changed the sensor for extended blue response with the newer CCDs. I think they added tin, but cannot remember the exact change.
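
For reference, a small sketch that reproduces the textbook tiling shown above; whatever Kodak actually changed on the M9 is not public, so this is only the classic arrangement.

```python
import numpy as np

# Build the classic Bayer mosaic as an array of channel labels.
# This is the textbook RGGB tiling, not whatever Kodak altered on the M9.
def bayer_mask(rows: int, cols: int) -> np.ndarray:
    mask = np.empty((rows, cols), dtype="<U1")
    mask[0::2, 0::2] = "R"   # even rows: R G R G ...
    mask[0::2, 1::2] = "G"
    mask[1::2, 0::2] = "G"   # odd rows:  G B G B ...
    mask[1::2, 1::2] = "B"
    return mask

for row in bayer_mask(8, 10):
    print("".join(row))      # prints the RGRG... / GBGB... block above
```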
 
What is the new configuration? This is a break from Dr. Bayer's pattern used since the DCS200.


RGRGRGRGRG
GBGBGBGBGB
RGRGRGRGRG
GBGBGBGBGB
RGRGRGRGRG
GBGBGBGBGB
RGRGRGRGRG
GBGBGBGBGB

The blue spectral response used to be very noisy, but Kodak changed the sensor for extended blue response with the newer CCDs. I think they added tin, but cannot remember the exact change.

I simply don't know, Brian, I only know that they did just that, as it impacted on post-processing.
 
I commend your attempt to confuse the issue. You are still talking about something different than what I am talking about. Just once more: a sensor is a silicon thing inside a camera. A print is a paper thing on the wall.

1. A sensor pixel has a physical size.
2. That means there is a span within which the sensor cannot resolve any better.
3. That span is the native DOF of the sensor.
4. If you can put the focus within that native DOF, it is the best focus that camera can give; further accuracy of the focusing mechanism will not give a sharper image.

Technically - as we are not speaking about the skill of the photographer - the M8 and M9 fulfill condition #4 consistently, so it is not relevant to claim other M cameras are "better".

The only thing one might be able to claim is that a larger VF magnification is easier to use, but of course that means one has to compromise on the field of view of the viewfinder.
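
As a rough sketch of what that native tolerance amounts to at the sensor plane, assuming the usual depth-of-focus approximation (total tolerance ≈ 2·N·c) and the M8/M9's 6.8-micron pixel pitch as c:

```python
# Depth of focus at the sensor: t ~ 2 * N * c, with c taken as one
# pixel pitch (6.8 um on the M8/M9). Apertures below are illustrative.
pixel_pitch_um = 6.8

for N in (1.0, 2.0, 8.0):
    tolerance_um = 2 * N * pixel_pitch_um
    print(f"f/{N}: +/- {tolerance_um / 2:.1f} um at the sensor "
          f"({tolerance_um:.1f} um total)")
# f/1.0: +/- 6.8 um;  f/2.0: +/- 13.6 um;  f/8.0: +/- 54.4 um
```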
 
Well - I do not want to get into this depth-of-field thing. Pixel size and lp/mm resolution, I understand. Circles of confusion, also understood. Convolving an image with circles of confusion identical to the pixel size vs. convolving an image with much higher resolution than the pixel size: one of our PhDs understood the difference, and came up with a nifty way to improve the final resolution. That was in the 1980s.

But here in the 21st century, I find the M8 is good enough to test my 70+ year-old lens conversions.

 
For machine vision and other technical applications, having an image with higher resolution than the size of the sensor elements allows more "processing gain". Essentially, the object is under-resolved in terms of the sensor element, but its energy is likely to be captured in one pixel. The intensity of that pixel is higher than its neighbors', so it is possible to write software that can pull out an object covering less than 10% of the pixel. In FORTRAN, of course.
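
A toy version of that trick, in Python rather than FORTRAN and with invented numbers: an under-resolved object dumps its energy into a single photosite, so that pixel stands out against a robust estimate of the background.

```python
import numpy as np

# Toy sub-pixel detection: a point source concentrated in one photosite
# is found by thresholding against a robust background + noise estimate.
rng = np.random.default_rng(0)
frame = rng.normal(100.0, 2.0, size=(64, 64))   # flat background + noise
frame[30, 40] += 25.0                           # the sub-pixel object's energy

background = np.median(frame)
sigma = np.median(np.abs(frame - background)) * 1.4826  # robust noise (MAD)
hits = np.argwhere(frame > background + 5 * sigma)      # 5-sigma detections
print(hits)   # -> [[30 40]]
```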

And I seem to recall that the sensor geometry used for the improved resolution offset the columns in the array by 1/2 pixel. Those were fun days.

So anybody building a robot with M9s for eyes... But Nikki went with electromagnets for her 5th-grade science fair project instead.

I suddenly remember why I put off using digital cameras at home for so long.
 
I commend your attempt to confuse the issue. You are still talking about something different than what I am talking about. Just once more: a sensor is a silicon thing inside a camera. A print is a paper thing on the wall.

1. A sensor pixel has a physical size.
2. That means there is a span within which the sensor cannot resolve any better.
3. That span is the native DOF of the sensor.
4. If you can put the focus within that native DOF, it is the best focus that camera can give; further accuracy of the focusing mechanism will not give a sharper image.

Technically - as we are not speaking about the skill of the photographer - the M8 and M9 fulfill condition #4 consistently, so it is not relevant to claim other M cameras are "better".

The only thing one might be able to claim is that a larger VF magnification is easier to use, but of course that means one has to compromise on the field of view of the viewfinder.

Well stated. I would simply add one thing to this regarding other M cameras. Since, ultimately, the camera will be focused by a human being, this distinction is not without merit. Film versions have an advantage in that film has a larger DOF, because the layers of emulsion create a larger effective spot than the pixel. So, while the two systems, from a mechanical standpoint, function within the same parameter of being able to attain a focused image, the film version is more forgiving from a user standpoint.

I would estimate that the Noctilux wide open is a little more difficult to focus accurately in low light when coupled with an M9, since it is a person who is focusing it.
 
Okay - using astrophotography as an example: it is best to resolve the stars in the FOV to the tightest point source possible. If you resolve a star to a 6.8-micron blur circle, its energy can be spread across up to four pixels. If you resolve the star to a true point source, or close to it, its energy falls into one 6.8-micron sensing element of the CCD. This makes it easier to image the star.
 
Okay - using astrophotography as an example: it is best to resolve the stars in the FOV to the tightest point source possible. If you resolve a star to a 6.8-micron blur circle, its energy can be spread across up to four pixels. If you resolve the star to a true point source, or close to it, its energy falls into one 6.8-micron sensing element of the CCD. This makes it easier to image the star.

Absolutely. You have concentrated the energy, allowing the one sensing element of the CCD to achieve a better ratio of energy to dark current. You have also minimized issues of inertia inherent to the system. The DOF does not change, but the results certainly change.
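
A rough sketch of why the concentration helps, assuming a fixed per-photosite noise floor for dark current and readout; all numbers are invented.

```python
import math

# A star delivering S electrons either lands in one photosite or is
# smeared across four. With a fixed per-pixel noise floor, combining
# four pixels adds four pixels' worth of floor noise in quadrature.
signal_e = 400.0          # electrons from the star (invented)
noise_per_pixel_e = 10.0  # dark current + read noise per photosite, rms

def snr(signal: float, n_pixels: int) -> float:
    return signal / math.sqrt(signal + n_pixels * noise_per_pixel_e ** 2)

print(snr(signal_e, 1))   # point source in one pixel  -> ~17.9
print(snr(signal_e, 4))   # blurred across four pixels -> ~14.1
```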
 
Yes, that is because you will see the central peak of light at 6.8 microns, but if you resolve to a smaller spot you will take in the rings of the Airy disk, which would otherwise fall on the surrounding pixels.
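
For reference, a quick worked number for the diffraction side of this, using the standard first-dark-ring formula d ≈ 2.44·λ·N at green light; the apertures are illustrative.

```python
# Airy disk diameter (first dark ring): d = 2.44 * wavelength * f-number.
# At 550 nm it exceeds a 6.8 um pixel from roughly f/5 onward.
wavelength_um = 0.55
for N in (1.0, 2.0, 5.6, 8.0):
    print(f"f/{N}: Airy diameter = {2.44 * wavelength_um * N:.1f} um")
# f/1.0: 1.3 um, f/2.0: 2.7 um, f/5.6: 7.5 um, f/8.0: 10.7 um
```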
 
As for the relationship between the print and the image supplied by the camera system: Jaap is correct in his position. Let's ignore the different print mediums, as this will just add confusion.
1. Image from the camera is in focus > it will be possible to print a picture that is in focus.
2. Image from the camera is not in focus > it will not be possible to print a picture that is in focus, unless some serious algorithms are applied to render a new image, and it could be argued that this is not the same image. Hence, bad image equals bad print.

What can't be said is:
3. An image that is in focus from the camera > guarantees the print will be in focus.

Therefore, there is a disconnect between the two issues, and you cannot use the print as the argument regarding the mechanical capabilities of the camera. You have to look at the image on the medium that is capturing it in the camera. With a digital camera this is the sensor, and with a film camera this is the film.

Note: The quality and type of sensor, the quality of the circuitry and its design, and the software do have an impact on the image stored on the system's storage media. But this of course has nothing to do with DOF.
 