Fotodiox Leica is the correct size

Sorry, you have it completely wrong. The thickness of the body is set by the fixed mount-to-sensor distance, which is determined by the lenses, plus the thickness of the sensor/motherboard/LCD stack. There is no technological way around that predetermined distance.

So why is the full frame Q thinner?

Oh yeah, software that wasn't around in 2006.
 
Roger.

You obviously have a handle on this. Please explain why the full-frame Leica Q, with built-in image stabilization, is thinner than the M.

Please, really. Try.
 
Because the lens-sensor assembly uses a shorter register distance. No way could the Q be designed to take an M lens. Full frame has nothing to do with this.

And how did Leica get it to work with a shorter register distance?

It's funny, you and Roger apparently live in a world without software.
Have you ever seen what an image from the Q looks like when the software correction is turned off? It has massive distortion.

Software. Don't fight it. Embrace it. Leica already has.
Let's see if they are willing to move on from the pudgy M240 and make something slim again.
 
By the optical design. Software has absolutely nothing to do with it. It is a physical distance that defines the sensor plane in relation to the mount. Distortion has nothing to do with it either.
 
Without the software corrections the optical design would be worthless.
Software has everything to do with it as it corrects the distortion inherent in the lens.
Why do you think Leica is correcting for it with software?
 
Errr... what has that to do with the sensor distance? How is software going to alter the 27.80 mm that every M lens needs to be in focus?
So unless you can come up with software that shrinks physical distances, your ideas make absolutely no sense. Wait, I have the solution: make sure the camera is moving at the speed of light...


Hybrid lens design is not about correcting a poor design. It is about integrating optical and digital corrections to get a better result. Take away the digital part and you have an incomplete lens, worse than it would be if it had been corrected optically only.
 
In an effort to lend clarity, see if I understand this. A lens designed for digital is different from classic lens designs in that all the light rays strike the sensor at a 90-degree angle. With classic lens design, only in the center does the light strike the film at 90 degrees; at the edges and corners it strikes at a much different angle, causing problems for modern digital sensors. Software can help, and so can micro lenses. But I may be wrong, as I just stick with film and my IIIc. Joe
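
A quick sketch to put rough numbers on that geometry, using the full-frame half-diagonal and some assumed exit-pupil distances (round numbers for illustration, not measurements of any real lens):

    import math
    # How obliquely does light hit the corner of a full-frame sensor?
    corner_radius_mm = 21.6  # half-diagonal of a 24x36 mm frame
    for exit_pupil_mm in (28, 60, 100):  # short rangefinder design vs. longer designs
        angle = math.degrees(math.atan(corner_radius_mm / exit_pupil_mm))
        print(f"exit pupil {exit_pupil_mm:3d} mm -> ~{angle:.1f} deg off axis")

The shorter the exit-pupil distance (typical of compact rangefinder lenses), the steeper the corner rays, which is exactly what micro lenses and software try to mitigate.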
 
I suppose one should begin with the definition of an "M camera". For me, it is a camera that can take M-mount lenses. Plain as that. Those lenses are designed for a certain flange focal distance (or register). You can certainly design a camera with a shorter (or longer) register (even without software), and you can certainly design a smaller interchangeable-lens camera (as Olympus did with the OM system), but you can hardly change the flange focal distance of an already designed lens system. I think that is one of the reasons Leica hasn't designed a smaller M camera.

Also, why fix something that isn't broken, hmm?

Besides:
The last time Leica tried a major change to the M body it almost ended in bankruptcy. I think that has left a very long-standing memory in that company, and not a good one...

I think Pioneer pretty much nailed it with that. Why risk an already successful design on something that only might be successful? Let's remember that we aren't talking about technical performance here, but aesthetics.

Just my lunchtime rambling, which could be absolutely wrong.

Cheers

Marcelo
 
Dear Marcelo,

Not just aesthetics, but as you and Pioneer point out, also commercial common sense.

Cheers,

R.
 
😱 What about pre-software lens design?

An interesting question. Some aberrations, such as distortion, are more easily corrected in software, so the designer is free to correct aberrations like chromatic aberration, astigmatism and coma better in the optical design by allowing more distortion, which is then compensated perfectly in software. Spherical aberration, which is aperture-dependent, can also be handled more effectively. So the net result is a better-corrected lens, which the Q lens is; it is amongst the absolute top in image quality.
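
For the curious, a minimal sketch of what the software half of that bargain looks like: a simple radial-polynomial distortion correction applied by inverse mapping. Function names and coefficients here are mine, picked for illustration; they are nothing like Leica's actual profile.

    import numpy as np

    def distort_coords(x, y, k1, k2):
        # Map ideal (corrected) normalized coords to where the lens
        # actually put them: r_d = r * (1 + k1*r^2 + k2*r^4).
        r2 = x * x + y * y
        scale = 1 + k1 * r2 + k2 * r2 * r2
        return x * scale, y * scale

    def correct_image(img, k1=-0.15, k2=0.02):
        # Build the corrected image by sampling the distorted source;
        # nearest-neighbour sampling keeps the sketch short.
        h, w = img.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        xn, yn = (xs - w / 2) / (w / 2), (ys - h / 2) / (h / 2)
        xd, yd = distort_coords(xn, yn, k1, k2)
        sx = np.clip((xd * w / 2 + w / 2).astype(int), 0, w - 1)
        sy = np.clip((yd * h / 2 + h / 2).astype(int), 0, h - 1)
        return img[sy, sx]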
 
Basically you are right, Joe, but the M240 and SL sensors have pretty much solved the problem.
 
Yeah, I get your point, jaap, but worthless? That's a bit of a stretch there.
 
Of course not worthless; Leica could have designed a purely optically corrected lens, and it would have been pretty good. But at the same time bigger, heavier, not quite as good and more expensive. That said, the present Q lens without its "digital lens element" is not bad at all, just quite strongly distorted and probably with marginally more CA.
All lens/camera makers are introducing hybrid designs wherever possible for these reasons, although Leica appears to have a lead. The SL lenses are made without real size restriction, but are still hybrid, and are absolutely superb. The new Summilux 50 SL is said to be stunning.
 
It's not just software (and I'm a software developer). Enter the most unappreciated piece of hardware in any digital camera: the image processor. As I wrote in one of the early Leica Q threads:

Let's assume the distortion correction is done for every frame, both when emptying the buffer and when showing it at > 40 Hz on the LCD. Say the camera can process 5 frames per second (just picking a realistic number).

For a color 24-Mpixel image, this means the camera needs to process 360 million 14-bit values per second. Say you pick a modern Intel CPU to do that in software (at 3.4 GHz); that CPU would have fewer than 10 cycles to manipulate a grey pixel value. Even with the simplest distortion algorithm, that would not be enough to do it "just in software". Note also that the CPU has to do other things in parallel, like DNG encryption, etc.
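
Spelled out, with the same assumed numbers (24 Mpixels, 3 color values per pixel, 5 fps, a 3.4 GHz CPU):

    # Back-of-envelope check of the numbers above.
    values_per_second = 24e6 * 3 * 5             # pixels * colors * fps = 3.6e8
    cycles_per_value = 3.4e9 / values_per_second
    print(values_per_second, cycles_per_value)   # 360 million values/s, ~9.4 cycles each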

Most of the distortion correction must be done in hardware. Enter the camera's image processor: I read that the Maestro II was developed together with Fujitsu. The latest Fujitsu image processor specs I could find included this:

Fujitsu MB86S22AA:
  • CPU: ARM Cortex-A5MP (see also http://www.arm.com/products/processors/cortex-a/cortex-a5.php)
  • Maximum image processing speed equivalent to 12 fps at 24 Mpixels
  • Hardware assist capable of feature extraction
  • Improved lens correction, lens distortion correction, lens resolution correction
  • Accelerated multi-frame operation
  • High speed and intelligent bus arbitration

I assume the Maestro II has similar technology. Maybe the Fuji FinePix cameras, too? (The X100 also does in-camera distortion correction.)

So, Leica picked an algorithm that they could execute on the Maestro II at 360 million 14-bit pixel values per second. Then they optimized the 28/1.7 lens to be as well corrected as possible with that algorithm...

At 5 Mpixels and 8 bits per color, the iPhone 4 has a much easier time doing corrections...

Roland.
 
Yes. But the camera does not do the correction itself. It attaches the distortion correction algorithm as a sidecar that is written into the DNG. The corrections themselves are executed by the software that processes the image on the computer. The only correction the camera has to execute itself is on the much smaller embedded JPG thumbnail, which is used to feed the LCD.
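
A toy illustration of that division of labor, if it helps. The field names are invented; an actual DNG carries the recipe in its opcode lists (a WarpRectilinear opcode, for example), but the split is the same: the camera writes parameters, the raw converter does the math.

    # Camera side: write the recipe, not the corrected pixels.
    camera_metadata = {
        "correction_model": "radial_polynomial",
        "coefficients": (-0.15, 0.02),  # placeholder values, not a real profile
    }

    # Converter side: read the recipe and apply it to the raw data.
    def develop(metadata):
        k1, k2 = metadata["coefficients"]
        print(f"apply r' = r * (1 + {k1}*r^2 + {k2}*r^4) to every pixel")

    develop(camera_metadata)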
 
Yes. But the camera does not do the correction itself. It attaches the distortion correction algorithm in a sidecar file that is written into the DNG. The corrections themselves are executed by the software that processes the image in the computer. The only corrections the camera has to execute itself are on the much smaller embedded JPG thumbnail, which is used to feed the LCD.

Interesting. Don't you have the option to save only full-resolution, corrected JPG files? (I don't have a Q myself, only a 240.)
 
I could be off base here, but aren't the Q and RX1 series poor choices for comparison with the M? Yes, they are smaller, but as fixed-lens cameras those lenses/bodies were designed for that specific sensor. I would think, optically, you have a lot more play with lens design and size when designing a body and fixed lens around a sensor. Plus, if I recall, the RX1 lens protrudes far into the body, so the "size" is not what it seems. It doesn't make sense that "software" would make the register distance any different. If "software" were the solution, wouldn't it be possible to mount any lens on any camera and fix it later?
 