M8 lens coding needed for RAW?

The coded camera has 189 permutations to work from, as the frame selector tab has three positions.

Henning

Henning,

If only 6 bits are used in a binary code, the maximum number of permutations can only be 63 (2^6 gives 64 combinations, one of which presumably has to serve as "uncoded"). It's a beautifully simple way of allowing the camera to identify the lens. As you say, the frame selector adds 3 options, but this is not coding. It is operator input (presumably, Tri-Elmar excepted).

It does however suggest that the frame selector on the M8 is now used to convey information to the computer rather than being simply a mechanical device.

How does it cope with someone holding the frame selector on the wrong frame set when taking a shot?

Regards

Simon
 

It is not 'coding' if you mean the six binary dots, but it does add a couple of bits to the information the lens sends to the camera, so it is coding in that sense. The frame selector position is definitely used as digital input; otherwise, when using the Tri-Elmar, the EXIF data could not know the correct focal length. Yet there it is.

The extra 3 possibilities are used by the computer in the processing as well. I know of an instance where a lens (a 21 ASPH in this case) was shipped with the wrong mount; it does not do the cyan corner compensation correctly unless the lever is pushed to the correct position, thus overriding the tab on the mount.
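To make that concrete, here is a purely hypothetical sketch of how a camera could fold the frame-selector position into lens identification; the bit patterns, position names and table entries are all invented for illustration, not Leica's actual scheme:

```python
# Hypothetical illustration only: combining the 6-bit mount code with the
# frame-selector position to pick a lens identity and correction profile.
# Every code, position name and table entry here is invented, not Leica's real mapping.

LENS_TABLE = {
    # (6-bit code, frame selector position) -> (focal length written to EXIF, profile)
    (0b101001, "pos_A"): ("35mm", "profile_35"),       # ordinary lens: only the position
                                                        # its own mount selects is listed
    (0b011100, "pos_A"): ("16mm", "profile_wate_16"),  # a Tri-Elmar-style lens reports a
    (0b011100, "pos_B"): ("18mm", "profile_wate_18"),  # different focal length for each
    (0b011100, "pos_C"): ("21mm", "profile_wate_21"),  # selector position
}

def identify_lens(code: int, frame_position: str):
    """Return (focal length for EXIF, correction profile), or None if not recognised."""
    return LENS_TABLE.get((code, frame_position))

print(identify_lens(0b011100, "pos_C"))  # ('21mm', 'profile_wate_21')
print(identify_lens(0b101001, "pos_B"))  # None -> selector held on the 'wrong' frames
```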

Henning
 
I am trying to answer Keith's question with a logical discussion and open, reasoned thought.

I have nothing against coding and as I have said I really must get at least one of my lenses coded to see the result myself.

I object to the suggestion that this is backgroundless oversimplification. How patronising is that, not only to me but to the silent majority reading this post and hoping to find reasoned and explained logic? I have tried to spell out my thought processes so people can make up their own minds.

Jaap,

The coding is 6-bit binary, therefore 63 permutations. Are you calling the lens detection options "on", "off" and "on + UV/IR" three groups of detection, resulting in 189?

That's pushing a point, as the camera still only receives 63 possible permutations from the lens. The other options are input by the operator.


I fully understand RAW conversion, and I have already conceded that the camera does not attribute a white balance to the data. The fact remains that the camera registers the WB I set as my preference and shows me that option when I open the image on screen.

If you are saying the coding allows the camera to "correct" the RAW data before writing it to the SD card, then it is surely no longer RAW data.

The fact remains that the UV/IR filter corrects an issue which software cannot. Are we agreed on that?

If so, the further correction is done in software, and I don't see why people shouldn't explore whether Photoshop etc. can do a better job than the camera. There's no Leica bashing going on, just a search for an answer to the question posted.

I therefore think Keith's question was valid, and whilst we may ultimately disagree on which is best (I currently have not done enough research to fully form my own opinion on this), I don't think it is helpful to dismiss such questions without letting others form their own opinions.

The author of CornerFix states that he developed it as an alternative to the one-size-fits-all approach of the camera, and if he can do it, why can't others?

Regards

SR


63 from the six-bit coding, with the extra input from the frame selection set by the mount - that is, not by the operator - makes 189 permutations. The lens will not be recognized with the frame selector in the wrong position. Yes - the sensor dump is modified by software before being written in the RAW format, the same as is done by, for instance, Canon and Nikon for their noise reduction at high ISO.
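To lay the arithmetic out explicitly (just the numbers from this thread, nothing camera-specific):

```python
# Quick check of the figures being thrown around in this thread.
bits = 6
total_codes = 2 ** bits          # 64 possible six-bit patterns
usable_codes = total_codes - 1   # 63, if one pattern is reserved as "uncoded"
frame_positions = 3              # the frame selector tab has three positions

print(usable_codes)                    # 63
print(usable_codes * frame_positions)  # 189
```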
 
Personally, I think the coding requirement is incredibly overblown unless you really are such an expert photographer that you never need to post-process because your exposure is always bang on. Photoshop would never have been invented or accepted if digital post-processing wasn't the greatest improvement in photography since its inception.

Flame me if you like, but I have gotten excellent results with my CV12 on my M8, a lens which defies any coding as there is no near equivalent. I don't even bother with a UV/IR filter. I accept that I will have to post-process to compensate for vignetting and cyan drift (if you can even see it in such a wide-angle lens).

I will admit to some improvement in performance when using coded over un-coded lenses but it would not (and has not) put me off using uncoded lenses on the M8. Pictures attached are an example of the CV12 uncoded and a 1958 Elmar 50/2.8 uncoded but with a UV/IR filter attached.


highgate-literary-and-scientific-institution.jpg

duckings-01.jpg
 
Once you've run a RAW file through the raw converter, your chances of performing UV/IR vignette and color corrections in Photoshop are pretty limited, since they all apply some form of adjustment curve to the original image. That's why Leica performs the filter adjustment in-camera using lens coding, and why CornerFix works with the raw DNG files.
 

Or you could STOP "raw converting" and just edit the raw files themselves using the myriad of tools out there built to do so, e.g., Aperture, Camera Raw, Lightroom, Capture 1, etc...
 

I'm afraid you don't understand the process here ... all of the applications you mentioned convert the raw file and apply colour mapping and standard tone curves, plus your own image adjustments, to create a master working image. Even so-called non-destructive editing apps do this - every time you open or recreate the file.

You need to apply the UV/IR correction to the true RAW file, not the converted file - the order of operations is very important. This would require the ability to plug the correction into the RAW conversion workflow BEFORE any other colour, vignette, white balance, contrast or exposure changes are carried out. None of these apps support that ability at this time, so you're left in a post-processing state.
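For anyone wondering why the order matters so much, here is a toy numeric sketch, assuming the raw data is linear, the vignette/cyan fix is a simple per-pixel gain, and the converter's tone curve is approximated by a gamma; the numbers are arbitrary and this is not anyone's actual processing maths:

```python
# Toy demonstration: a per-pixel gain correction gives different results
# depending on whether it is applied before or after a non-linear tone curve.
# The gamma value and gain are arbitrary, chosen only for illustration.

raw_value = 0.20      # linear sensor value at a corner pixel (normalised 0..1)
corner_gain = 1.50    # gain needed to undo vignetting / cyan drift at that pixel
gamma = 1.0 / 2.2     # stand-in for the converter's tone curve

# Correct first, then apply the tone curve (what in-camera coding or a
# raw-level tool like CornerFix effectively allows, on the linear DNG data):
before_curve = (raw_value * corner_gain) ** gamma

# Apply the tone curve first, then try to correct in Photoshop afterwards:
after_curve = (raw_value ** gamma) * corner_gain

print(round(before_curve, 3))  # ~0.578
print(round(after_curve, 3))   # ~0.722 -> not the same pixel value
```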
 
Or you could STOP "raw converting" and just edit the raw files themselves using the myriad of tools out there built to do so, e.g., Aperture, Camera Raw, Lightroom, Capture 1, etc...

That is like saying: "Let's not develop the negative and just enlarge the film as it comes out of the camera" .....

The RAW process takes the sensor dump and processes it into data that can be used by Photoshop-like programs to edit the photograph. Part of it is done in-camera and produces a RAW file such as DNG.
The DNG file (or CRW, NEF, etc.) is the intermediate format used to transport the already part-processed data from the camera into your computer in its most malleable form. In the computer, a RAW converter like ACR, C1, Bibble, etc. processes these data into a file format your programs can handle, usually TIFF. The user can fully influence this process.
Then you can "darkroom" your photograph in Photoshop or any of the many other programs and save it as a JPEG for display or printing.
The JPEG output you can choose on your camera leaves the full data processing in the camera according to certain presets and produces a compressed and less editable file.
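As a rough sketch of that order of stages (the function names below are invented placeholders, not a real camera or library API):

```python
# Invented placeholder pipeline illustrating the order of stages described
# above; none of these names correspond to a real camera or library API.

def in_camera_processing(sensor_dump):
    # Packages the sensor dump (plus any in-camera corrections) into a RAW file.
    return {"data": sensor_dump, "format": "DNG"}

def raw_converter(raw_file, user_settings):
    # Demosaic, white balance, tone curve -> a rendered working file.
    return {"data": raw_file["data"], "format": "TIFF", "settings": user_settings}

def darkroom_edit(working_file):
    # Photoshop-style edits on the already-rendered image.
    working_file["edited"] = True
    return working_file

def export_jpeg(working_file):
    # Compressed, less editable output for display or printing.
    return {**working_file, "format": "JPEG"}

raw_file = in_camera_processing("sensor dump")
tiff = raw_converter(raw_file, user_settings={"white_balance": "as shot"})
jpeg = export_jpeg(darkroom_edit(tiff))
print(jpeg["format"])  # JPEG
```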
 
Every once in a while I get close to getting an M8, and this week I have had my eye on two (and just missed the one in Newcastle, NSW), so I am feeling closer than I have in the past...
Therefore this thread is very helpful for me.

How does the camera know what vignetting correction to use?
When I look, for example, at the 35mm Summilux-M 1.4 ASPH:
(Erwin Puts' PDF download "Secrets and soul of M lenses", p. 38)
[vignetting chart attached below as Picture 12.jpg]

I see about 12.5% at the edge wide open at f/1.4, and it goes up to ~50% at f/8 (yeah, I know that 21mm is at the edge of film), but this is two stops of difference in (un)vignetting... So it seems like the camera would need to know what f-number it started out with...
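The "two stops" figure is just the log ratio of those two chart readings:

```python
import math

# Relative corner illumination read off the chart (approximate values from the post).
wide_open = 0.125    # ~12.5% at f/1.4
stopped_down = 0.50  # ~50% at f/8

stops = math.log2(stopped_down / wide_open)
print(stops)  # 2.0 -> the correction needed differs by about two stops
```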

Also - where does one get info on which codes correspond to which lenses?


(If anyone is ready to move on their M8 now - PM me)
 

The "Cyan vignetting" is quite a bit more stable with aperture - essentially, it is caused by the angle of light, rather than the partly mechanical reasons for luma vignetting. Sean Reid did quite a bit of testing on this - you can find the results on his site.

The M8 almost fully corrects for chroma vignetting (cyan drift), but doesn't fully correct luma vignetting exactly for this reason.
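For the curious, a very rough sketch of what a chroma-vignetting (cyan drift) correction amounts to, assuming a simple radial per-channel gain model; the falloff shape and coefficients are invented, and neither Leica's firmware nor CornerFix necessarily works this way:

```python
import numpy as np

# Invented illustration: cyan drift means the red channel falls off towards the
# corners faster than green/blue, so the fix is a red gain that grows with radius.
# The falloff model and coefficient below are made up for the example.

def cyan_drift_correction(raw_rgb, red_falloff=0.35):
    """Apply a radial gain to the red channel of a linear HxWx3 image."""
    h, w, _ = raw_rgb.shape
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(x - w / 2, y - h / 2) / np.hypot(w / 2, h / 2)  # 0 at centre, 1 in corner
    red_gain = 1.0 + red_falloff * r**2          # boost red more towards the corners
    corrected = raw_rgb.copy()
    corrected[..., 0] *= red_gain
    return corrected

# Usage on a synthetic flat grey frame with simulated cyan corners:
flat = np.full((100, 150, 3), 0.5)
y, x = np.mgrid[0:100, 0:150]
r = np.hypot(x - 75, y - 50) / np.hypot(75, 50)
flat[..., 0] /= 1.0 + 0.35 * r**2                # simulate the red falloff
fixed = cyan_drift_correction(flat)
print(float(fixed[0, 0, 0]))                     # ~0.5 again in the corner
```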

Sandy
 
This weekend I took the 1000th photo with my M8. My Nokton 1.4/35 is not coded, but I couldn't find the cyan vignetting effect in any of my photos. Either I am not critical enough or 35mm is not wide enough?
 