35mm lenses: A statistical exercise using the M-Mount group

mrtoml

It has been raining here for weeks. My outdoor street photography has more or less ground to a halt until things improve. What to do with the time...?

I have been contemplating buying a new 35mm or 40mm lens to replace or add to my CV Skopar. I started looking at the Flickr M-Mount group to see whether it could help me make an informed decision and felt that it probably could, but there were far too many variables to be sure about anything. Anyway, since I had some time on my hands, I decided to do a trial statistical analysis of some of the lenses using the Flickr images, and I was surprised by some of the initial results. It is too long to post all the details here, so they are on my blog:

http://alt-toy-and-vintage-camera.blogspot.com/2007/07/rating-3540mm-rangefinder-lenses-trial.html

The upshot is that the M-Mount group is really useful and helped me decide, but there are some caveats. For instance, some lens pools in the group are dominated by a few photographers, which skews perceptions of image quality. Other issues still need to be resolved, such as film and developer effects. Anyhow, if you are interested, read the blog post; I would be pleased to hear reactions. I'm still deciding whether to spend much more time on this now.
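
To make the photographer-dominance point concrete, here is a minimal sketch of the kind of per-lens tally I mean. The lens tags, photographer names and ratings below are made-up placeholders, not my actual data; the full method is on the blog.

```python
# Minimal sketch of a per-lens tally of personal image ratings.
# All tags, names and scores here are hypothetical placeholders.
from collections import defaultdict
from statistics import mean, stdev

# (lens tag, photographer, my 1-10 rating of the image)
ratings = [
    ("summicron40", "alice", 8), ("summicron40", "bob", 7),
    ("summicron40", "alice", 9), ("cvskopar35", "carol", 6),
    ("cvskopar35", "dave", 7), ("cvskopar35", "carol", 5),
]

by_lens = defaultdict(list)
counts = defaultdict(lambda: defaultdict(int))
for lens, photographer, score in ratings:
    by_lens[lens].append(score)
    counts[lens][photographer] += 1

for lens, scores in by_lens.items():
    # Share of the pool coming from its most prolific photographer:
    # a high share means one person's style dominates that lens pool.
    share = max(counts[lens].values()) / len(scores)
    print(f"{lens}: mean={mean(scores):.2f} sd={stdev(scores):.2f} "
          f"n={len(scores)} top-photographer share={share:.0%}")
```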

Please don't flame me if your pet lens didn't get a high score using my method. It is based on my preferences and the images available at this time, so your mileage may vary. As the M-Mount group expands and more photographs get posted, things may also change. I found it really useful, so thanks to Honus, Alkis and the others for setting it all up.
 
mrtoml,
Very interesting. Your standard deviations are so high that I doubt there are any statistical differences between the lenses.
To reduce variability, what we really need are a group of photographers taking the same images with the same camera body and film (or sensor), all using the same set of lenses.
But it is difficult to arrange such a thing since very few people have several lenses of the same focal length.
Maybe this could be arranged at the next LHSA meeting or similar event.
Eric
 
Eric T said:
mrtoml,
Very interesting. Your standard deviations are so high that I doubt there are any statistical differences between the lenses.
Eric

Actually, the top three lenses by my scores are significantly higher than the rest (at the 1% level). The significance weakens for the Summilux ASPH when the more sophisticated model is used, but it is still there (now at the 5% level).
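
For anyone who wants to see the mechanics, the comparison boils down to a two-sample test on mean scores. Here is a sketch with simulated ratings (illustrative numbers only, not the data from the blog post) showing that a large standard deviation does not rule out significance once the pools are big enough:

```python
# Sketch of a two-sample comparison of lens score pools.
# The means, SDs and sample sizes are illustrative, not my blog data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lens_a = rng.normal(loc=7.0, scale=2.0, size=120)  # e.g. a top-3 lens
lens_b = rng.normal(loc=6.0, scale=2.0, size=120)  # e.g. the rest of the field

# Welch's t-test: does not assume the two pools have equal variance.
t, p = stats.ttest_ind(lens_a, lens_b, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.01 despite sd = 2 on a 10-point scale
```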
 
Eric T said:
mrtoml,
Very interesting. Your standard deviations are so high that I doubt there are any statistical differences between the lenses.
To reduce variability, what we really need are a group of photographers taking the same images with the same camera body and film (or sensor), all using the same set of lenses.
But it is difficult to arrange such a thing since very few people have several lenses of the same focal length.
Maybe this could be arranged at the next LHSA meeting or similar event.
Eric

When I do my lens testing, people send me their lenses so that I can test them. This way, I can standardize the testing for all lenses: the same camera, the same film type and, of course, the same photographer.
 
Mark,

Thanks for doing this. You obviously put a lot of time into your analysis. We set up the group with the hope that it would help people evaluate the performance of lenses they were interested in. Our only requirement is to use the proper M-Mount Group tag to identify the lens, so that the software will place the image in the proper data set.

The Group continues to grow, with nearly 350 contributing photographers and over 4,000 images. You have also pointed out that the database is far from perfect. Additional tags for film, developer, aperture, etc. would be helpful, but requiring them would lead to mass revolt. Alkis and I will try to think of ways to make the group more useful, based on your input.

Cheers,
 
Mark: I find your results very interesting and useful. As a statistics professor, I welcome correct applications of statistical methods in analyzing the results from experiments.

Cheers,

Raid
 
raid said:
When I do my lens testing, people send me their lenses so that I can test them. This way, I can standardize the testing for all lenses: the same camera, the same film type and, of course, the same photographer.

Thanks, Raid. I value the testing people like you do very much and know how much time and dedication it requires. Being somewhat relativistic, though, I like to gather various kinds of input before making a decision about a lens (especially when the lenses are as expensive as Leicas). I tend to go for a combination of approaches: the Mike Johnston style of looking at as many images made with the lens as I can, looking at MTF data, and looking at standardised tests like yours.

The approach I tried here was really to see whether the M-Mount group was providing useful data or whether, for whatever reasons, it was just random noise. I think I have demonstrated that it is far from random noise and that it will get more useful as more people join and upload images. And I wouldn't have thought of looking at the 40mm Summicron-C until I did this exploration, for example. Now I think it will be my next lens purchase.
 
Honus said:
Mark,

Thanks for doing this. You obviously put a lot of time into your analysis. We set up the group with the hope that it would help people evaluate the performance of lenses they were interested in. Our only requirement is to use the proper M-Mount Group tag to identify the lens, so that the software will place the image in the proper data set.

The Group continues to grow, with nearly 350 contributing photographers and over 4,000 images. You have also pointed out that the database is far from perfect. Additional tags for film, developer, aperture, etc. would be helpful, but requiring them would lead to mass revolt. Alkis and I will try to think of ways to make the group more useful, based on your input.

Cheers,

Thanks, Honus. I understand that you can't require people to add loads of tags. Most people do tag the film used, actually, but not the developer. I think I will next explore whether the film/developer variables actually make a difference. My impression from just eyeballing the data suggests that they will. If they don't, then it doesn't matter so much.
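
In case anyone wants to follow along, here is a rough sketch of how that check could look, fitting the scores against lens and film together. The data frame, tags and scores are made-up, and I'm assuming pandas and statsmodels are available:

```python
# Sketch: does film choice, rather than the lens, drive the scores?
# The tags and scores below are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "score": [8, 7, 9, 6, 7, 5, 8, 6],
    "lens":  ["cron40", "cron40", "cron40", "skopar35",
              "skopar35", "skopar35", "cron40", "skopar35"],
    "film":  ["tri-x", "hp5", "tri-x", "hp5",
              "tri-x", "hp5", "hp5", "tri-x"],
})

# Fit score ~ lens + film: if the film coefficients come out
# significant, film/developer effects are confounding the lens ranking.
model = smf.ols("score ~ C(lens) + C(film)", data=df).fit()
print(model.summary())
```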

Cheers,
Mark
 
mrtoml said:
Thanks, Raid. I value the testing people like you do very much and know how much time and dedication it requires. Being somewhat relativistic, though, I like to gather various kinds of input before making a decision about a lens (especially when the lenses are as expensive as Leicas). I tend to go for a combination of approaches: the Mike Johnston style of looking at as many images made with the lens as I can, looking at MTF data, and looking at standardised tests like yours.

The approach I tried here was really to see whether the M-Mount group was providing useful data or whether, for whatever reasons, it was just random noise. I think I have demonstrated that it is far from random noise and that it will get more useful as more people join and upload images. And I wouldn't have thought of looking at the 40mm Summicron-C until I did this exploration, for example. Now I think it will be my next lens purchase.

Hello Mark,

I own a Summicron-C, and the reviews of its performance are confirmed by my experience with this wonderful lens. It is small, sharp, and has great contrast for portraits. It also has a low price.

Your approach to lens testing is like collecting data through a survey, and this is fine. The more data you get, the closer you get to the "truth".
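
To put a number on that: the standard error of a mean rating shrinks as 1/sqrt(n), so noisy individual ratings still average out as the pools grow. A quick simulated illustration (the mean and SD here are arbitrary choices for the demonstration):

```python
# Illustration: the standard error of a mean falls as 1/sqrt(n).
# The true mean and SD are arbitrary values for the demonstration.
import numpy as np

rng = np.random.default_rng(1)
true_mean, sd = 7.0, 2.0

for n in (10, 100, 1000):
    sample = rng.normal(true_mean, sd, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)
    print(f"n={n:4d}: mean={sample.mean():.2f} +/- {se:.2f}")
```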

Raid
 
Interesting.

I keep meaning to stick images in the M-Mount group, but I never get around to tagging them. I hate adding tags after the fact. I know why they're necessary though.
 