swj617 -
This is what you wrote: "Simply because you consider film loving as your religion"
Don't ask me to believe you weren't dismissing my opinions as the product of some irrational belief system. You were very clear in stating your assumptions.
You are right about grain too, apparently: it should not be in film or digital (there it's called noise), and if you do not see it through the viewfinder then you should not see it in the image later.
This is quite a strange philosophy. If I am shooting black and white, I do not expect the finder to show me the world in black and white.
I use rangefinders. I do not expect my finder to show me what the film will capture. I use the finder for focus and composition, not as a preview of what is being captured.
I shoot film. Grain in film is not noise; it is the very structure that forms the image, analogous to pixels. Of course you are going to see it to varying degrees.
Digital noise, however, is an artifact of the technology. It isn't a result of capturing an image; it is the result of capturing an image with an imperfect electronic system. It's always present, but at lower ISO settings the signal generated by light hitting the sensor is far higher than the level of noise.
There is a fundamental difference between high ISO film and a high ISO setting on a camera. The film is actually more sensitive to very small differences in levels of light, while the sensor's sensitivity to very small differences in levels of light does not change. It needs to amplify the signal to a great degree to make those differences visible. And in low levels of light, the difference between light hitting the sensor and the base noise level is almost nil. That's why high ISO digital captures are full of noise. To improve high ISO performance, you'd have to either greatly decrease the base noise level or greatly increase the sensor's sensitivity to very small differences in received light *without* increasing the base noise level.
When I look at sample high ISO shots from the new crop of digital cameras, I'm seeing an increase in amplification without a corresponding decrease in base noise levels or any change in the sensor's ability to distinguish between very low levels of light. Yes, I can now set the ISO to an insanely high number, but all I've done is turn the amplifier up higher, greatly increasing the amount of noise I see in the image. Yeah, there's an image, but it has more visible noise than at a lower ISO setting. You didn't have that ISO option before because the amount of noise it generated was beyond tolerance, which made the setting a non-feature.
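To put a number on what the amplifier is doing, here is a toy back-of-the-envelope model in Python (the electron counts and gain are made-up illustrative figures, not measurements of any real sensor): when the noise originates before the gain stage, raising the ISO multiplies signal and noise together, and the signal-to-noise ratio doesn't improve at all.

    # Toy model: ISO as analog gain applied after light becomes charge.
    # All numbers are illustrative assumptions, not real sensor data.

    def snr(signal_electrons, noise_electrons, gain):
        """Signal-to-noise ratio after amplification.

        If the noise originates before the gain stage, amplification
        scales signal and noise equally and the ratio is unchanged.
        """
        return (signal_electrons * gain) / (noise_electrons * gain)

    bright_scene = snr(signal_electrons=2000, noise_electrons=10, gain=1)   # base ISO
    dim_scene    = snr(signal_electrons=20,   noise_electrons=10, gain=32)  # "high ISO"

    print(f"bright scene SNR: {bright_scene:.1f}")  # 200.0 -> noise invisible
    print(f"dim scene SNR:    {dim_scene:.1f}")     # 2.0   -> noise dominates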
***
You say "A digital P&S with a small sensor is now rivaling 35mm film." No, it's not. It "rivals" 110 film.
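Just to put the formats in proportion, the arithmetic is simple (frame sizes are the nominal ones; the small-sensor figure is an assumed typical compact sensor, not any specific model):

    # Nominal capture areas, in millimetres. The 1/2.5" figure is an
    # assumed typical small point-and-shoot sensor, not a specific camera.
    formats = {
        "35mm film (36 x 24)":           36.0 * 24.0,
        "110 film (17 x 13)":            17.0 * 13.0,
        '1/2.5" P&S sensor (5.8 x 4.3)': 5.8 * 4.3,
    }

    for name, area in formats.items():
        print(f"{name:32s} {area:7.1f} mm^2")

    # 35mm film:  864 mm^2
    # 110 film:   221 mm^2  (~4x smaller than 35mm)
    # small P&S:  ~25 mm^2  (~35x smaller than 35mm, ~9x smaller than 110)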
You can put a billion pixels in a square centimeter, but it won't increase the resolution of the lens. People don't determine the resolution of a film system by counting grains of silver; why would they measure the resolution of a digital system by counting sensor pixels?
There is a difference between the number of sensor sites and the resolution of the system.
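One rough rule of thumb photographers have long used for lens-plus-medium resolution is that the reciprocals add, so the system can never out-resolve the lens no matter how fine the recording medium gets. A quick sketch (the lens figure is an assumption for illustration only):

    # Rule-of-thumb combination of lens and recording-medium resolution:
    # 1/R_system = 1/R_lens + 1/R_medium, all in line pairs per mm.
    # The 60 lp/mm lens figure is an assumed value, not a measurement.

    def system_resolution(lens_lpmm, medium_lpmm):
        return 1.0 / (1.0 / lens_lpmm + 1.0 / medium_lpmm)

    lens = 60.0  # assumed lens resolving power, lp/mm

    for medium in (40.0, 80.0, 160.0, 320.0):  # ever finer pixel pitch
        print(f"medium {medium:5.0f} lp/mm -> system {system_resolution(lens, medium):5.1f} lp/mm")

    # The system figure creeps toward the lens's 60 lp/mm but never exceeds it:
    # piling on pixels past a certain point buys very little.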
And you *cannot* judge the resolution of 35mm film by scanning a negative. If you did that, you'd only be comparing the resolution of your scanner to that of your digital camera.
If you really want to compare actual system resolution in real-world application, you need to make a high quality print of the same size from each system, a digital print and a film wet print, and compare the prints. You could scan both prints on the same scanner at the same settings for internet comparison's sake, if you wished. But you can't take a first-generation digital image and a second-generation scan of a film negative and use those for a valid comparison.
My experience has a lot to do with this subject. I am informed on technical topics. I have worked with digital images and am very familiar with the difference between a diagonal line drawn in pencil and a line that looks diagonal drawn on a screen. This has been a huge topic for decades as computer scientists and professionals strive to improve the mimicking of reality in digital form. And the differences between reality and digital mimicry are too obvious for anyone with knowledge of the systems to pretend there are none. There certainly is no motivation to do so.
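The pencil-versus-screen difference is trivial to demonstrate: sample a true diagonal onto a coarse grid and the continuous line turns into a staircase. A minimal sketch of that sampling effect:

    # Rasterize an ideal diagonal line onto a small grid: the continuous
    # line becomes a staircase of discrete cells, which is the aliasing
    # a pencil line on paper never has.
    width, height = 16, 6
    slope = height / width

    for row in range(height - 1, -1, -1):
        line = ""
        for col in range(width):
            line += "#" if int(col * slope) == row else "."
        print(line)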
I never said there have been no significant advances in digital imaging. But if you actually understood how digital imaging systems work, you wouldn't try to suggest they bypass physics.
The idea that my knowledge and experience are irrelevant is interesting. Why don't you just say you believe what you believe and facts are irrelevant?