A widely believed (and highly credible) theoretical figure is that you need at least 18 megapixels to give the same amount of information as a top-quality slide on slow film taken with a sharp lens mounted on a good camera on a tripod.
From which I might infer that since my rangefinder-camera photos are often made at high ISOs, at wider-than-optimum apertures, and almost never with a tripod, I would not get the full benefit of an 18-megapixel camera...?
Whew, that's a relief! -- not to mention saving me a considerable amount of money, since all the cameras I use are in the 6-to-12-megapixel range.
Seriously, for this theoretical figure to be "highly credible" with me, it will need a bit more theory. For example, it would need to address the question of viewing magnification: How, exactly, are we viewing the "top-quality slide" posited as a reference? How are we viewing the comparison digital image? And what, exactly, will we see? Let's see if we can work out the derivation of this theoretical number...
One figure for the smallest detail the eye can distinguish under average viewing conditions is 3.4 minutes of arc (a figure I take from Rudolf Kingslake's "Camera Optics" chapter in the 15th edition of the Morgan & Morgan Leica Manual). This represents about 1/1000 of the distance from the viewed object to the eye -- or 0.01 inch, assuming scrutiny from a close but plausible viewing distance of 10 inches. (My own eyes won't focus quite that close anymore without help, but let's stick with that number because it's easy to use.)
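For anyone who wants to check that conversion, here's a quick scratch-pad sketch; the Python is my own, not anything from the Manual:

    import math

    # 3.4 minutes of arc, viewed from 10 inches (the figures quoted above)
    acuity_arcmin = 3.4
    viewing_distance_in = 10.0

    detail_in = math.tan(math.radians(acuity_arcmin / 60.0)) * viewing_distance_in
    print(f"smallest resolvable detail: {detail_in:.4f} inch")                    # ~0.0099
    print(f"as a fraction of distance: 1/{viewing_distance_in / detail_in:.0f}")  # ~1/1011

Sure enough, it comes out a hair under 0.01 inch, or about 1/1000 of the viewing distance.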
In case people want to play along at home, let's further assume we're going to view our "top-quality" reference slide through a theoretically perfect 20x magnifier. Since a 24 x 36mm frame is roughly 1 x 1.5 inches, that gives it an apparent viewing size of 20 x 30 inches.
Since my eye can't distinguish details smaller than 0.01 inch, the most detail it's going to see in that image is the equivalent of (20/0.01) x (30/0.01), or 2000 x 3000 pixels... whaddaya know, the exact size of file my R-D 1 makes!
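Here's that arithmetic in sketch form, for anyone who'd like to try other magnifications or detail sizes (the 20 x 30-inch size and the 0.01-inch limit are the assumptions from above):

    # apparent image size through the 20x loupe, and the 0.01-inch detail limit
    image_w_in, image_h_in = 20.0, 30.0
    detail_in = 0.01

    px_w = image_w_in / detail_in    # 2000
    px_h = image_h_in / detail_in    # 3000
    print(f"{px_w:.0f} x {px_h:.0f} = {px_w * px_h / 1e6:.0f} megapixels")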
In other words, viewing an average continuous-tone subject under average viewing conditions, my eye would not be able to see any difference in detail between a 6-megapixel digital image and a 35mm slide seen through a 20x magnifier.
It's worth noting, though, that the chapter I quoted earlier states that under better-than-average subject and viewing conditions (say, a target of sharply ruled black-and-white lines under very strong light) the eye can do considerably better than 0.01 inch -- as much as 3 times better, or 1/300 inch, i.e. 300 points per inch (which happens to be why black-and-white laser printers are designed to image at 300 dpi).
Applying that 3x improvement to the 6-megapixel result derived earlier may well be the source of the widely believed 18-megapixel figure Roger referenced in his post.
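Spelled out, with a caveat of my own -- this treats the 3x acuity gain as a multiplier on the total pixel count; applied to each linear dimension instead, it would predict nine times the pixels:

    # 6 megapixels from the average-conditions derivation above
    average_mp = (20.0 / 0.01) * (30.0 / 0.01) / 1e6    # 6.0

    # best-case viewing: the eye resolves ~3x finer detail
    best_case_mp = average_mp * 3                       # -> 18.0
    print(f"{best_case_mp:.0f} megapixels")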
But it's important to keep in mind that this number relies on a seldom-achieved best case of subject conditions and camera technique... which, in turn, is why so many perfectly satisfactory images of more representative subjects are made at lower pixel counts.