Meanwhile: I think one should start with the simplifying assumption of unity quantum efficiency in the detector (sensor) and consider the noise problem from there.
I'm not sure we have any disagreement at all, actually.
Do we?
I think that each of these questions can be dealt with separately, yes. But setting aside QE doesn't mean that it's not central to the overall question of sensor performance.
1. Since the relative shot-noise contribution to the total device SNR decreases as the number of detected events increases (SNR scales as the square root of the number of detected photoelectrons), it does not really matter HOW you increase the number of events. It can be done by making a bigger photosite (larger sensor pitch, use of microlenses), or by increasing the QE of the photosensitive site itself.
2. The second major noise component – read noise – is a function of charge transfer efficiency (especially in a CCD), preamp quality, ADC circuitry, etc.
3. The third major noise component, dark current, is a lot less important at exposures shorter than, say, 1/50 s. The best way to get rid of dark current is to cool the sensor well below ambient temperature.
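To make the interplay of the three components concrete, here's a minimal numerical sketch. All parameter values are illustrative, not taken from any particular sensor:

```python
import math

def snr(signal_e, read_noise_e, dark_rate_e_per_s, exposure_s):
    """Single-pixel SNR combining the three noise sources.

    signal_e:          mean detected photoelectrons (i.e. after QE is applied)
    read_noise_e:      RMS read noise in electrons
    dark_rate_e_per_s: dark current in electrons per second
    exposure_s:        exposure time in seconds
    """
    shot_var = signal_e                        # Poisson statistics: variance = mean
    dark_var = dark_rate_e_per_s * exposure_s  # dark current is also Poisson
    read_var = read_noise_e ** 2               # fixed per-read contribution
    return signal_e / math.sqrt(shot_var + dark_var + read_var)

# Doubling the detected events (bigger pixel OR higher QE) helps identically:
print(snr(1000, 5, 10, 0.02))  # baseline
print(snr(2000, 5, 10, 0.02))  # 2x events: slightly better than sqrt(2) gain,
                               # since the fixed read-noise floor matters less
# At this short exposure the dark term (10 e-/s * 0.02 s = 0.2 e-) is negligible.
```

Note that the formula is agnostic about whether `signal_e` grew through pixel area or through QE, which is the point of item 1.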
In the example that I linked, all three components are dealt with. Shot noise is dealt with by using large pixels and having ~95% QE. Read noise is dealt with by the EM-CCD sensor design, which brings us to <1 e- per pixel per read. Dark current (and in these cameras also read noise and charge transfer losses) is minimized by running the camera at about -80 °C.
The examples from low-light scientific imaging are absolutely relevant to more conventional imaging cameras because they show what can be achieved and how that's done. Smaller pixels, as you indicate, are smaller photon buckets. That's precisely why it's essential to suppress read noise and maximize QE in these cameras.
Finally, for non-quantitative purposes such as pictorial imaging, it can (under at least some circumstances) be advantageous to have more, noisier pixels, and then to use binning or more sophisticated spatial correlation functions to deal with noise algorithmically. By doing all of these things at once, it's possible to, for example, make the sensor in the two-generation-old low-end consumer Pentax K-x perform at a level that rivals the Nikon D700, with its much larger pixels [link].
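The binning idea can be illustrated with a quick simulation. This is a hypothetical 2x2 software bin over simulated flat-field data; the mean signal and read noise figures are made up for illustration:

```python
import math
import random

random.seed(42)

def poisson(lam):
    """Knuth's Poisson sampler; adequate for small means."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

MEAN_E, READ_E, N = 25, 4.0, 10_000  # illustrative values only

def pixel():
    # One small pixel: Poisson shot noise plus Gaussian read noise.
    return poisson(MEAN_E) + random.gauss(0, READ_E)

singles = [pixel() for _ in range(N)]
binned = [sum(pixel() for _ in range(4)) for _ in range(N)]  # 2x2 software bin

def snr(samples):
    m = sum(samples) / len(samples)
    var = sum((x - m) ** 2 for x in samples) / (len(samples) - 1)
    return m / math.sqrt(var)

print(f"single-pixel SNR: {snr(singles):.2f}")
print(f"2x2-binned SNR:   {snr(binned):.2f}")  # roughly 2x higher
```

One caveat worth noting: software binning like this counts the read noise once per contributing pixel, whereas true on-chip CCD binning reads the summed charge once, so hardware binning (where available) does even better in the read-noise-limited regime.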
The post that I was responding to argued that bigger sensors don't benefit from back-thinning (back-side illumination). In my view, they clearly can benefit from this technology, and I explained why.