It's All About Surface Area (right now, anyway)
......
1. A BSI sensor does not collect "more data." It simply has higher quantum efficiency. Not the same thing. There are technical challenges in making large BSI sensors. For what it's worth, I'm about to buy a second monochrome BSI-EMCCD camera for my laboratory. The chip is 5mm x 5mm, 512x512 pixels (0.25 Mpix). That will be $30,000 for the bare sensor in a box with a Peltier device to cool it to -80° C and a minimal interface. >90% quantum efficiency and capable of operating in single photon counting mode. It will not collect "more data." What it will do is give high SNR under highly specialized conditions. The point being that different sensors are optimized for different applications.
.......
3. Cell phone sensors are currently at least two full technology generations ahead of FF camera sensors. Expect that gap to widen, not shrink.
Thanks for catching my imprecise use of the word data.
I should have said: as the sensor area increases the information content of the photograph increases.
Data = Signal + Noise
The signal is what we want. It represents a state of nature: the actual but unknown flow of electrons from each sensor site. The signal electron flow is proportional to the number of photons captured by the sensor.
The noise is responsible for the uncertainty in the data. There are two main sources: quantum (shot) noise and read noise (the noise floor). The first is an inherent consequence of the quantum nature of light; the second is generated by the camera's electronics.
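The two noise sources combine differently: shot noise grows with the square root of the signal, while read noise is a fixed floor. A minimal sketch (the electron counts are illustrative, not measurements from any particular camera):

```python
import math

def snr(signal_electrons, read_noise_electrons):
    """SNR at a sensor site: shot noise grows as the square root of
    the signal; read noise is a constant floor added in quadrature."""
    shot_noise_sq = signal_electrons            # variance of Poisson shot noise
    total_noise = math.sqrt(shot_noise_sq + read_noise_electrons**2)
    return signal_electrons / total_noise

# In deep shadows (100 e-) the read-noise floor still matters;
# near saturation (15000 e-) shot noise completely dominates.
print(round(snr(100, 3.0), 1))    # → 9.6
print(round(snr(15000, 3.0), 1))  # → 122.4
```

This is why read noise limits shadow quality while saturation capacity limits the best achievable SNR in the highlights.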
Quantum efficiency is important, but QE alone is meaningless. If the read noise is high, the benefit of a high QE is compromised. QE is also compromised if the electron flow is not proportional to the photons captured by the sensor. The storage capacity of a sensor site is called the saturation capacity; the lower the saturation capacity, the less important QE becomes.
Large sensor areas increase the signal level, and an increase in sensor area does not necessarily increase read noise. The data from an APS-C sensor has less uncertainty than data from an m4/3 sensor because there is more signal.
There is not more data, but there is more information in the data.
The site
http://www.sensorgen.info/
computes QE, minimum read noise, and maximum saturation capacity for dozens of digital cameras. I have reproduced a few of their results below.
Camera      QE    Read Noise (e-)   Saturation Capacity (e-)
Pen E-P3    41%   8.1               17791
Pen E-PL1   42%   11.2              17424
XZ-1        35%   2.6               6498
DMC-G1      33%   5.9               14346
DMC-GH1     50%   4.2               18662
DMC-G3      45%   2.9               13612
DMC-GH2     43%   3.0               11803
DMC-GX1     44%   2.7               12554
D700        38%   5.3               58111
D7000       48%   2.5               49058
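The table's read-noise and saturation-capacity columns are enough to compute two useful figures of merit: engineering dynamic range (saturation capacity over read noise, in stops) and the shot-noise-limited SNR at saturation. A sketch using three rows from the table:

```python
import math

# (read noise in e-, saturation capacity in e-), copied from the table above.
cameras = {
    "XZ-1":    (2.6, 6498),
    "DMC-GX1": (2.7, 12554),
    "D7000":   (2.5, 49058),
}

for name, (read_noise, sat_cap) in cameras.items():
    dr_stops = math.log2(sat_cap / read_noise)  # engineering dynamic range
    max_snr = math.sqrt(sat_cap)                # shot-noise-limited SNR at saturation
    print(f"{name}: DR ~ {dr_stops:.1f} stops, max SNR ~ {max_snr:.0f}")
```

Note how the D7000's larger saturation capacity, not just its low read noise, is what buys it both more dynamic range and a higher peak SNR.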
Because efficient exposure makes full use of the saturation capacity and has no effect on the read noise, the signal (electron flow) is larger for sensors with more area. The data from a well-designed APS-C camera therefore contains more information than the data from a well-designed m4/3 camera.
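To put a rough number on the area argument, here is a back-of-the-envelope comparison. The sensor dimensions are typical published figures, not specific to any camera model, and the SNR gain assumes the shot-noise-limited regime at equal exposure:

```python
import math

# Approximate active sensor areas in mm^2 (typical dimensions; exact sizes vary).
aps_c = 23.6 * 15.6   # typical APS-C
m43   = 17.3 * 13.0   # Micro Four Thirds

area_ratio = aps_c / m43
# At equal exposure (same f-stop, shutter speed, scene luminance), total
# captured photons scale with area, so when shot noise dominates the
# SNR advantage is the square root of the area ratio.
snr_gain = math.sqrt(area_ratio)
stops = math.log2(area_ratio)
print(f"area ratio {area_ratio:.2f}, SNR gain {snr_gain:.2f}x, {stops:.2f} stops")
```

The advantage is real but modest, roughly two-thirds of a stop, which is consistent with the point below that for many photographs the difference has negligible impact.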
I apologize for such a nerdy post. However, at this point in digital photography, sensor area is important. I don't understand why m4/3 proponents can't just admit they prefer the increase in convenience over the increase in information content. After all, for many photographs the reduction in information content has negligible impact on image quality. We all know that image quality is just one factor in the aesthetics of the final image.
Also, who cares about cell phone sensors? The OP suggested that people whose livelihood depends, to some extent, on image quality are missing the boat by ignoring m4/3 cameras. I disagreed, because convenience does not trump information content for many working photographers.