PKR- With the work we did, a pixel that received 0 photons would still sit at the noise floor. The A/D would report some value because of noise; we stored just that and left any filtering to software. These days, camera manufacturers are putting the signal processing into the firmware, and I suspect the noise floor gets subtracted out there.
Basic rule of thumb: the more bits of accuracy in an A/D converter, the more time required to do the conversion. Settling times grow with bit depth, and write times and memory requirements go up with longer samples. 16-bit seems to be the norm for RAW these days, but some cameras such as Pentax are equipped with 22-bit A/D converters. Those six extra bits allow 64-times-larger values. A non-linear preamp could "squeeze" that analog range into a 16-bit A/D, with a curve shaped to mimic film.
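To make the "64 times larger" figure concrete, here is the arithmetic: each extra bit doubles the representable range, so going from 16 to 22 bits multiplies it by 2^6.

```python
# Each additional A/D bit doubles the representable range.
extra_bits = 22 - 16
ratio = 2 ** extra_bits
print(ratio)  # 64: a 22-bit converter spans 64x the values of a 16-bit one
```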
My particular project squeezed an analog signal into 12 bits using a log preamp; without it, the signal would have required a 22-bit A/D. The latter would not work because the 22-bit converters available took too long to settle.
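A minimal sketch of that kind of log compression, mapping a linear 22-bit range onto 12-bit codes. The transfer function below is hypothetical: the real preamp was an analog circuit, and its exact curve shape and offsets are not given in the original, so this only illustrates the principle.

```python
import math

FULL_SCALE_22 = 2**22 - 1   # largest value a linear 22-bit A/D could report
FULL_SCALE_12 = 2**12 - 1   # output range of the 12-bit A/D

def log_compress(x):
    """Map a linear 22-bit value onto a 12-bit code via a log curve.

    Hypothetical stand-in for the analog log preamp's transfer function:
    log1p avoids log(0) at the noise floor, and the result is normalized
    so that full scale in maps to full scale out.
    """
    return round(FULL_SCALE_12 * math.log1p(x) / math.log1p(FULL_SCALE_22))

print(log_compress(0))              # 0    (noise floor stays at code 0)
print(log_compress(FULL_SCALE_22))  # 4095 (full scale maps to full scale)
```

The practical effect is the same one film gives you: fine resolution in the shadows, coarser steps in the highlights, all within a word length the converter can settle in time.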