Let's assume the distortion correction is done for every frame, both when emptying the buffer and when showing it at > 40 Hz on the LCD. Say the camera can process 5 frames per second (just picking a realistic number).
For a color 24 Mpixel image, this means the camera needs to process 360 million 14-bit values per second (24 Mpixel x 3 color values x 5 fps). Say you pick a modern Intel CPU to do that in software (at 3.4 GHz); that CPU would have fewer than 10 clock cycles per grey value. Even with the simplest distortion algorithm, that would not be enough "just to do it in software". Also note that the CPU has to do other things in parallel, like DNG compression, etc.
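The arithmetic behind those numbers can be sketched as follows (the figures are the post's assumptions: a 24 Mpixel sensor, 3 color values per pixel, 5 fps, and a 3.4 GHz CPU):

```python
# Back-of-the-envelope throughput estimate using the numbers assumed above.
MEGAPIXELS = 24e6        # sensor resolution in pixels
VALUES_PER_PIXEL = 3     # color values per pixel (assumed)
FPS = 5                  # frames processed per second
CPU_HZ = 3.4e9           # clock rate of a modern desktop CPU

values_per_second = MEGAPIXELS * VALUES_PER_PIXEL * FPS   # 360 million/s
cycles_per_value = CPU_HZ / values_per_second             # under 10 cycles

print(f"{values_per_second / 1e6:.0f} million values/s")
print(f"{cycles_per_value:.1f} CPU cycles per value")
```

Fewer than 10 cycles is not even enough for one cached memory access per value, let alone an interpolated coordinate remap.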
Most of the distortion correction must be done in hardware. Enter the camera's image processor: I have read that the Maestro II was developed together with Fujitsu. The latest Fujitsu image processor specs I could find include this:
Fujitsu MB86S22AA:
- CPU: ARM Cortex-A5MP (see also http://www.arm.com/products/processors/cortex-a/cortex-a5.php)
- Maximum image processing speed equivalent to 12 fps at 24 Mpixels
- Hardware assist capable of feature extraction
- Improved lens correction, lens distortion correction, lens resolution correction
- Accelerated multi-frame operation
- High speed and intelligent bus arbitration
I assume the Maestro II uses similar technology. Maybe the Fuji FinePix cameras do, too? (The X100 also does in-camera distortion correction.)
So, Leica picked an algorithm that the Maestro II can execute at a rate of 360 million 14-bit grey values per second. Then they optimized the 28/1.7 lens to be as well corrected as possible with that algorithm....
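To make concrete what "a distortion algorithm" means here, below is a minimal sketch of a lens-distortion correction using a single radial term of the common Brown-Conrady model. This is purely illustrative: it is not Leica's or Fujitsu's actual algorithm, and the coefficient `k1` is a made-up parameter, not a real lens profile.

```python
def undistort_point(x, y, k1, cx, cy):
    """Map an output pixel (x, y) to its source position using one radial
    term (k1) of the Brown-Conrady model. (cx, cy) is the optical center.
    Illustrative only -- not the camera's real correction."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2          # radial scaling grows toward the corners
    return cx + dx * scale, cy + dy * scale

def correct_image(img, k1):
    """Inverse-map each output pixel to a source pixel (nearest neighbor).
    Even this trivial version costs several multiplies, adds and a bounds
    check per pixel -- which shows why ~10 CPU cycles per value is tight."""
    h, w = len(img), len(img[0])
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx, sy = undistort_point(x, y, k1, cx, cy)
            ix, iy = int(round(sx)), int(round(sy))
            if 0 <= ix < w and 0 <= iy < h:
                out[y][x] = img[iy][ix]
    return out
```

A real pipeline would use per-channel lookup tables and bilinear or bicubic interpolation rather than nearest neighbor, which is exactly the kind of fixed-function work a hardware "lens distortion correction" block handles at full frame rate.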