> APS-C has already pretty much hit its limits, what you buy now will not be significantly outdated in terms of sensor specs any more.
I respectfully disagree. APS-C (and 4/3) sensors are gradually approaching real physical limits, but they have not yet hit them.
First, current sensors are not close to oversampling a lens that is diffraction-limited at f/2.
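For a rough sense of the numbers: the Airy-disk diameter for an f/2 lens in green light is only about 2.7 µm, so truly oversampling it calls for pixels well under 2 µm, which is below typical current APS-C pitches. A quick back-of-the-envelope sketch (the 550 nm wavelength and the 4 µm "current pitch" are just illustrative assumptions):

```python
# Rough check: how small would pixels need to be to oversample an f/2
# diffraction-limited lens? Assumes 550 nm green light; the 4 um
# "current pitch" below is just a representative value, not a specific camera.

wavelength_mm = 550e-6   # 550 nm expressed in mm
f_number = 2.0

# Airy disk diameter (first minimum): d = 2.44 * lambda * N
airy_diameter_mm = 2.44 * wavelength_mm * f_number
print(f"Airy disk diameter at f/2: {airy_diameter_mm * 1000:.2f} um")   # ~2.7 um

# Nyquist wants at least ~2 samples across that diameter,
# i.e. a pitch of roughly half the Airy diameter.
nyquist_pitch_um = airy_diameter_mm * 1000 / 2
print(f"Pitch needed to barely Nyquist-sample it: {nyquist_pitch_um:.2f} um")  # ~1.3 um

current_pitch_um = 4.0   # assumed representative current pitch
print(f"Assumed current pitch: {current_pitch_um} um -> "
      f"{current_pitch_um / nyquist_pitch_um:.1f}x too coarse")
```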
Second, the calculations that we're all talking about are
for a monochrome sensor! Remember that the specified pixel pitches for real digital cameras are
not for monochrome sensors –
they are for Bayer RGB arrays.
On a Bayer array, the
real pitch for red and blue light is 2x the specified pitch, and for green light it's roughly 1.4x (the square root of 2) the specified pitch. The "pixels" generated by a RAW converter are demosaiced interpolations!
In addition, the specified pitch is only correct along the horizontal or vertical axes. Along the diagonal of a rectangular array, the pitch is another 1.4x (again, the square root of 2) bigger than the specified pitch.
Thus the specified pitch of a real, practical RGB color sensor considerably overstates its actual spatial resolution. Real resolution (depending on the color of the incident light, the orientation of detail relative to the sensor axes, the presence of antialiasing filters, and other factors) is always much worse than the pixel pitch would suggest.
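As a concrete illustration, here's a tiny sketch of the effective per-channel sampling pitch on a Bayer array; the 4 µm specified pitch is just an assumed example value, not any particular camera:

```python
import math

# Effective sampling pitch per color channel on a Bayer array, relative to
# the specified (monochrome-equivalent) pitch. The 4 um figure is only an
# illustrative assumption.
specified_pitch_um = 4.0

red_blue_pitch = 2.0 * specified_pitch_um        # R and B sites sit on every other row AND column
green_pitch = math.sqrt(2) * specified_pitch_um  # G sites form a diagonal (quincunx) lattice
diagonal_factor = math.sqrt(2)                   # sampling along the array diagonal is sqrt(2) coarser still

print(f"Specified pitch:             {specified_pitch_um:.1f} um")
print(f"Effective pitch, red/blue:   {red_blue_pitch:.1f} um")
print(f"Effective pitch, green:      {green_pitch:.1f} um")
print(f"Red/blue along the diagonal: {red_blue_pitch * diagonal_factor:.1f} um")
```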
Third, there can be significant technical advantages to oversampling, not least of which is that you no longer need optical or digital antialiasing filters to suppress aliasing. There's also the fact that Nyquist-rate sampling assumes a noise-free signal and doesn't work well when the signal is noisy. Spatial averaging of an oversampled signal can, depending on the noise characteristics, be a good way to compensate for this.
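Here's a toy numerical illustration of that last point: oversample a signal 4x, add Gaussian read noise, and bin back down; the averaging inside each bin buys back roughly a factor of sqrt(4) = 2 in noise. All the numbers are arbitrary; this shows the principle, not a model of a real sensor.

```python
import numpy as np

# Toy illustration of the noise argument: oversample a 1-D "scene" 4x,
# add Gaussian read noise, then bin back to the target resolution.
rng = np.random.default_rng(0)

oversample = 4
true_signal = np.repeat(np.sin(np.linspace(0, 4 * np.pi, 256)), oversample)
noisy = true_signal + rng.normal(0, 0.5, true_signal.size)

# "Bin" the oversampled pixels back down by averaging within each bin.
binned = noisy.reshape(-1, oversample).mean(axis=1)
reference = true_signal.reshape(-1, oversample).mean(axis=1)

print("noise std before binning:", np.std(noisy - true_signal))   # ~0.5
print("noise std after binning: ", np.std(binned - reference))    # ~0.25, i.e. ~2x better
```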
These considerations all argue for the technical merits of pixel arrays considerably denser than the ones currently available.
Moreover, with sufficient computational power, it is possible to exceed the Abbe limit if you know the lens's
point spread function and can computationally deconvolve. This has been a standard technique in optical microscopy for well over a decade (and in astronomy before that), and it can provide roughly a factor of 2 increase in effective linear resolution. I'd wager that this is already being done in the iPhone 4's (stunningly good for its size) camera – which has a pixel pitch of ~2 µm (500 px/mm)! Don't think for a moment that Apple and Sony don't know what they're doing with that sensor.
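To be clear about what I mean by PSF deconvolution, here is a minimal sketch using a simple Wiener filter on a synthetic image. Microscopy pipelines typically use iterative methods such as Richardson-Lucy, and I'm obviously not claiming this is Apple's or Sony's actual code; the Gaussian PSF, the sigma, and the noise-to-signal value are all assumptions for the demo.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Minimal Wiener-deconvolution sketch: recover detail from a blur when the
# PSF is known. Purely illustrative -- not a claim about any real camera
# pipeline. The Gaussian PSF, sigma, and nsr are assumptions for the demo.

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Deconvolve `blurred` by `psf` in the Fourier domain.
    nsr is an assumed noise-to-signal ratio, acting as regularization."""
    # Embed the PSF in a full-size array and shift its center to the origin
    # so its FFT lines up with the image's FFT.
    padded = np.zeros_like(blurred)
    k = psf.shape[0]
    padded[:k, :k] = psf
    padded = np.roll(padded, (-(k // 2), -(k // 2)), axis=(0, 1))

    H = np.fft.fft2(padded)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + nsr)  # Wiener filter
    return np.real(np.fft.ifft2(F_hat))

# Synthetic scene: a grid of point sources, blurred by a Gaussian "lens".
sigma = 2.0
scene = np.zeros((128, 128))
scene[::16, ::16] = 1.0
blurred = gaussian_filter(scene, sigma)

# Build the matching PSF explicitly -- here we "know" the lens by construction.
ax = np.arange(15) - 7
xx, yy = np.meshgrid(ax, ax)
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
psf /= psf.sum()

restored = wiener_deconvolve(blurred, psf)
print("peak height, blurred: ", blurred.max())   # smeared-out points, low peaks
print("peak height, restored:", restored.max())  # several times taller, i.e. sharper
```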
(Note: a FF DSLR with a 2 µm sensor pitch would be 12,000 x 18,000 = 216 Mpix. And an APS-C camera with that pitch would be roughly 92 Mpix!)
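For anyone who wants to check that arithmetic (the sensor dimensions below are the nominal 36 x 24 mm for full frame and roughly 23.6 x 15.6 mm for APS-C):

```python
# Pixel-count check at a 2 um pitch. Sensor dimensions are nominal:
# 36 x 24 mm for full frame, ~23.6 x 15.6 mm for a typical APS-C sensor.
pitch_mm = 0.002  # 2 um

for name, (w_mm, h_mm) in {"Full frame": (36.0, 24.0),
                           "APS-C": (23.6, 15.6)}.items():
    cols = round(w_mm / pitch_mm)
    rows = round(h_mm / pitch_mm)
    print(f"{name}: {cols} x {rows} = {cols * rows / 1e6:.0f} Mpix")
# -> Full frame: 18000 x 12000 = 216 Mpix
# -> APS-C:      11800 x  7800 =  92 Mpix
```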
Finally, there's a long way to go (with respect to the theoretical limits) with APS-C, in terms of sensitivity, dynamic range, and color space. To take just one example: none of the available APS-C sensors is backside-illuminated (as the iPhone 4's camera is). That change alone can give you about a full stop of real sensitivity with no increase in read noise.
So, no, I don't think that we're done with APS-C or 4/3 or even 35mm sensor development. Not really even close.