Hello,
I can answer, with a good level of certainty, the first and second questions, but I can only hypothesize about the last.
I strongly discourage people from reducing resolution within their cameras unless they have a clear reason to do so. I also suggest they confirm with the camera documentation or technical support exactly how the resolution is reduced.
In brief, as that is all the time I have, there are a few factors involved in sub-sampling an image.
First, the majority of digital cameras today, with a few exceptions, interpolate colour data between adjacent photo-sensitive sites. It is a rough analogy, but they electronically smudge colour from the neighbouring red, green and blue sites (sometimes there are additional colours) to determine the actual colour of each individual site.
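To make that interpolation concrete, here is a toy sketch of the idea in Python, assuming an RGGB Bayer layout. This is only an illustration of the principle, not what any camera actually runs; real demosaicing algorithms are far more sophisticated.

```python
# Toy bilinear demosaic: each site records one colour, and the two
# missing colours are averaged from whichever neighbours recorded them.
# Assumes an RGGB mosaic; purely illustrative, not a camera's pipeline.
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(mosaic):
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))
    # Mark which colour each photo-site actually recorded (RGGB pattern).
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # red sites
    masks[0::2, 1::2, 1] = True   # green sites on red rows
    masks[1::2, 0::2, 1] = True   # green sites on blue rows
    masks[1::2, 1::2, 2] = True   # blue sites
    kernel = np.ones((3, 3))      # average over the 3x3 neighbourhood
    for c in range(3):
        known = np.where(masks[:, :, c], mosaic, 0.0)
        sums = convolve(known, kernel, mode='mirror')
        counts = convolve(masks[:, :, c].astype(float), kernel, mode='mirror')
        filled = sums / np.maximum(counts, 1.0)
        # Keep the measured value where the site recorded this colour.
        rgb[:, :, c] = np.where(masks[:, :, c], mosaic, filled)
    return rgb
```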
Second, the mechanism that stores the image onto the card can play a strong role in the results you obtain. That is the difference between RAW and the various degrees of lossy JPEG compression.
The first thing you need to decide is whether you are reducing the resolution so you can shoot more and save space long-term, or so you can shoot faster. From other things you have said, I'm going to guess it is the former.
Third, the way an image is sub-sampled is important, and I'm confident Sony engineers are good at it, but there are trade-offs depending upon your own personal goals.
If you are not shooting in RAW mode and wish to save space, then you are substantially better off increasing the compression level in-camera to squeeze more onto a card. JPEG compression is detail-oriented: the more it compresses, the more fine detail it throws away.
If that detail is now even finer because you have already thrown resolution away, you will lose more than you bargained for.
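If you want to see the trade-off for yourself, a few lines of Python with the Pillow library will do it; 'photo.jpg' here is just a placeholder for one of your own full-resolution frames.

```python
# Save one full-resolution frame at several JPEG quality settings and
# compare the resulting file sizes (Pillow's quality scale runs 1-95).
import os
from PIL import Image

img = Image.open('photo.jpg')   # placeholder: any full-resolution frame
for quality in (95, 85, 70, 50):
    out = f'photo_q{quality}.jpg'
    img.save(out, 'JPEG', quality=quality)
    print(f'quality={quality}: {os.path.getsize(out) / 1024:.0f} KiB')
```

Pixel-peep the lower-quality copies at 100% and you will see exactly which fine detail the compressor sacrificed first.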
Please note, the in-camera size changes are typically integer-based, so a reduction drops every second, third or fourth row and column. Since the colour information is interpolated from four adjacent photo-sites, one might think the result is more accurate because the colours are all merged into one. However, colour is less than half of what we actually perceive in an image; the balance is luminance, and luminance detail is exactly what simple decimation discards.
To close this off, even though a lot more could be said, you are far better off keeping the images at their full resolution, with higher JPEG compression if necessary. Then, after choosing the images you wish to keep, make your adjustments to them and sub-sample as the very last step; you will have superior colour, fine detail and so forth compared with doing it in-camera.
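As a sketch of that final step, again with Pillow and hypothetical file names: a proper resampling filter such as Lanczos weighs many surrounding pixels, where nearest-neighbour reduction simply drops rows and columns much the way an integer in-camera reduction might.

```python
# Downsample an edited master two ways: nearest-neighbour (drops pixels,
# much like integer decimation) versus Lanczos (weighs many neighbours).
from PIL import Image

img = Image.open('keeper_master.tif')          # hypothetical edited master
new_size = (img.width // 2, img.height // 2)

naive = img.resize(new_size, Image.NEAREST)    # row/column dropping
smooth = img.resize(new_size, Image.LANCZOS)   # high-quality filter

naive.save('keeper_naive.jpg', 'JPEG', quality=85)
smooth.save('keeper_lanczos.jpg', 'JPEG', quality=85)
```

Compare the two outputs on anything with fine, regular texture and the difference in retained detail and moiré is obvious.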
As well, even though most of your work may be 4x6 to 8x10, capturing an amazing image that you can only print at the sizes a 2004-era digital camera allowed seems like setting a low bar.
Indeed, you are entertaining the idea of throwing away visible, valuable photographic information, potentially increasing moiré issues, and so on.
Over the years, I have run tests around these kinds of considerations, for faster wire transfers and the like. The comments I've made are based upon that, and truly, if you wish to store smaller files, you are far better off letting software decide where it can squeeze the image by discarding information we don't see.
My perspective is that storage is ultra-cheap these days. I picked up a fast 32-gig card recently for well under $1 per GB. That is a small price compared to what a roll of film used to cost to shoot and process... a very small price.
On to the other questions. Multi-point AF is, as someone else noted, usually contrast-based. A sharp image has the most localized contrast, also known as detail contrast (which, by the way, you are proposing to reduce).
Old AF systems forced the shooter to focus in the centre, then re-compose and shoot. Now you can let the camera guess the best location, and most of the time it does a reasonable job; or you can tell it to focus on one of the spots, centre or not, then compose, wait for the subject to do something interesting if necessary, and finally shoot when ready.
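For the curious, here is the kernel of the contrast-detect idea in a few lines of Python; the focus_measure function and the sweep are my own toy formulation, not Sony's algorithm.

```python
# Contrast-detect AF in miniature: sweep focus, score each frame's local
# contrast inside the AF area, and keep the focus position that peaks.
import numpy as np

def focus_measure(gray):
    """Variance of a discrete Laplacian: higher = more detail contrast."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return lap.var()

# Conceptually, the camera then does something like:
#   best = max(focus_positions, key=lambda p: focus_measure(frame_at(p)))
# where frame_at() reads the sensor at lens position p (hypothetical).
```

With multiple AF points, the camera simply evaluates several such areas and picks where the measure is strongest, which is why it tends to lock onto the most detailed, contrasty part of the scene.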
Dynamic-range optimization, and here I can only hypothesize about Sony's application, is likely aimed at JPEG images. The camera captures a potential of several thousand shades of tonality at each site, while a JPEG typically supports only 256 shades per channel. So the camera can readjust how it converts from 14 bits down to 8.
There are a lot more things that can go on with specialized shadow-masking techniques and so on, and I cannot speak to Sony's application of those types of calculations in-camera.
Regardless of the actual implementation, the concept is that you have a bucket to carry home some fruit. Do you fill the bucket with berries, or do you include some of the leaves, stems and other inedible or undesirable elements? Odd analogy, but I mean to say that when you only have 256 shades of grey per colour, it is better to make as many of them count as possible.
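As a hypothetical sketch of that bucket-filling, here is a simple gamma curve in Python that squeezes 14-bit data (0 to 16383) into 8 bits while spending more of the 256 output levels on the shadows; real DRO curves are adaptive and considerably more elaborate.

```python
# Map 14-bit sensor values into the 256 levels a JPEG channel can hold.
# A straight linear cut would collapse ~64 raw values into each JPEG
# level; a gamma curve gives the shadows far more output levels.
import numpy as np

def tone_map(raw14, gamma=2.2):
    linear = raw14.astype(np.float64) / 16383.0        # normalise 14-bit
    return np.round(255.0 * linear ** (1.0 / gamma)).astype(np.uint8)

raw = np.array([0, 64, 512, 4096, 16383])
print(tone_map(raw))   # shadows land on many distinct levels, not one
```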
I should note that I've been in this digital gig for 24 years this month. This totally predates the present digi-cam era, as I started in the newspaper and graphic-arts industry, pre-press and such.
So, my RX100 is in the mail. This will be my first digicam with somewhat advanced features and I'm utterly confused. Hope some of the gurus can shed some light here:
- Can I shoot at 10 MP without affecting any other IQ parameters besides the amount of detail recorded? I'm assuming the loss of detail will be insignificant for my use, i.e. enlargements under 8x10 and little cropping.
- How do multiple AF points work? How does the camera know what to pick as the subject? This truly boggles my mind as someone who never tried anything more advanced than single-point AF from the '90s. Of course, I understand that I can set the RX100 to single-point focus or even MF, but I just don't understand what the multiple AF points do and when it would be useful to engage them.
- Why is there a need for a dynamic range optimization mode? My understanding is that the RAW files offer a lot of elasticity and can be used to recover lost details in the highlights and the shadows. Is this just a form of automation, like Auto Exposure? Or does it actually expand the sensitivity of the sensor? If the latter, shouldn't it be on all the time?