Here are a couple of other notions:
- It's at least as important to scan at maximum resolution as at maximum colour depth. (We've had the discussion here about why this is the case; in a nutshell, if you scale the image down later, every factor-of-two downscale in each dimension gains you an extra bit of effective colour depth, because combining four pixels into one is effectively multisampling. There's a small sketch of this effect after the list.)
- Saving full-resolution 16-bit TIFFs takes enormous amounts of space, which makes backup a challenge in the long run. (A 4000 dpi scan of a 35 mm frame, for example, is roughly 5700 x 3800 pixels, or about 130 MB per frame as uncompressed 16-bit RGB.) On top of that, at least a quarter of that data is random noise, because the lowest bits of each sample lie below the scanner sensor's real precision and carry no useful information.
- The lossiness disadvantage is overrated, in my opinion. It's true that each re-opening and re-saving of a JPEG incurs a slight additional loss, but how often do you actually re-open and re-save a JPEG in a typical imaging workflow - two, three, four times? And what do you do in between - rescaling, for example? That's why a blanket statement like "you lose 1% of image quality" is misleading. If you choose your JPEG quality setting high enough, the compression losses won't become visible until you've re-opened and re-saved the file hundreds of times, which simply doesn't happen in real-world workflows. (One way to measure this generational loss yourself is sketched below.)
- The main problem with JPEG is not that it's lossy, but that it's 8-bit only. JPEG 2000 has a 16-bit mode, and that alone already makes it preferable to TIFF IMHO, because the compression is much better and the slight lossiness is not a problem, as outlined above. (Writing a 16-bit JPEG 2000 file is sketched below as well.)
- In the long run, what would be desirable is a file format that supports arbitrary colour depths, or even just 12 or 14 bits per channel. I have toyed with the idea of using OpenEXR, which stores colour as floating point and offers a wavelet-based compression mode (PIZ), but software support is still an issue: you'd have to scan into a TIFF first and then recompress, roughly as in the last sketch below.
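
To make the multisampling point from the first bullet concrete, here is a minimal NumPy sketch. The gradient-plus-noise "scan" is entirely synthetic; the point is only that averaging 2x2 blocks of 8-bit samples produces quarter-step values and a lower noise floor, i.e. extra effective depth:

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 512, 512

# Synthetic stand-in for a scan: a smooth gradient plus about one bit of
# sensor noise, quantised to 8 bits. (The signal/noise model is made up.)
signal = np.tile(np.linspace(0.0, 255.0, w), (h, 1))
scan_8bit = np.clip(np.round(signal + rng.normal(0.0, 1.0, (h, w))), 0, 255)

# Downscale by a factor of two in each dimension by averaging 2x2 blocks,
# keeping the result as float so the extra precision is not thrown away.
blocks = scan_8bit.reshape(h // 2, 2, w // 2, 2)
downscaled = blocks.mean(axis=(1, 3))

# The averaged values land on quarter steps (0.0, 0.25, 0.5, 0.75), i.e. a
# finer grid than the original 8-bit one, and the averaging also beats down
# the noise - that is the "extra effective bit" from multisampling.
print("fractional parts after averaging:", np.unique(downscaled % 1.0))

signal_small = signal.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
rms_orig = np.sqrt(np.mean((scan_8bit - signal) ** 2))
rms_down = np.sqrt(np.mean((downscaled - signal_small) ** 2))
print(f"RMS error vs. clean signal: original {rms_orig:.2f}, downscaled {rms_down:.2f}")
```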
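And here is a rough way to check the generational-loss argument yourself with Pillow and NumPy; the quality setting, file name and test image are just placeholders:

```python
import numpy as np
from PIL import Image

# Synthetic stand-in for an image: two gradients and a flat channel.
x = np.linspace(0, 255, 256)
r = np.tile(x, (256, 1))            # horizontal gradient
g = r.T                             # vertical gradient
b = np.full((256, 256), 128.0)      # flat blue channel
original = np.stack([r, g, b], axis=-1).astype(np.uint8)

QUALITY = 92        # a "high enough" quality setting (placeholder value)
GENERATIONS = 50    # how many re-open/re-save cycles to simulate

img = Image.fromarray(original, mode="RGB")
for generation in range(1, GENERATIONS + 1):
    img.save("generation.jpg", quality=QUALITY)   # placeholder filename
    img = Image.open("generation.jpg")
    img.load()
    if generation in (1, 5, 10, 50):
        diff = np.asarray(img).astype(np.int16) - original.astype(np.int16)
        rms = np.sqrt(np.mean(diff.astype(np.float64) ** 2))
        print(f"after {generation:2d} saves: RMS error {rms:.2f} of 255")
```

Most of the error appears with the very first save; the later generations add comparatively little as long as nothing else is changed in between.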
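Writing a 16-bit JPEG 2000 file can be as simple as the following sketch, assuming your OpenCV build was compiled with JPEG 2000 (OpenJPEG) support and that the scan really is 16-bit; the file names are placeholders:

```python
import cv2

# "scan.tif" is a placeholder for the 16-bit TIFF produced by the scanner.
scan = cv2.imread("scan.tif", cv2.IMREAD_UNCHANGED)
assert scan is not None and scan.dtype.name == "uint16", "expected a 16-bit scan"

# Write it back out as JPEG 2000, which keeps the full 16 bits per channel.
# (Newer OpenCV builds also expose IMWRITE_JPEG2000_COMPRESSION_X1000 if you
# want to control the rate; the default settings are used here.)
ok = cv2.imwrite("scan.jp2", scan)
print("wrote scan.jp2:", ok)
```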
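And the scan-to-TIFF-then-recompress route to OpenEXR might look like this sketch, again assuming an OpenCV build with OpenEXR support enabled; file names are placeholders:

```python
import os
# Some OpenCV builds only enable their OpenEXR codec when this variable is
# set before cv2 is imported (an assumption about your particular build).
os.environ.setdefault("OPENCV_IO_ENABLE_OPENEXR", "1")

import cv2
import numpy as np

# "scan.tif" is again a placeholder for the scanner's 16-bit TIFF.
scan = cv2.imread("scan.tif", cv2.IMREAD_UNCHANGED)
assert scan is not None and scan.dtype == np.uint16, "expected a 16-bit scan"

# OpenEXR stores floating point, so rescale the 16-bit integers to 0.0..1.0.
scan_float = scan.astype(np.float32) / 65535.0

# Which OpenEXR compression the writer uses (PIZ, ZIP, ...) depends on the
# OpenCV/OpenEXR build; the point is simply TIFF in, EXR out.
ok = cv2.imwrite("scan.exr", scan_float)
print("wrote scan.exr:", ok)
```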
Philipp