filmscanner noise - 9 bits out of 16?

vicmortelmans


anyone around that wants to share some mathematics?

I'm trying to get a grip on my film scanner's noise. I think it's eating the first 9 bits of my 16-bit scanner output.

This is the test I performed:

Scan an opaque object (cardboard, in my case). This way, the result contains only noise! I got the raw scanner output, which is 16-bit grayscale (yeah, it's black and white I'm into).

I wrote a small C program that reads the bitmap and calculates the difference between each pixel and the next one. It calculates the average difference and the standard deviation of the difference. It also calculates the average pixel value (1).

The outcome on my test image was:

- average difference between each pixel and the next: 200
- deviation of the difference: 160
- average pixel value: 260

What I know about statistics is that a population fits underneath a Gaussian curve: about 95% of it falls within the boundaries [average - 2*deviation, average + 2*deviation]. Or, put differently: about 97.7% falls below average + 2*deviation.

So I conclude that the maximum difference between two pixels caused by noise is the average difference + 2*deviation. That's 200 + 2*160 = 520.

Looking at the Gaussian curve of the noise itself, this means that the 95% boundaries (a span of 4 deviations) cover the maximum difference between two pixels. Thus, the deviation of the noise is 520/4, or 130.

This gives a range of noise values [average pixel value - 2*deviation, average pixel value + 2*deviation], i.e. [260 - 2*130, 260 + 2*130] = [0, 520]. (2)

Thus, noise values go up as high as 520, which is just over 2^9 = 512, i.e. about 9 bits.

My conclusion: out of the 16 bits of available image space, 9 bits are polluted by noise. When scanning an image, its actual scanned values should not be below this boundary (note: these are typically the highlights, i.e. the densest areas on a negative!). If the highlights are below this boundary, noise will be prominently visible in the processed image.

I really have to limit the density of my negatives drastically to keep the noise from causing trouble!

That leaves only the 7 remaining bits that I can use for 'clean' image information. That's equal to a negative density range of somewhat more than 2 (3).
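Spelling out footnote (3): 7 clean bits correspond to a brightness ratio of 2^7 = 128, and density is the log10 of that ratio (`clean_bits_to_density` is a made-up name):

```c
#include <math.h>

/* Convert a number of clean bits into a negative density range:
 * n bits span a ratio of 2^n, and density is log10 of that ratio,
 * so density = n * log10(2). */
double clean_bits_to_density(double bits)
{
    return bits * log10(2.0); /* 7 bits -> ~2.11 density units */
}
```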

Note that the effect of scanner noise is much smaller in the shadow areas of the image; that's because the scanner noise is linear, but the perceived brightness is a logarithmic function of the pixel values...

Groeten,
Vic

(1) it might seem easier to just calculate the average pixel value and the deviation of the pixel values, but I have the impression that the overall image doesn't have uniform brightness... probably some light pollution from outside the scanner or within it. That's why I don't trust the deviation of the pixel values to really be a deviation of noise values.

(2) no idea why the lower boundary happens to be 0, I guess it's just coincidence

(3) negative densities are in a log10 scale
 
Leaving the math aside: are you doing a single or multiple pass scan on your B&W negs?
Multi-pass with B&W (non chromogenic) tends to get really nasty.

Peter
 
There's a dude here, I think Buze or booze or something like that who is actually a dsp engineer -- he had some things to say about flatbed scanning, in that you really want to scan at 8bit, then upsample to 16bit in PS -- this apparently reduces the noise. I wonder about detail and color etc etc etc, never had a chance to ask him.. also whether or not this idea works with dedicated scanners like the minolta or nikon ones.

*shrug*

Jano
 
peterc said:
Leaving the math aside: are you doing a single or multiple pass scan on your B&W negs?
Multi-pass with B&W (non chromogenic) tends to get really nasty.

Peter

single pass. I set the exposure manually, such that the film base (or deep shadows) is mapped to the maximum pixel value (= 2^16 - 1). That's the best way to get the maximum amount of information out of the negative.

I'm using Vuescan and tried things like multiple passes, or long exposure pass, but didn't find any quality gain in that. The only way I seem to gain some quality is by scanning at maximum dpi (3600) and reducing the image back to 1200 dpi (which I normally scan at).

By the way, it's a real filmscanner, not a flatbed scanner.

Groeten,

Vic
 
jano said:
There's a dude here, I think Buze or booze or something like that who is actually a dsp engineer -- he had some things to say about flatbed scanning, in that you really want to scan at 8bit, then upsample to 16bit in PS -- this apparently reduces the noise. I wonder about detail and color etc etc etc, never had a chance to ask him.. also whether or not this idea works with dedicated scanners like the minolta or nikon ones.

*shrug*

Jano

That's me 😀
What I say is that you should scan in 8 bits, /but/ at a much higher resolution if you can. I.e. scan at 4800 dpi for a 2400 dpi target...
Then open the file in PS, convert to 16 bits, apply curves and such, downscale the image and, right at the end, convert back to 8 bits.

Besides, I suggested scanning in 8 bits / 4800 dpi to be able to use JPEG as output and save gigabytes of disk space for no significant image loss... In fact, you will have a lot less noise averaging four 8-bit samples than trusting just one 16-bit one...
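The oversampling idea can be sketched as a 2x2 box reduction, assuming a scan at twice the target dpi (my illustration, not Buze's actual code; `downscale_2x2_sum` is a made-up name):

```c
#include <stddef.h>

/* Downscale an 8-bit image by 2x in each direction by summing each
 * 2x2 block. Storing the sum (0..1020, i.e. 10 bits) instead of the
 * rounded 8-bit average keeps the two extra bits of precision that
 * combining four independent samples buys. */
void downscale_2x2_sum(const unsigned char *src, size_t w, size_t h,
                       unsigned short *dst) /* (w/2) x (h/2) output */
{
    for (size_t y = 0; y + 1 < h; y += 2)
        for (size_t x = 0; x + 1 < w; x += 2) {
            unsigned sum = src[y * w + x]       + src[y * w + x + 1]
                         + src[(y + 1) * w + x] + src[(y + 1) * w + x + 1];
            dst[(y / 2) * (w / 2) + (x / 2)] = (unsigned short)sum;
        }
}
```

Averaging four samples halves the noise standard deviation, which is where the claimed advantage over a single 16-bit sample comes from.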

Ideally, the scanners' antiquated software could also save in JPEG2000 and output properly compressed modern files, in 16 bits and even lossless!
 
Hi Buze,

that's interesting, could you elaborate? What if my sensor is capable of reliably imaging at more than 8 bits? Theoretically speaking I would argue that you gain the equivalent of two bits over an 8-bit scan at resolution n by scanning 8-bit at resolution 2*n (provided that 2*n is lower than my scanner's physical resolution - or are you suggesting to let the scanner interpolate?). That looks kind of obvious. But if my sensor does 10 bits reliably I'm still throwing away two bits of information by doing an 8-bit scan. So in a 16-bit scan at 2*n I could get even more information, and I can throw away the noise bits by downsampling at the end of the process. The problem is that there is no format that allows me to work with 12-bit images, except scanner RAW files. Or am I somehow mistaken here?

Philipp
 
As I said, /ideally/ you would scan at 4800 dpi 16 bits all the time, but that's impractical... So what I'm saying is that it is a much better deal to scan at 4800 dpi 8 bits (you can set the Epson scanner to scan at 16-bit grayscale with JPEG output; it will decimate the signal as it writes the file, but will still capture the 16 (well, 12) bits).

I only mention using 8 bits to have /small files/. Of course, if I could, I would scan at 4800 dpi 16 bits all the time. What I am saying is that a 4800 dpi 8-bit file is better than a 2400 dpi 16-bit file.

As you point out, the amplitude of the noise can be very large (and is extremely visible) whether you use 8 or 12 bits; and oversampling solves part of that problem, exactly like "multipass scanning" does.
 
I've also been thinking (and I must experiment) that "black and white" scanning is just the luminance of the R/G/B channels, which themselves have totally different signal response curves; in fact I can already imagine that the red channel has more noise than the other two, and that, today, the scanner software incorporates that noise into your final sample.

So given that your image is monochrome to start with, you could already build a fantastic noise reduction system by scanning RGB and either throwing away the noisier channels, or, even better, /averaging/ the 3 channels to get the final sample; you would already have a "3-pass multipass" sampling of the same destination monochrome pixel.
Even better, you could compute an accurate signal/noise curve per channel per image just by comparing the 3 channels and deducing their noise levels. It's trivial to detect and remove the mask background color and recover the "clean" B&W.
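A minimal sketch of the three-channel averaging (my illustration; `average_rgb` is a made-up name):

```c
/* Average the three 16-bit channel samples of one monochrome pixel.
 * Three independent samples of the same pixel reduce the noise
 * standard deviation by a factor of sqrt(3), like a 3-pass scan. */
unsigned short average_rgb(unsigned short r, unsigned short g,
                           unsigned short b)
{
    /* promote to unsigned so the sum cannot overflow unsigned short */
    return (unsigned short)(((unsigned)r + g + b) / 3);
}
```

A refinement would be to weight the channels by their measured per-channel noise instead of averaging them equally.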

FYI, I've started to write my own scanning application to apply "modern" signal processing and algorithms to scanning; I find the existing scanner software (all the ones I tried) to be nothing short of dreadful.
See some preview tests there : http://oomz.net/mf/viewtopic.php?id=2398
 
Not bad at all! Please make some of that available to the rest of us still stuck with VueScan 🙂

I guess you can get the mask out by subtracting channels from each other and averaging out the noise. Is that what you have in mind? Better than the scanner software approach of always subtracting a fixed value, anyway.

About the 3-pass sampling idea on colour scans of B/W materials: I guess everybody can try this in Photoshop as well by scanning colour instead of B/W and then weighting the channels appropriately. I'll try that one of these days.

Philipp
 
vicmortelmans said:
I'm using Vuescan
I tried Vuescan with my film scanner (Minolta F-2900) and found the results to be generally of much poorer quality than the ones I get from Minolta's DS Elite Utility.

Peter
 
Buze said:
FYI, I've started to write my own scanning application to apply "modern" signal processing and algorithms to scanning; I find the existing scanner software (all the ones I tried) to be nothing short of dreadful.
See some preview tests there : http://oomz.net/mf/viewtopic.php?id=2398

Hey, that's nice, me too!

My current effort is into postprocessing of raw b&w negative scans.

The basic idea is to convert the raw pixel values into log2 space, so they become equivalent to what we know as 'stops' and are linear in eye-perceived brightness.

Then a linear conversion (on log data!) is applied to get a positive image and to adapt the dynamic range.

q = A - B * p (p = input value; q = output value)

The processing has different 'modes' to calculate A and B. This can be a 'contact print mode' (all pictures from the same roll are processed with the same A and B) or a 'max contrast mode' (for each picture, min and max values are found and these are mapped to the min and max of the output dynamic range).
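The 'max contrast mode' coefficients can be written down directly (a sketch with made-up names; all values are in log2 space):

```c
/* Find A and B for q = A - B * p so that the input range
 * [p_min, p_max] is mapped, inverted, onto the output range
 * [q_min, q_max]: the densest input becomes the darkest output,
 * which is what turns a negative into a positive. */
void max_contrast_coeffs(double p_min, double p_max,
                         double q_min, double q_max,
                         double *A, double *B)
{
    *B = (q_max - q_min) / (p_max - p_min);
    *A = q_max + *B * p_min; /* so q(p_min) = q_max and q(p_max) = q_min */
}
```

For 'contact print mode', p_min and p_max would instead be taken over the whole roll, so one (A, B) pair serves every frame.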

Then, for each post-processed image, I make two variants: one where highlight contrast is improved and one where shadow contrast is improved, using a sort of gamma-correction LUT (again: on log data!).

Then I convert from log back to linear and perform another linear conversion, to stretch the darkest shadows to 'real' black.


This way, the postprocessing produces for each picture a set of possible results. The only 'manual' step is now to select the best and delete the rest.


I guess I'll be posting results on this forum as things consolidate... hope you'll do the same!

Groeten,

Vic
 
Neat! Here I try to do color / B&W negatives, and I'm not trying to make "user" images; I just want to extract as much "clean" information from the scan as I can and exploit the 16-bit range as much as I can; then feed the (probably low-contrast) result to Photoshop and let the user decide on the artistic side 😀
 