vicmortelmans
filmscanner noise
Anyone around who wants to share some mathematics?
I'm trying to get a grip on my film scanner's noise level. I think it's eating the first 9 bits out of my 16-bit scanner output.
This is the test I performed:
Scan an opaque object (a piece of cardboard in my case). This way, the result contains only noise! I got the raw scanner output, which is 16-bit grayscale (yeah, it's black and white I'm into).
I wrote a small C program that reads the bitmap and calculates the difference between each pixel and the next one. It calculates the average difference and the deviation of the difference. It also calculates the average pixel value (1).
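For completeness, here is a minimal sketch of what such a program can look like. It assumes the scan has been exported as a headerless raw dump of 16-bit little-endian grayscale samples; the file name "scan.raw" is a placeholder, and a real bitmap would of course need its header parsed first.

```c
#include <stdio.h>
#include <math.h>

/* Minimal sketch: read headerless 16-bit little-endian grayscale
 * samples and print the three statistics used above. The pixels are
 * treated as one long sequence, so differences are taken between
 * consecutive samples in file order. */
int main(void)
{
    FILE *f = fopen("scan.raw", "rb");   /* placeholder file name */
    if (!f) { perror("scan.raw"); return 1; }

    double sum_px = 0.0, sum_d = 0.0, sum_d2 = 0.0, prev = -1.0;
    long n_px = 0, n_d = 0;
    unsigned char b[2];

    while (fread(b, 1, 2, f) == 2) {
        double px = b[0] | (b[1] << 8);  /* assemble 16-bit sample */
        sum_px += px;
        n_px++;
        if (prev >= 0.0) {               /* skip the very first pixel */
            double d = fabs(px - prev);  /* difference to previous pixel */
            sum_d += d;
            sum_d2 += d * d;
            n_d++;
        }
        prev = px;
    }
    fclose(f);

    if (n_d == 0) { fprintf(stderr, "not enough data\n"); return 1; }

    double mean_d = sum_d / n_d;
    double sdev_d = sqrt(sum_d2 / n_d - mean_d * mean_d);
    printf("average difference:      %.1f\n", mean_d);
    printf("deviation of difference: %.1f\n", sdev_d);
    printf("average pixel value:     %.1f\n", sum_px / n_px);
    return 0;
}
```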
The outcome on my test image (raw 16-bit values, i.e. on a 0-65535 scale) was:
- average difference between each pixel and the next: 200
- deviation of the difference: 160
- average pixel value: 260
What I know from statistics is that a population fits under a Gaussian curve: about 95% of it lies between the boundaries [average - 2*deviation, average + 2*deviation]. Or put differently: about 97.5% of it lies in (-inf, average + 2*deviation].
So I conclude that the maximum difference between two pixels caused by noise is average difference + 2*deviation. That's 200 + 2*160 = 520.
Looking at the Gaussian curve of the noise itself, this means that the 95% boundaries (a span of 4 deviations) cover the maximum difference between two pixels. Thus, the deviation of the noise is 520/4, or 130.
This gives a range of noise values [average pixel value - 2*deviation, average pixel value + 2*deviation] = [260 - 260, 260 + 260] = [0, 520]. (2)
Thus, noise values go up as high as 520, which is just over 2^9 = 512, i.e. about 9 bits.
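Putting the arithmetic above in one place (same numbers, same Gaussian assumption; d-bar is the average difference, sigma_d its deviation, p-bar the average pixel value):

```latex
\begin{align*}
d_{\max} &\approx \bar{d} + 2\sigma_d = 200 + 2 \cdot 160 = 520 \\
\sigma_{\text{noise}} &\approx d_{\max} / 4 = 520 / 4 = 130 \\
[\bar{p} - 2\sigma_{\text{noise}},\; \bar{p} + 2\sigma_{\text{noise}}] &= [260 - 260,\; 260 + 260] = [0,\; 520] \\
\log_2 520 &\approx 9.02 \;\Rightarrow\; \text{about 9 bits of noise}
\end{align*}
```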
My conclusion: out of the 16 bits of available image space, the bottom 9 bits are polluted by noise. When scanning an image, its actual scanned values should not fall below this boundary (note: these are typically the highlights, i.e. the densest areas on a negative!). If the highlights are below this boundary, noise will be prominently visible in the processed image.
I really have to limit the density of my negatives drastically to keep the noise from causing trouble!
That leaves only 7 bits that I can use for 'clean' image information. That's equal to a negative density range of somewhat more than 2 (3).
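The density figure is just the 7 remaining bits converted to a log10 scale (see footnote (3)):

```latex
D = \log_{10} 2^{7} = 7 \log_{10} 2 \approx 7 \cdot 0.301 \approx 2.1
```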
Note that the effect of scanner noise is much smaller in the shadow areas of the image; that's because the scanner noise is linear, but the perceived brightness is a logarithmic function of the pixel values...
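To make that concrete with the numbers above: the same noise deviation of 130 is huge relative to a pixel value near the noise floor, but negligible near full scale (52000 is just an arbitrary bright example value):

```latex
\frac{130}{260} = 50\% \approx 0.6 \text{ stops}, \qquad
\frac{130}{52000} = 0.25\% \approx 0.004 \text{ stops}
```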
Greetings,
Vic
(1) it might seem easier to just calculate the average pixel value and its deviation directly, but I have the impression that the overall image doesn't have uniform brightness... probably some light pollution from outside the scanner or within the scanner. That's why I don't trust the deviation of the pixel values to really be a deviation of noise values.
(2) no idea why the lower boundary happens to be exactly 0; I guess it's just coincidence
(3) negative densities are on a log10 scale