I've got a mediocre 8x10 print from an enlarger on my wall right in front of me, hanging next to an 8x10 inkjet print of the same shot from a 2400 dpi scan. It's not the discrete data points themselves that make the difference, it's the overall look of the image, and the mediocre wet print is far superior to the scan-plus-inkjet. That includes easily visible differences in the level of detail at normal viewing distances for an 8x10. It's not an issue of pixelization or resolution; it's an issue of the image being parcelled into discrete data points, which inherently loses detail and destroys what we might call the "integrity of the image."
I know people will argue that the issue is my electronic equipment or my skills, but that's just making excuses. My photographic and wet printing skills and equipment are modest, yet the wet print is just so much better for far less cost and effort.
I made them both. If anyone wants to argue one needs $20K in equipment to produce an inkjet print that comes anywhere close to a wet print made with a used $100 enlarger with a used $15 Nikkor lens, save your breath and just say wet printing technology far exceeds digital scanning and printing technology. Because that's what you'd really be saying.
Verdict: Scanning and digital printing technology has a long way to go to beat a cheap used enlarger.
FWIW, I scan all my negatives. I wet print the ones I like. I use the scans as a contact sheet and to share the images electronically. I prefer the look of scanned film to digital photos, but I won't suggest the scan is anything but a crapped-out approximation of the "real" shot. A scan is not much different from taking Polaroids of my wet prints, and nobody would argue the Polaroids are the actual and real test of the original camera, film, and enlarger.
For example, the scanner can only read each pixel as white, black, or a shade somewhere in between. It cannot read a pixel as a cross, parallel lines, speckles, curves, or gradients. So even though one might try to break things down to line-pairs vs. pixel count, it's not line-pairs that make up an image; it's the direction, intersection, curvature, and thickness of edges, and the gradient and density of regions, that make up a B&W film frame. You can overlay a grid and attempt to reproduce the image by taking the average of each cell, but that throws away the actual detail within each cell. Obviously, averaging the density across a cell flattens any gradient within it, to say nothing of what happens to detail when edges cut across cells. You can shrink the cells to get a more accurate reproduction, but at some point the entire effort surpasses the cost and complexity of simply printing the image with known technology. At what point does one admit that the entire effort is re-inventing the wheel with squares?
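To make the grid-averaging point concrete, here's a minimal sketch in Python (numpy assumed; the array sizes and the "film detail" patterns are hypothetical, chosen only to illustrate the argument): lines finer than a cell average out to a featureless mid-gray, and an edge crossing a cell comes back as an in-between value rather than an edge.

```python
# A minimal sketch of the grid-averaging argument above (numpy assumed;
# sizes and patterns are hypothetical, for illustration only).
import numpy as np

def box_average(image, cell):
    """Downsample by averaging density over each cell x cell block,
    i.e. roughly what one scanner sensor element reports."""
    h, w = image.shape
    h, w = h - h % cell, w - w % cell          # trim to a whole number of cells
    blocks = image[:h, :w].reshape(h // cell, cell, w // cell, cell)
    return blocks.mean(axis=(1, 3))

# "Film detail": alternating 1-unit-wide white and black vertical lines,
# finer than the sampling cell.
detail = np.zeros((8, 8))
detail[:, ::2] = 1.0

print(box_average(detail, cell=2))
# Every cell averages one white line and one black line to 0.5:
# the line structure vanishes into a uniform mid-gray.

# A diagonal edge cutting across cells fares no better:
edge = np.tril(np.ones((8, 8)))                # white below the diagonal
print(box_average(edge, cell=2))
# Cells straddling the edge come out as intermediate grays,
# so the crisp edge is smeared across a cell-wide band.
```

Shrinking the cell pushes the problem down in scale but never removes it: any structure at or below the cell size still gets averaged away, which is exactly the trade-off described above.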