Reducing DOF at the sensor level?

Bruin

My brain can't think of a way to do this, but I'm gonna throw it out anyways...

Would it be possible to reduce DOF at the sensor level or in some sort of postprocessing? I'm pretty sure if the lens is projecting a thin DOF there's no way you can widen it, but what about artificially narrowing it and inducing bokeh/OOF effects?

I was thinking about the Nikon D3 sensor and how future generations will continue to push the limits of "acceptable" high ISOs. One might argue that this reduces the need for fast lenses, but what about the thin DOF that fast lenses can provide?

If this were possible, say, during RAW conversion, you could choose a precise focal point, shrink the DOF as much as you like, overcome focus-shift problems in your lenses (just stop down for the shot and adjust later), or add bokeh to your liking (maybe even of a particular style: round with soft transitions, etc.). Shoot at the aperture that gives you a comfortable working DOF and play with it later. You could stay within the aperture "sweet spot" of your lens all the time.

If this is possible, I guess it brings up philosophical issues of editing your photos to have the bokeh, DOF, and focal point of your choosing. But aren't all of those basically limitations imposed on us thus far by current technology? If I edit my Summicron pic so that everyone thinks it was a Noctilux, would anyone get upset? What difference should it make if I got the end result with my lens or with my sensor/software?
 
You can easily do it in Photoshop, but on the sensor it's not so easy, though probably not impossible.

If you're thinking about merely blurring the parts of the image that are not in your simulated plane of focus, that may provide some imitation of selective focus, but it's more likely to resemble off-axis coma and spherical aberration than true depth-of-field blur, which depends on distance from the plane of focus.

However, accurately simulating the optics of depth of field in software (either in-camera or in post-processing) would require a variable amount of blur depending on how far each spot in the image is from the chosen plane of best focus. The problem is that a single sensor doesn't record distance information for each pixel. It would take a camera system that measures a multitude of distance points throughout the image field and then blurs the resulting image according to how far each part of the image is from the plane of best focus.
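To make that concrete, here's a minimal sketch of just the blurring step, assuming you somehow already had a per-pixel distance map to go with the image (the arrays and the function name are hypothetical):

```python
# Minimal sketch: blur each pixel in proportion to its distance from a
# chosen plane of focus.  Assumes a per-pixel distance map already
# exists -- which is exactly the information a single sensor lacks.
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_dof(image, depth_m, focus_m, max_sigma=8.0, n_layers=8):
    """image: HxW float array; depth_m: HxW distances in metres;
    focus_m: chosen focus distance in metres."""
    # Normalised "how far out of focus" value for every pixel.
    defocus = np.abs(depth_m - focus_m) / focus_m
    defocus = np.clip(defocus / max(defocus.max(), 1e-9), 0.0, 1.0)

    # Pre-blur the frame at a few discrete strengths, then pick the
    # appropriate layer per pixel -- crude, but it shows the principle.
    layers = [image] + [gaussian_filter(image, sigma=max_sigma * k / (n_layers - 1))
                        for k in range(1, n_layers)]
    idx = np.round(defocus * (n_layers - 1)).astype(int)
    rows, cols = np.indices(image.shape)
    return np.stack(layers)[idx, rows, cols]
```

Picking from a few pre-blurred layers instead of blurring every pixel individually is only to keep the sketch short; how convincing the fake bokeh looks would depend entirely on doing that step properly.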

Perhaps a dual-sensor camera, with software that treats the two frames as a stereoscopic pair, could derive distance information from the image shift between them, calculate a distance for each part of the image, and from that compute the depth-of-field blur.
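As a toy illustration of how that image shift could be measured, each small patch of one frame can be searched for along the same row of the other frame; a real system would be far more sophisticated, and the geometry here is idealised:

```python
# Toy block-matching disparity search (sum of absolute differences) on a
# rectified stereo pair.  Larger disparity = closer subject.
import numpy as np

def disparity_sad(left, right, block=7, max_disp=64):
    """left, right: HxW grayscale float arrays from two side-by-side sensors."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = np.argmin(costs)
    return disp
```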

However, strictly in post-processing, with no distance information for each pixel, this can't be simulated. Sure, you can blur the image in post-processing, but not in a way that's directly related to how far each pixel was from the plane of focus.

Remember how depth of field works: the closer a subject is to the plane of focus, the smaller the blur; the farther away, the greater the blur, with the amount of blur depending on the lens's focal length and focal ratio.
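To put rough numbers on that relationship, the standard thin-lens approximation for the blur-circle diameter shows both dependencies; the example figures below are made up:

```python
# Rough thin-lens estimate of the blur-circle (circle of confusion)
# diameter on the sensor.  All distances in metres.
def blur_circle(focal_len, f_number, focus_dist, subject_dist):
    aperture = focal_len / f_number            # physical aperture diameter
    return (aperture * focal_len * abs(subject_dist - focus_dist)
            / (subject_dist * (focus_dist - focal_len)))

# 50 mm lens at f/1.4, focused at 2 m: a point 3 m away blurs to roughly
print(blur_circle(0.050, 1.4, 2.0, 3.0) * 1000)   # about 0.3 mm on the sensor
```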

Out-of-focus blur is also affected by the spherical aberration and coma of the lens, which is why different lenses have different qualities of 'bokeh'. On top of everything above, you would need a way to choose which lens you wish to simulate.

Sounds like the processing power needed to perform these calculations in-camera would severely tax the current state of the art, given that you also need a speedy shutter response and good battery life.

~Joe
 
Wouldn't an AF system deliver this information? If focus is measured using the contrast of the image, the lens could provide information about the actual focus setting.
 
How many points across the image does an autofocus system measure? Not enough to provide the data needed to accurately measure the distance of each part of a complex image.

Think of a close-up image of, say, the needle-like blades of a yucca or aloe plant. Does an autofocus system provide enough data to simultaneously measure the distance to each blade of the plant, and its orientation toward or away from the camera? No. But that's what would be required in order to know how far every part of the image is from the camera.

And relying on image edge blur for distance information is unreliable: what happens when parts of the subject are, by nature, soft and devoid of hard edges? Does the system calculate a false distance for those parts of the image?

Not to be a spoilsport, but I can see how current cameras with high-pixel-count, ultra-small sensors could afford a second adjacent sensor to calculate distance information directly by stereoscopic means and synthesize DOF blur from it. Perhaps the main image file is RAW and the secondary image file is a JPEG used for stereo comparison and distance calculation. This would let small P&S cameras behave like full-frame cameras in every respect except perhaps high-ISO noise, but an industry standard would have to be agreed upon for the stereo file format.

~Joe
 
What I meant was recording the image (sensor data) step-wise, at every step of the lens's focus setting. This would result in a data cube of images, one per focus setting. By stacking data from selected parts (think of image overlay), one could obtain images with a varied DOF. The drawback is the huge amount of data and the time delay for the read/write process and for stepping the focus setting...
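A rough sketch of how such a data cube might be used, with the stack and its step-to-distance mapping simply assumed to exist (all names here are hypothetical):

```python
# Sketch of working with a focus-bracketed "data cube": one frame per
# step of the lens's focus setting, stacked into a (steps, H, W) array.
import numpy as np

def keep_one_slice(stack, step):
    """Keeping a single slice keeps whatever DOF that frame had; varying
    DOF after the fact would mean blending between neighbouring slices."""
    return stack[step]

def all_in_focus(stack):
    """Opposite extreme: per pixel, keep the frame with the highest local
    contrast, giving an 'everything sharp' composite from the same cube."""
    sharpness = np.abs(np.gradient(stack.astype(float), axis=2))
    best = sharpness.argmax(axis=0)            # (H, W) index of sharpest frame
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```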
 
Stereo imaging through a single lens... never occurred to me before! Thanks Joe, more food for thought. I wonder if anyone has ever tried to create a standardized bokeh test environment. It would probably take a big soundstage with quite an array of lighting inside. Once you establish a bokeh database for a certain lens, let the software extrapolate the effects for your specific image.

maddoc, I seem to remember some discussion on RFF a while back about some technology to allow selective focus after the fact. I don't recall if they explored bokeh and DOF manipulation, though.

Along with the AF system idea, DSLRs are getting up there in AF points (the D3 has over 50). If camera makers added special AF points that only measure distance, spread them evenly across the frame, and somehow encoded those distances in the RAW file against the corresponding points in the image, the camera could build the distance map necessary for this to work. Then you'd just need the right algorithms in post-processing to produce the effects you want.
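As a sketch of that last step, turning a handful of AF-point distance readings into a full distance map by simple interpolation might look like this (the AF coordinates and distances are hypothetical inputs):

```python
# Sketch: interpolate sparse AF-point distance readings into a dense
# per-pixel distance map that a depth-dependent blurring step could use.
import numpy as np
from scipy.interpolate import griddata

def dense_depth_from_af(af_xy, af_dist_m, height, width):
    """af_xy: (N, 2) pixel positions of the AF points (x, y);
    af_dist_m: (N,) measured distances in metres."""
    rows, cols = np.mgrid[0:height, 0:width]
    depth = griddata(af_xy, af_dist_m, (cols, rows), method='linear')
    # Outside the hull of AF points, fall back to nearest-neighbour values.
    nearest = griddata(af_xy, af_dist_m, (cols, rows), method='nearest')
    return np.where(np.isnan(depth), nearest, depth)
```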

Since this is Rangefinder Forum, what about having a separate small lens and sensor off to the side of the camera that takes a simultaneous picture? It would be a wide-angle, large-DOF combination. Encode it into the RAW, then let software triangulate the distance at every point in the main image, given the "baselength" of the digital RF.
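The conversion from that image shift to a distance is the same triangulation a mechanical rangefinder does; a quick sketch with made-up numbers:

```python
# Triangulation for a rectified pair: distance = focal length (in pixels)
# times baselength, divided by the pixel shift (disparity).
def distance_from_shift(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

# e.g. 4000 px effective focal length, 60 mm baselength, 20 px shift:
print(distance_from_shift(4000, 0.06, 20))   # 12.0 metres
```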
 