richard_l
Well-known
T_om said: Would that be an emulation or a simulation?
Since digital cameras emulate real cameras, I'd say it's an emulation. <ducking and running>
Tom
PS: I'M KIDDING RICHARD. Don't want to start any hair fires.
zeos 386sx
Well-known
wdenies said:On the site:
http://www/thelightsrightstudio.com/digitaldarkroom
you will find a good tutorial on how to make and to apply your own Photoshop filters.
Wim
Thanks Wim, but that should be:
http://www.thelightsrightstudio.com
> Since digital cameras emulate real cameras, I'd say it's an emulation. <ducking and running>
And yet another great thing about MY DIGITAL CAMERA is that I can load it with film, using an accessory back that Nikon made to convert the DCS200 and DCS400 series cameras to use film. I use the MF-20 (Manual-Film) and the DCS200 becomes a real N8008s. The DCS420 becomes an N90s with the use of the MF-25 or MF-26.
wdenies
wdenies
Sorry, typing error.
Read www/ as www.
jaapv
RFF Sponsoring Member.
T_om said: PS: Man I gotta remember "If you didn't come by train, you are not here" line. I intend to shamelessly and ruthlessly plagiarize that statement the very next chance I get.
Tom, I herewith renounce any copyright !
wdenies
wdenies
My last and final comment on "quality".
A Dutch friend prints his pictures exclusively with old techniques (bromoil, oil, gum, ...); he calls them noble techniques.
His prints lack sharpness and detail, but..... they are fabulous!
DaveSee
shallow depth of field
I've been trying to catch up with this thread, and will jump in here... wow, and I thought "double-dutch" jump roping was difficult! (Oh, and for the Dutch among us, I refer to a game where two ropes are spun between two persons while others attempt to keep their bodies from catching either... I do not know where the term "double dutch" originates, and will leave this for PMs...)
Brian Sweeney said: Jaap,
Lucky are those who do not understand computer-speak. I've worked with computers for 30 years and digital imaging for 25. In 1992, I talked to Kodak about using a CCD without the IR cutoff filter in it. It was from knowing the spectral response of the CCD and asking why the off-the-shelf camera did not work with IR. The Army later asked for the same thing and Kodak came out with an Infrared version of the then new Kodak DCS200.
OK, thanks Brian for the clarification, but did you address the "emulate/simulate" topic? I understand the Apple, or any computer, issues with emulating old instruction sets, but you did not take the "emulate/simulate" discussion further... particularly with respect to the discussion of the hardware (CCD, sensor) and software (camera OS, PhotoShop).
Brian Sweeney said: I've written "emulator" and "simulation" software for a long time, and I thought the conversation on the difference was funny. I had to throw the computer terms in, just to try to lighten it. Apple is about to switch processors "again" and will come out with an emulator to allow old programs to run on the new machines. If the past is any indication, the software will be so slow and "buggy" that it will be virtually unusable on the new machines. Intel CPUs have "modes" in them that execute old software in hardware, and emulators are not required. So I can run my 1984 DOS software on a new Pentium 4 without problems. I used some 20-year-old software tools to read the RAW images from the Kodak DCS200 to convert them to a new file format that Photoshop could read. In doing so, I found the images were larger (more pixels) than what the original Kodak software provided and that the range from black to white (pixel values) was greater. My software produced a larger image with more grey-scale values than what the Kodak software could do. I think it was trying to apply some sort of response curve to the image to mimic what you would get with film.
Hardware CCDs or sensors may be engineered for sensitivity to *types* of light which reach them; however, the signals one may connect to via the hardware interface are only part of the process(ing of the data). As you (Brian) state, companies produce CCDs and CMOS devices optimized for light conditions, including IR filters. This engineering is not dissimilar to chemists designing film emulsions. The difference is that a COTS RF camera (FED, Canon, Leica or CV, etc.) may *load* any type of film meeting its mechanical capabilities, and thus escapes the "hardwired/softcoded" digital apparatus's programming. This, I think, is what the "nothing looks like Tri-X" folk are commenting on.
[COTS is "Commercial Off The Shelf"]
Thus, a digital camera--and its OS and sensor sub-system--may emulate only certain types of film behaviors. PhotoShop, its plugins or The Gimp may only simulate a visual effect from the camera data it receives, as if, say, a red filter had *been applied* when the data was recorded. It's this last bit that has digital folks running off at the mouth about RAW files: RAW files are not *pure*, but the data left over after the camera OS has had its way with the sensor's I/O. You, Brian, know that this RAW file is always the data of the algos which created it, whether Kodak, Canon, or others... that's why you jumped in and hacked some FORTRAN... to make your own RAW (sic) file!
From a cost-benefit analysis, digital camera manufacturers will ship primarily color-sensitive sensor sub-systems, and then offer "professional models" with *emulators* of B/W. Now, here's the tricky part: do you, dear photogs(Bertram?)
want the camera to emulate B/W, or give you a larger data-set(aka file) with which to later simulate B/W when you, ah, want *that kind* of image? Think: Winogrand not having left 300+ rolls of film, but 300(x36?)+ RAW files un-simulated at his passing...
When Tom, and others, state that "digital is a whole new ballgame," they refer not to the image the camera "captures," but the light effects captured and the simulations they may run with this data to produce an *image*. Double-dutch redux, ad infinitum!
B/W is a result/function of the chemist's emulsion (emulation), not the light--oh, yeah, light!--through the lens. Programmers, not chemists, are constrained by the data they are presented... digital can do the "Tri-X look," eventually, but why? Just give us the data--something PhotoShop/Gimp-capable--and I'll do the rest.
For me, I'll take the digi-cam that provides the least non-emulated, full of light reading on the sensor: color is in the eye of the manipulator in this digital medium. Problem is, cameras are "optimized/priced" for those who want that "film-effect" these days.
Brian, is that FORTRAN code public? It's the new developer recipe.
with kind rgds,
Dave
zeos 386sx
Well-known
DaveSee said: Just give us the data--something PhotoShop/Gimp-capable--I'll do the rest.
For me, I'll take the digi-cam that provides the least non-emulated, full of light reading on the sensor: color is in the eye of the manipulator in this digital medium. Problem is, cameras are "optimized/priced" for those who want that "film-effect" these days.
Dave
Dave,
I read your post and then read it again (Is that a redux?)
Are you suggesting a (PROFESSIONAL?) camera that isn't programmed for a certain visual output but one that has a true RAW output? Basically, the photog would be responsible for the "characterization" of the image file in the computer. For instance, plug-ins could be applied to a file to provide "emulations" of any film stock response (color or B&W) without having to deal with "presets" applied by the camera manufacturer.
Does that have anything to do with what you just said?
Having no NDA with Kodak, I stared at HEX dumps of .kc2 files. If anyone gets an OLD DCS200 camera, the ".kc2" file format is completely unsupported by software past Win98 and Photoshop 4.0. The Kodak software will get the files from the camera's built-in 80 MB unremovable SCSI drive in ".kc2" format. I wrote code to convert those to .bmp files, which is a format Photoshop supports.
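The gist of that conversion is simple even if the .kc2 details aren't public. Here is a minimal sketch in Python rather than FORTRAN; the header size, dimensions, and big-endian 16-bit sample layout below are hypothetical stand-ins, not the real .kc2 format.

```python
# Sketch of a RAW-to-viewable conversion like the .kc2 -> .bmp tool.
# HEADER_BYTES, WIDTH, HEIGHT, and the sample format are hypothetical.
import struct

HEADER_BYTES = 0      # assume a headerless dump for the demo
WIDTH, HEIGHT = 4, 2  # tiny demo frame

def raw16_to_gray8(raw: bytes, width: int, height: int) -> list:
    """Unpack big-endian 16-bit samples and stretch them to 0..255.

    Stretching the actual min..max range (instead of applying a canned
    response curve) is what keeps the extra grey-scale values Brian
    found that the Kodak software was throwing away.
    """
    n = width * height
    samples = struct.unpack(f">{n}H", raw[HEADER_BYTES:HEADER_BYTES + 2 * n])
    lo, hi = min(samples), max(samples)
    span = max(hi - lo, 1)  # avoid divide-by-zero on a flat frame
    gray = [round((s - lo) * 255 / span) for s in samples]
    return [gray[r * width:(r + 1) * width] for r in range(height)]

# Demo: a synthetic 4x2 frame of 12-bit values
frame = struct.pack(">8H", 0, 512, 1024, 2048, 2560, 3072, 3584, 4095)
pixels = raw16_to_gray8(frame, WIDTH, HEIGHT)
```

Writing the 8-bit rows out with a .bmp header is then just bookkeeping.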
Most CCDs and CMOS sensors are silicon, and have pretty much the same response curves. Nothing fancier than a cutoff filter to remove most of the IR light before it hits the CCD. Some materials would be closer to film. The Nikon FM and Pentax MX used gallium arsenide photodiodes, which are much closer to film response, for this reason. Selenium is also closer to film. Manufacturers make CCD arrays out of indium gallium arsenide (InGaAs) to extend sensitivity farther into the IR. Other detectors extend into the mid-IR and far-IR. These arrays are expensive, and mostly used for scientific and military markets. So consumer cameras get silicon arrays with IR cutoff filters. These cameras do not emulate film; rather, both they and film are designed to mimic the response of the human eye.
As far as the Emulate vs simulate discussion goes, if anyone needs a Z80 emulator for a PC, I have one.
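To put numbers on the cutoff-filter point: the camera's effective response is just the sensor's raw sensitivity multiplied, wavelength by wavelength, by the filter's transmission. A toy sketch (all sensitivity numbers are illustrative, not measured silicon data):

```python
# Toy model: effective response = sensor sensitivity x IR-cut transmission.
# The sensitivity values are invented for illustration only.

# (wavelength in nm) -> relative silicon sensitivity
silicon = {400: 0.3, 550: 0.8, 700: 1.0, 900: 0.7, 1000: 0.3}

def ir_cut(nm: float, cutoff: float = 680.0) -> float:
    """Idealized hot-mirror: full transmission below the cutoff, none above."""
    return 1.0 if nm < cutoff else 0.0

# Visible wavelengths pass through unchanged; the strong near-IR
# sensitivity of silicon is simply thrown away by the filter.
effective = {nm: s * ir_cut(nm) for nm, s in silicon.items()}
```

Remove the filter (as in the DCS200IR) and that discarded 900-1000 nm sensitivity is exactly what you get back.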
pfogle
Well-known
Woops, I didn't read to the end of the thread before writing this, so I think all the points have been already made. Apologies...
I'm sorry to be so stupid, but I just don't get this argument.
A colour film is roughly three monochrome layers. A digital array (colour) is three monochrome sets of pixels. A b/w emulsion is a single layer with (usually) a broad spectral response that covers the whole range of the colour film layers.
Apart from subtle effects (grain, etc) there's surely no difference at all between
a) selecting a single layer from colour film (via a filter)
b) selecting a single channel in digital, either via a filter or via post-processing
c) selecting a part of the spectral response of a b/w emulsion via a filter
- as long as the output is b/w.
Obviously, in real life, you can't select a pure channel with colour film, as there will be overlap in the sensitivities of the layers. This can be emulated, though, in digital, by channel mixing.
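Option (b) with channel mixing is a few lines of arithmetic. A sketch, where the mixing weights are illustrative (a "red filter" look weights red heavily and suppresses blue):

```python
# Mixing RGB channels to mimic shooting B/W through a colored filter,
# like Photoshop's Channel Mixer in monochrome mode. Weights are
# illustrative, not calibrated to any real filter.
def mix_to_bw(pixel, weights=(0.9, 0.2, -0.1)):
    """Weighted channel mix, clamped to the 0..255 range."""
    r, g, b = pixel
    wr, wg, wb = weights
    value = r * wr + g * wg + b * wb
    return max(0, min(255, round(value)))

# A blue sky darkens dramatically under the "red filter" mix while
# skin tones stay bright -- the classic red-filter-on-B/W-film effect.
sky = (60, 120, 220)
skin = (220, 170, 140)
```

Negative weights are what let you model the layer overlap mentioned below, which a single physical filter over B/W film cannot do.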
IR is a bit different, in that it is usually a simulation in digital, unless you actually use the IR part of the spectrum and use an IR filter, just as with film. However, digital sensors are in fact sensitive to IR, and can often be used to get genuine IR images. Often, though, sensors have a built-in anti-IR filter that makes this more difficult. Early cameras (the Olympus 2020 was a famous example) used less IR filtering and could make excellent IR images that rival specialized IR emulsions.
Phil
ps AFAIK, the only advantage of a monochrome sensor is the higher resolution, though, to be honest, I doubt if you'd see the difference between that and, say, a D1s image in monochrome mode. I may be wrong here...
richard_l said: Just to clarify....
I think of a filter emulation as a program or algorithmic procedure which mimics the action of an optical filter. It will work on any image without modification.
A simulation only mimics the result of using an optical filter.
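One way to make that distinction concrete in code. The "emulation" works on any RGB image because it models what the glass does to each channel before the "film" sees the light; the "simulation" only reshapes the tones of an image that is already B/W, so it can only mimic the result on a particular kind of scene. All coefficients here are illustrative.

```python
# Emulation vs. simulation of a red contrast filter, per richard_l's
# definitions. Coefficients and the gamma value are invented examples.

def emulate_red_filter(rgb):
    """Acts like the filter: attenuate green and blue light, then
    record the remaining luminance. Works on any RGB input."""
    r, g, b = rgb
    luminance = 1.0 * r + 0.25 * g + 0.05 * b
    return min(255, round(luminance))

def simulate_red_filter(gray):
    """Acts on the result: push midtone contrast the way a red-filtered
    scene tends to look, knowing nothing about the original colors."""
    return min(255, round((gray / 255) ** 1.4 * 255))
```

The first function needs color data and so cannot run on a B/W scan; the second runs on anything but is only a guess about what the filter would have done.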
zeos 386sx
Well-known
Brian Sweeney said:So consumer cameras get silicon arrays with IR cutoff filters. These cameras do not emulate film, but they and film are designed to mimic the response of the human eye.
Brian,
Whose human eye? Wouldn't camera companies be more accurate using a scientific standard to generate a response curve - along the lines of an 18% reflectance grey card, for instance?
When I mentioned the DMR examples earlier you pointed me to Jorge's "diary" at the dslrexchange. One of the first things he objected to was the DMR's white balance. Someone else said it was OK, just set it 300K differently (i.e., warmer).
Wouldn't it make more sense to establish an industry wide standard based on something less variable than the response of the human eye? My eyes don't see colors the same way. If I close one eye at a time and look at a color the color looks different through each eye.
It makes more sense to me to focus on making a sensor with the longest dynamic range and resolution with a standard response curve that a photographer could tweak to personal taste using plug-ins - either inserted in the camera via card or later in the computer.
If they used this scientific method, then every film, lens, and photographic paper would respond the same. Vive la différence? That is why I'll stick with the term "mimic". Film and camera manufacturers want to produce pictures that people like, not pictures that are scientifically accurate. It sells more. People like Fuji colors, or Kodak colors. I like the low-contrast images of my Summarit for portraits and the high-contrast pictures from the Nikkors for butterflies and flowers.
My old Kodak DCS420 does not use a "low-pass" filter as most newer cameras do for moiré. It gives some very detailed and high-contrast shots that would be "lost" with the filter.
If you want an image to provide "radiometric" data, you have to do a lot of work to back it out. Usually you set up the equivalent of grey level cards in an image and read their densities. From that you can go on to interpreting film density to "Watts-Steradian". Same with CCD arrays, you usually take objects of known brightness and record the response. For me, that type of work was over 20 years ago.
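That gray-card procedure is essentially a curve fit and an inversion. A sketch under a deliberately simple assumption of a straight-line sensor response; the patch radiances and pixel readings below are invented:

```python
# Radiometric calibration sketch: photograph patches of known brightness,
# fit the response, then invert the fit to back out scene radiance.
# Assumes a linear response; all measurement values are invented.
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Known patch radiances (arbitrary units) vs. measured pixel values
radiance = [1.0, 2.0, 4.0, 8.0]
pixel    = [12.0, 22.0, 42.0, 82.0]   # invented readings: 10*L + 2

gain, offset = fit_line(radiance, pixel)

def pixel_to_radiance(p):
    """Invert the fitted response to estimate scene brightness."""
    return (p - offset) / gain
```

Real film density or CCD response is not this linear, so in practice you fit a curve instead of a line, but the photograph-known-targets-then-invert workflow is the same.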
http://mywebpages.comcast.net/brianvsweeney/spiny3.jpg
> Brian, you have that FORTRAN code public?
The exe in this file assumes command.com is in the root directory of the C drive.
http://mywebpages.comcast.net/brianvsweeney/KC2FILES.ZIP
XP has moved the DOS command.com processor to the Windows subdirectory:
http://mywebpages.comcast.net/brianvsweeney/KC2_BMP.EX1
The above file needs to be renamed to a ".exe" to run in an XP DOS window.
A sample raw file is included, and FORTRAN and Assembly source code. Free for use, of course.
http://mywebpages.comcast.net/brianvsweeney/DOSIO.ASM
http://mywebpages.comcast.net/brianvsweeney/EXEC.ASM
http://mywebpages.comcast.net/brianvsweeney/KC2_BMP.FOR
A second RAW file from the DCS200IR.
http://mywebpages.comcast.net/brianvsweeney/KC2FILE2.ZIP
zeos 386sx
Well-known
The Sony/Philips standard that is required for CD players didn't slow down music sales.
My point is that such a standard could be varied by the photographer. In the same way that a film camera can use any film that can be loaded in it, a "standard" CCD/CMOS response could be varied by the user to look any way they want - including "low-contrast...for portraits and...high-contrast...for butterflies and flowers". This could be done with a "style" card in the camera or later, via program plug-in, in a computer. Kodak and Fuji could sell "style" cards that would take the standardized raw image and apply specific color characteristics to it. A photographer wouldn't be limited to a single look from a camera but could have a number of pictures, all taken at the same time, that would have different looks. There would be an immediate market among people who print directly from their cameras. It would allow them some editing control without having to master Photoshop or even a computer.
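In software terms the "style card" idea is just a lookup table applied over a standardized tone scale. A sketch, where the two styles are invented stand-ins for a vendor "look":

```python
# "Style card" sketch: the camera writes one standardized raw tone scale,
# and a card is just a lookup table mapped over it. Both curves below
# are invented examples, not real Kodak or Fuji response data.
def build_lut(curve, size=256):
    """Precompute a 0..255 lookup table from a tone-curve function
    defined on the normalized range 0.0..1.0."""
    return [min(255, max(0, round(curve(i / (size - 1)) * 255)))
            for i in range(size)]

punchy = build_lut(lambda x: x ** 0.8)   # lifted midtones
flat   = build_lut(lambda x: x)          # identity: leave the data alone

def apply_style(pixels, lut):
    """One standardized capture, many looks: same data, different card."""
    return [lut[p] for p in pixels]
```

Because the lookup is applied after capture, the same frame can be rendered with every card you own, which is exactly the "number of pictures, all taken at the same time" point above.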
My Nikon D1 pretty much allows that: high-contrast vs. low-contrast, setting the ISO equivalent for the exposure, etc. The spectral response of the CCD is fixed, but you can filter it optically or post-process it later. The white-balance information is stored with the image when set to "RAW" mode, and you can set the color balance as you please with Nikon Capture, Nikon View, or Photoshop. You can adjust the red-green-blue image planes individually to suit yourself. You can also force the camera to use custom white-balance settings, with in-camera processing to produce the JPEG, so you do not have to use the computer for this processing. But the latter takes some mastering, and I find Photoshop is easier.
T_om
Well-known
zeos 386sx said: There would be an immediate market among people who print directly from their cameras. It would allow them some editing control without having to master Photoshop or even a computer.
And this is probably coming down the pike... I just do not know when.
But if maximum quality B&W is the goal, this will never be the proper tool because it must necessarily generate an "average" image taking into consideration all likely shooting scenarios.
I'll stick with conversions in PhotoShop with the powerful tools available there.
By the way, here are some additional links re: the conversion process and even the argument about doing away with B&W film altogether.
Please note: Luddites PLEASE refrain from reading the link below on eliminating B&W film altogether. It recommends color film even if scanning because the digital conversion is more powerful. If you think digital is the invention of the devil, it will most certainly cause facial tics at the least, apoplexy and uncontrollable seizures at worst.
Here is a Russell Brown movie tutorial (PhotoShop guru with excellent tutorials presented in an EXTREMELY irritating manner): http://av.adobe.com/russellbrown/ColortoBW.mov
Here is a simple action that does a pretty good job. http://epaperpress.com/psphoto/index.html
Using color film to produce the best B&W image (also applies to digital capture): http://robertdfeinman.com/tips/tip12.html
OK, OK, OK, so you want to scan B&W film anyway. Here is the way to get the best from your scans: http://robertdfeinman.com/tips/tip11.html
Here is a free "one touch" conversion action that is actually quite good: http://www.digidaan.nl/indexframedigidaan.html?channelmixer/index.html
Have fun.
Tom
PS: If you REALLY want your head to spin, go here: http://www.echalk.co.uk/amusements/OpticalIllusions/colourPerception/colourPerception.html
zeos 386sx
Well-known
Tom,
Thank you for the reading material...
I'm looking for max B&W. I'll stick with Photoshop too - even though I'm a rank beginner with it.
I would add to that list the website that Wim suggested earlier.
http://www.thelightsrightstudio.com
It has some excellent tutorials. A couple are even in Apple Quick time .mov format and visually walk you through the Photoshop process.
zeos 386sx
Well-known
Tom,
I had trouble getting the movie at the following to run:
<http://av.adobe.com/russellbrown/ColortoBW.mov>
Even though I have Quick Time loaded it doesn't seem to run at this site.
For those interested in Russell Brown's tips try the following:
http://www.russellbrown.com/tips_tech.html
T_om
Well-known
zeos 386sx said:Tom,
I had trouble getting the movie at the following to run:
<http://av.adobe.com/russellbrown/ColortoBW.mov>
Even though I have Quick Time loaded it doesn't seem to run at this site.
For those interested in Russell Brown's tips try the following:
http://www.russellbrown.com/tips_tech.html
Runs fine for me and I just re-tried the link I posted. Works great.
It might be you do not have the latest viewer. I would check for updates to the QT player.
Tom
richard_l
Well-known
pfogle said:I'm sorry to be so stupid, but I just don't get this argument.
A colour film is roughly three monochrome layers. A digital array (colour) is three monochrome sets of pixels. A b/w emulsion is a single layer with (usually) a broad spectral response that covers the whole range of the colour film layers.
Apart from subtle effects (grain, etc) there's surely no difference at all between
a) selecting a single layer from colour film (via a filter)
b) selecting a single channel in digital, either via a filter or via post-processing
c) selecting a part of the spectral response of a b/w emulsion via a filter...
That wasn't the issue. The issue was post-processing a color image to mimic the effect of a color optical filter versus post-processing a B/W image to mimic the effect of a color optical filter.
DaveSee
shallow depth of field
zeos 386sx said:Dave,
I read your post and then read it again (Is that a redux?)
Are you suggesting a (PROFESSIONAL?) camera that isn't programmed for a certain visual output but one that has a true RAW output? Basically, the photog would be responsible for the "characterization" of the image file in the computer. For instance, plug-ins could be applied to a file to provide "emulations" of any film stock response (color or B&W) without having to deal with "presets" applied by the camera manufacturer.
Does that have anything to do with what you just said?
As Brian and others will attest, the sensor itself has limits to its sensitivity to light... then there is the camera software--and hardware processor--which gathers this (truly) raw data and writes it to *a* file: TIFF, JPEG or camera-OS-specific RAW. To simplify the system, a JPEG is always written for display in the omnipresent LCD of digi-cams, "professional" or otherwise. The TIFF and camera-OS-specific RAW file is, or may be, written to the storage medium as well.
I am not a "professional" photog, yet my costs to produce images are the same... although not on the same scale.
See Brian's comments in this thread about optimized, hardware-specific sensors dedicated to certain tasks. He has some clear comments on what, say, a government agency wants (contrast), and what a wildlife (street or meadow) photog wants (tonal balance?). Yes, digital sensors, like film, are optimized for the task... and that brings us back to "Is digital poorly optimized for B/W imagery?"
IMHO, digital sub-systems, "professional" or otherwise, are designed for color and film effects because that's what *most* photogs expect. That said, with the right amount of data, who knows what a digital "capture" may--through PhotoShop or GIMP--provide to an image maker? Why is Adobe promoting DNG as a format? Because it *opens* the possibility for digital data to become imagery other than what we (photogs) expect from film. Now, just how much DNG provides in scope remains a debate, not unlike that between Ilford, Agfa (RIP) and Kodak products.
rgds,
Dave