I wanted to follow up on this, if I may, Chris. I am trying to understand the mechanism by which one piece of software would produce better or worse color than another on a given monitor. The RAW file is, more or less, just a set of bits describing what the sensor saw; the color space and rendering are ultimately constrained by the monitor's gamut (or the printer's, if that's in play).
So, are you saying that the free software you've used is deficient because:
- It is incorrectly reading the RAW file (which presumably has a published standard).
- It is reading the file correctly but not properly mapping the results to color (see the sketch just below this list).
- It is set up with the wrong defaults.
- It is too limited in features compared to what you prefer.
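To make the second and third items concrete, here is a rough sketch of where two converters can legitimately diverge on the exact same bits. I am assuming the Python rawpy/LibRaw bindings and a made-up file name purely for illustration; neither is something you mentioned, and this is not a claim about how your particular software works.

```python
# Sketch (assumed: rawpy/LibRaw bindings, hypothetical file name) of how the
# same RAW bits can come out with different color depending on the defaults
# and choices a converter makes.
import rawpy

PATH = "example.CR2"  # hypothetical file name

with rawpy.imread(PATH) as raw:
    # Rendering A: camera white balance, sRGB output, no auto brightening.
    render_a = raw.postprocess(
        use_camera_wb=True,
        output_color=rawpy.ColorSpace.sRGB,
        no_auto_bright=True,
    )

with rawpy.imread(PATH) as raw:
    # Rendering B: auto white balance, Adobe RGB output, default auto brightness.
    render_b = raw.postprocess(
        use_auto_wb=True,
        output_color=rawpy.ColorSpace.Adobe,
    )

# Both renders "read the file correctly", yet the pixel values differ, because
# white balance, the camera-to-working-space matrix, the output color space,
# and the tone curve are choices the software makes, not facts stored in the file.
print((render_a != render_b).any())
```

So even with a correct decode, "mapping the results to color" and "defaults" are real places where two programs can part ways.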
I ask this because I am having trouble understanding the mechanism of failure here. The entire signal chain is digital in this situation, so there should be no loss of fidelity from capture to rendering (assuming calibrated devices) unless the software is buggy or misimplemented.
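To put that "bits are bits" point another way: the sensor mosaic itself is just integers, so any two programs that decode the file correctly start from byte-identical data. Again only a sketch, assuming rawpy and a hypothetical file name:

```python
# Sketch (assumed: rawpy, hypothetical file name) of the deterministic part of
# the chain: the undemosaiced sensor values are just a fixed array of integers.
import hashlib
import rawpy

with rawpy.imread("example.CR2") as raw:
    mosaic = raw.raw_image_visible  # the raw Bayer mosaic, before any rendering
    digest = hashlib.sha256(mosaic.tobytes()).hexdigest()

# Any software that decodes the file correctly starts from this exact byte
# stream; if the final color differs between programs, the divergence happened
# downstream, in the demosaic, color mapping, or defaults, not in the capture.
print(digest)
```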
In my direct observation, for example, Darktable gives me very predictable color from RAW files. However, I stipulate that I am a 99% monochrome film shooter and you may well be able to see nuances of color variation to which I am blind.