Understanding Selective Tone control

OK, I've found another simple trick for pushing things inside the colorspace you work in, so you don't need to lean on the Selective Tone sliders so much.
See my videos: I use the Tone Curve and show that color accentuation, contrast and Selective Tone are much less useful for containing the colors that the moon and sun blinkies show as out of the sRGB gamut, and that the color rendering profile needs to be addressed too.
In clip one, the clipping is accentuated by colors. I was fooling around to see whether contrast, Selective Tone, saturation protection and the like do anything to the blinkies for oversaturated and blown parts, while using the Tone Curve as protection by lifting and lowering the 0-255 range to 4-245.
Using the Tone Curve to correct this shows that I can apply a great amount of contrast, pushing the image to "pop" without blowing a gasket, in combination with color-specific HSL correction.

In words: when you have colors clipped at the bottom (oversaturated) or blown out by too much luminance, I can resolve this quickly with the Tone Curve.
The only thing I'm not sure of yet, and still have to test, is whether I'm just cutting off data or "pushing it inside" by changing the numbers.
But it's a fast and interesting way to adjust highlights and blacks that fall off the chart.
(I hope someone who really knows colorspaces can help explain what I'm seeing on my screen.)

My conclusion is that my Huelight profile isn't calibrated for v3 yet :wink: and that the generic rendering profile works better in these situations for keeping hue, saturation and brightness inside the sRGB gamut. (Did I use the correct terms to describe my finding? Please correct me if not, to keep things clean.)
My second conclusion is that the Selective Tone and Contrast sliders aren't as powerful as I first thought, and are better seen as a kind of fine tuning. I tried Smart Lighting and center-weighted exposure correction to recover those blown, saturated blinkie spots, but HSL and the Tone Curve did it much faster and were less destructive to the rest of the image's colors.
Still, I suspect I'm making a mistake by thresholding the blacks and lowering the highlight value, since that cuts off image data directly, while the sliders pull things back inside by selective recovery.
I was convinced that the Color Rendering tool, with its Intensity and Protect Saturated Colors sliders, did much more of the "recovery" and "helping".
I really hope this kind of management and use of the color-related tools in the palettes gets described in much more detail in the user manual or in webinars/tutorials, explaining how to use each tool, in which way, and what its restrictions are. Fooling around does help you find tricks and workarounds, but a constructive, logical way of working through image issues to get the best out of them requires some technical background.

Yes, I think that would be very helpful. I’ve used tone curve mostly to deal with overexposed highlights (starting before PL3). But a better understanding of alternatives and their pros & cons would be useful.

Yes, I started to rediscover the Tone Curve because of this topic:
"blacklevel" and ClearView Plus
I tried to get images shot through a water surface to look more "clear", and this opened my eyes to using the Tone Curve not only as a "contrast" curve but also as "dehaze".
And now color "management".
Same with the HSL tool: playing with it while watching the histogram with "moon" and "sun" on, in L or R or G or B only, shows some control of the spikes if you mess around with the saturation and luminance sliders. (Of course your colors change, but in combination with the other tools it's usable for getting things done; the problem is finding out which to use when.)

I don’t believe the phrase “out of gamut of sRGB” makes sense.

RAW images have a colorspace: the colorspace of the sensor. That’s it. Any camera setting for Adobe RGB, sRGB or whatever is used only when the camera creates a JPEG. If you are working with a RAW file, this setting is irrelevant.

I believe the sensor color is interpreted using the chosen Color Rendering, but I don’t think this means that it’s placed in an sRGB colorspace—I would expect it to be stored in a much bigger colorspace.

When you view an image in PL, the color is converted from PL’s internal colorspace to the monitor’s colorspace. When you print it, it is converted to the printer’s colorspace. When you export it, it is converted to the chosen colorspace. All of this is most useful if the starting colorspace is a large one (such as XYZ) rather than a small one like sRGB. The colors that are out-of-gamut could be different in all these cases.
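To make the conversion step concrete, here is a minimal Python sketch of the principle: map a color from the large CIE XYZ space into linear sRGB with the standard D65 matrix, and treat any channel that lands outside [0, 1] as out of the sRGB gamut. This illustrates the idea only; it is not PhotoLab's actual code.

```python
import numpy as np

# Standard XYZ (D65) -> linear sRGB matrix (IEC 61966-2-1).
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def xyz_to_linear_srgb(xyz):
    """Convert an XYZ color to linear sRGB; values may fall outside [0, 1]."""
    return XYZ_TO_SRGB @ np.asarray(xyz, dtype=float)

def is_out_of_srgb_gamut(xyz, eps=1e-6):
    """A color is out of the sRGB gamut if any linear channel is < 0 or > 1."""
    rgb = xyz_to_linear_srgb(xyz)
    return bool(np.any(rgb < -eps) or np.any(rgb > 1 + eps))

# A deeply saturated red: representable in a large space, but its green
# channel goes negative in sRGB, so it would clip on conversion.
print(is_out_of_srgb_gamut([0.30, 0.15, 0.02]))  # True
```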

With a large colorspace (like XYZ), you might never have an out-of-gamut color and it is not clear that PL is giving an out-of-gamut indication when you turn on the clipping indicators. It looks like PL converts the color into an 8-bit/channel RGB value (and I’m not sure how—is this the monitor’s colorspace?), which it displays under the histogram. Any channel with a value of 255 is marked as a highlight clip; if the value is 0, it is a shadow clip. (I hope the internal value is more than 8-bits/channel or else we are all wasting our time producing 16-bit TIFFs).

A color that is not indicated as clipped might be out-of-gamut. And colors like (0,0,0) and (255,255,255), which are considered clipped, might not be out-of-gamut.
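The clipping rule described above can be sketched as follows (hypothetical logic based on the observed behaviour; PL's real criterion is not documented):

```python
def clip_flags(rgb8):
    """Return (highlight_clipped, shadow_clipped) for an 8-bit RGB triple,
    using the rule observed above: any channel at 255 -> highlight clip,
    any channel at 0 -> shadow clip. Both can be true at once, and neither
    flag proves the color is actually out of gamut."""
    highlight = any(c >= 255 for c in rgb8)
    shadow = any(c <= 0 for c in rgb8)
    return highlight, shadow

print(clip_flags((255, 128, 0)))  # (True, True): flagged, yet pure orange may be in gamut
```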

By adjusting the Tone Curve endpoints, you prevent any channel from having a value of 0 or 255 and thus from ever showing a clipped value. The Tone Curve seems to be near the end of the processing pipeline, so this prevents any prior controls from ever generating a clipped value. Whether this is a good thing or not, I’m not sure.
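For illustration, moving the curve endpoints from (0, 255) to (4, 245) amounts to a linear remap per channel, something like this sketch:

```python
def remap_endpoints(v, out_black=4, out_white=245):
    """Linearly rescale an 8-bit channel value from [0, 255] into
    [out_black, out_white]. No value can reach 0 or 255 afterwards,
    so the clipping indicators stay silent -- whether or not the
    underlying color was ever out of gamut."""
    return out_black + (v / 255.0) * (out_white - out_black)

print(remap_endpoints(0), remap_endpoints(255))  # 4.0 245.0
```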

I am far from an expert in the subject of colorspaces, particularly with regard to RAW processing and preview renderings. Anyone who is an expert should feel free to correct me.

I'm not certain how this works, but I know the initial "colorspace" is the one from the sensor, stored in the raw file. A raw file doesn't contain image colors; it contains numbers representing readings from the sensor, which raw developers interpret into a colorspace like Adobe RGB or sRGB according to their testing of that sensor. Most sensors are capable of recording more wavelengths than we can see or need. Much of it is filtered out by a UV/IR filter, for instance; that filter is what gets taken out for IR cameras, bringing the sensor towards a full-spectrum reading.
The preview we see in the raw developer is in a real colorspace like sRGB or Adobe RGB. It renders the numbers of the raw file into the colorspace I set up in preferences, if I can; DxO follows the camera setting, I believe. Hence the clipping tools: they show where the numbers fall off the chart.

And by changing the sliders and such, you recalculate the numbers of the raw file into different hue, saturation and luminance levels. By compressing? I don't know that for sure, but I think it does.

Using a larger colorspace than your monitor can handle will affect the outcome, because you are modifying colors in the dark, outside what your screen can display.

One thing I've never understood is how the tools work the color. I think you need a tool like the one macOS has for comparing colorspaces in 3D (I added a YouTube video), but then in real time, so you can see which hue actually falls outside the sRGB colorspace. Say the reds are bulging out: you could choose to compress all channels, only the high(lights) and/or the oversaturated parts (shadows and blacks), or shift the image towards where you have empty space (by exposure compensation), or just recalculate (compress) the red channel (the HSL tool?).

I think it's 12 bits max per channel for color definition in a raw file (I can't recall where I read that), while an sRGB JPEG is 8 bits and a TIFF has room for 16 bits/channel to define colors.
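For reference, the step counts those bit depths imply (note that many current sensors actually record 12 or 14 bits per channel):

```python
for name, bits in [("raw (typical)", 12), ("raw (many newer bodies)", 14),
                   ("JPEG", 8), ("TIFF", 16)]:
    print(f"{name}: {bits} bits -> {2**bits} levels per channel")
# raw (typical): 12 bits -> 4096 levels per channel
# raw (many newer bodies): 14 bits -> 16384 levels per channel
# JPEG: 8 bits -> 256 levels per channel
# TIFF: 16 bits -> 65536 levels per channel
```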

I think that when I compress the tone curve by lifting and lowering, I'm actually telling it to recalculate the raw numbers inside a smaller range of possibilities: 10-245 instead of 0-255, which is 20 steps less across both ends, about where the clipping of the "color" tools has its threshold. (Edit: I think the moon and sun use roughly 0-10 and 245-255 as the thresholds for starting to show clipping.)
So that's what I want to know: if I use the Tone Curve, do I cut off the edges of the sRGB colorspace, or do I force it to recalculate the raw file's numbers into this new, smaller range of "black and white"?
I fear it cuts off, as in no real black and no real white.
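A small sketch makes the two possibilities concrete (hypothetical, since PhotoLab's internal behaviour isn't documented): hard clipping flattens everything beyond the limits and destroys the separation there, while rescaling squeezes the full range inside the new endpoints and keeps tones distinct.

```python
import numpy as np

values = np.array([0, 2, 8, 200, 250, 255], dtype=float)

# Possibility 1: hard clip -- everything below 10 or above 245 is flattened,
# so the detail that lived there is destroyed.
clipped = np.clip(values, 10, 245)
print(clipped)   # [ 10.  10.  10. 200. 245. 245.]

# Possibility 2: rescale (what moving the curve endpoints should do) --
# the full 0-255 range is squeezed into 10-245, so 0 and 2 stay distinct.
rescaled = 10 + values / 255.0 * (245 - 10)
print(rescaled.round(1))  # [ 10.   11.8  17.4 194.3 240.4 245. ]
```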

I agree that the real stamp of applying a colorspace comes at export. Then all numbers representing a color outside the chosen colorspace, such as sRGB, are clipped and not stored in the file.

The clipping indicators in PhotoLab, esp. the moon, are a bit misleading, because the latter combines black point clipping info with display out-of-gamut warnings. It would really be useful to have a separate OOG warning for the display and for the working colour space – these two might be more or less similar if you have a wide gamut display very close to Adobe RGB, but what about the users who don’t have such displays?

When you have a display close to sRGB, and you edit your image paying attention to the Moon indicator, you effectively do sRGB soft-proofing for sRGB output. But what if you want to export the file in Adobe RGB, e.g. for printing or for people with wide gamut displays? Your edit won’t be optimized for those outputs because your display is incapable of showing you colours outside of sRGB, and you have no working colour space out-of-gamut warnings (in PhotoLab’s case that would be an out-of-Adobe-RGB-gamut warning) – you will throw away important colours that PhotoLab is capable of rendering, despite its limiting working colour space.

And one last point – sometimes it’s not worth it to worry about slight out-of-gamut issues because the perceptual rendering intent applied during export should take care of this. Without proper soft-proofing one needs to learn to trust it will do just fine, without you having to resort to the Tone Curve tricks. Currently in PhotoLab you have to do hard-proofing, i.e. export and evaluate if there are no colour rendering issues, or create variants / virtual copies for each kind of output (not really ideal).
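For comparison, this is roughly what LittleCMS-based soft-proofing looks like; a minimal sketch using Pillow, assuming you have an ICC profile for the output device on disk (the file name below is a placeholder):

```python
from PIL import Image, ImageCms

img = Image.open("photo.jpg")  # assumed to already be in sRGB

srgb = ImageCms.createProfile("sRGB")
# Placeholder path -- substitute the ICC profile of your actual output device.
proof_profile = ImageCms.getOpenProfile("my_printer.icc")

# Build a transform that previews, on an sRGB display, how the image would
# look when rendered through proof_profile (defaults: perceptual intent,
# soft-proofing flags).
transform = ImageCms.buildProofTransform(srgb, srgb, proof_profile, "RGB", "RGB")
proofed = ImageCms.applyTransform(img, transform)
proofed.show()
```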


Thanks, sankos (and OXiDant).

Yes, RAW files are just numbers and they are characterized so as to match the numbers with colors in some color space.

I expect RAW processors to use some internal color space. Apparently, Lightroom uses a special version of the ProPhotoRGB color space and it stores channel coordinates in 16 bits. You want a color space that is large enough to encompass all colors that can be captured by all sensors now and in the reasonable future. sRGB, for example, would be a poor choice for an internal color space.

As sankos’ linked post shows, Photolab produces a histogram based on one of three choices in the Preferences dialog: the monitor’s color profile, sRGB or Adobe RGB. I believe this means that, for the histogram, PL converts colors from its internal format to the selected profile. I believe the RGB value displayed under the histogram comes from a similar conversion.

Change the profile in the Preferences dialog and the histogram changes. I did this with one image. Then I saved the image with the Export dialog using the same color space each time (but different Preferences profiles)—the resulting images were identical. PL’s profile preference affects the displayed histogram and RGB value, but not the internal color space.
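That experiment is easy to mimic outside PL: convert the same pixels into different profiles and histogram each result. A sketch with Pillow (the Adobe RGB profile path is a placeholder; any second ICC profile demonstrates the point):

```python
from PIL import Image, ImageCms

img = Image.open("photo.jpg")  # assumed to be in sRGB
srgb = ImageCms.createProfile("sRGB")
# Placeholder: point this at an actual Adobe RGB (1998) ICC file.
argb = ImageCms.getOpenProfile("AdobeRGB1998.icc")

# The same pixels give different histograms depending on the profile
# they are converted into -- which is what PL's preference appears to do.
in_argb = ImageCms.profileToProfile(img, srgb, argb)
hist_srgb = img.histogram()      # 768 counts: 256 per R, G, B channel
hist_argb = in_argb.histogram()
print(hist_srgb == hist_argb)    # False (in general)
```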

I thought I understood clipping, but I don’t think I do. For instance, I have a color swatch that reads (239,27,9). I thought only values of 0 would be indicated by shadow clipping, but turning on just shadow clipping changes the displayed value to (254,255,255). Turning on just highlight clipping changes it to (1,0,0). Of course, PL could base clipping on the luminance value. Unlike RGB, luminance is not displayed as a number, so it’s difficult to know what the value might be.

Like most things in PL, when examined closely, the algorithms become rather mysterious.

Yes, but it also crucially affects the preview you’re seeing. If I set my Preference display profile to sRGB, the preview of images would get really oversaturated because I use a wide gamut monitor. Setting it to Adobe RGB would look better but also wouldn’t be accurate because the only correct characterization of my particular display is the ICC profile I make when calibrating/profiling my monitor.

PhotoLab is the only raw converter I know of which computes the histogram, the colour sampler and the clipping warnings on the basis of the display profile. It makes no sense to me – the histogram should be computed on the basis of either the working colour profile or the output profile (when soft-proofing). It would be nice to have the option to display raw histogram as well (like in FastRawViewer, RawTherapee or darktable). The current display warning is useful, but it’s not the most important thing when we edit our photos.


Thank you for bringing the former threads back in.
I remember again: it started with asking which colorspace the raw file data is converted into.
The maximum colorspace in PhotoLab is Adobe RGB, and some of you would like to have ProPhoto for printing.

About the histogram (to see if I fully understand):
1. The histogram shown in camera is the one the camera processes from the raw file according to your JPEG setting, Adobe RGB or sRGB, in the internal processor? It's not the LRGB interpretation of the sensor's native colorspace.
2. DxO has LRGB plus clipping indicators for blacks (moon) and highlights (sun), and it shows the RGB numbers, 0-255 per channel, when you hover over the image, but you don't see a crosshair in the histogram showing where that point of the image falls. (Somehow I would like that.)
3. I don't know if the histogram palette/tool can be floated so you can enlarge its frame/window (I know I can in another application). That would help with fine-tuning the black point and white point of images, and with seeing which channel is oversaturated (colored spikes clipped at the top of the window).
4. Soft-proofing would need a histogram "in" and a histogram "out", so you can see both colorspaces you have selected.

Resolving flat blacks (native sensor readout 0-0-0) can't be done, I believe: there isn't any detail in them to resolve. But I don't know whether this 0-0-0 is also the Adobe RGB or sRGB black point 0-0-0, or whether the smaller colorspace floats inside the bigger one, so that "black" is not 0-0-0 but 5-5-5 or something.
The reason I ask: if I use a Selective Tone slider (highlights, midtones, shadows, blacks) and stretch the histogram, I push dark shadows towards the black point and highlights towards the white point. These contain color data and detail until they hit the 0-0-0 and 255-255-255 (RGB) borders of the colorspace you defined (sRGB, for instance).

But the working space is Adobe RGB, so when working with the preview set to sRGB I could have some colors outside the sRGB colorspace. So if I use exposure compensation and the Selective Tone sliders to turn down highlights and bright colors, do I "compress" the range of color values into my sRGB from the Adobe RGB colorspace only, or also from all the available data encoded in the raw file in the camera's colorspace? Can I retrieve all the data the sensor captured and coded in the raw file by using the sliders? I hope it works that way.

Because then a good clipping detection system with an adjustable "threshold" could help maximize the image's tone curve balance, by compressing the RGB values that represent the hue, making the brightness less bright so that it fits inside the colorspace I chose in preferences, such as sRGB. (I know I can't see colors my viewing device can't show; out of gamut is out of gamut.)
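That kind of "compress rather than cut" behaviour can be sketched as a soft knee: values below a threshold pass through untouched, and values above it are rolled off so they approach, but never reach, the limit. A hypothetical illustration, not PhotoLab's algorithm:

```python
import math

def soft_knee(v, knee=200.0, limit=255.0):
    """Pass values below `knee` through unchanged; compress values above it
    so they asymptotically approach `limit` instead of clipping at it."""
    if v <= knee:
        return v
    headroom = limit - knee
    return knee + headroom * (1 - math.exp(-(v - knee) / headroom))

for v in (180, 220, 255, 300):
    print(v, "->", round(soft_knee(v), 1))
# 180 -> 180   220 -> 216.8   255 -> 234.8   300 -> 246.1
```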

My Huelight DCPs for my G80 are stretched closer to the borders than DxO's generic camera rendering, getting more out of the sensor's data but apparently clipping faster in my sRGB workspace (preview).

Let me just touch on this: Adobe RGB is not the “maximum” color space. From Wikipedia: “When defining a color space, the usual reference standard is the CIELAB or CIEXYZ color spaces, which were specifically designed to encompass all colors the average human can see.”

ProPhoto is a smaller color space (but bigger than Adobe RGB) and Lightroom uses a custom variant of this color space for all internal work. I hope that PL does not use Adobe RGB for its internal color space.

As far as I can tell, this is incorrect. The histogram is based on the color profile chosen in the preferences dialog, not the camera’s JPG rendering. Cameras can often render the same RAW file in several different ways (using “creative” modes), which is what I believe PL’s Color Rendering tool also does. These rendering modes alter the color—color space conversions, on the other hand, try to maintain a color even as the numbers that represent that color change.

As a wild guess, if you want the color as captured by the camera’s sensors, you need to have a profiled monitor, you need to make sure PL is using that color profile, and you probably need to have Color Rendering disabled (or maybe set to Generic Renderings/Camera Default Rendering).

Since individual sensors may be slightly off, you can use something like the X-Rite color chart and software to create a DCP (or is it ICC) that will correct for that specific sensor. The default is for a “typical” sensor for your camera, which isn’t always right.

Have a look here for more info regarding color spaces and the histogram in PhotoLab.


Thanks, Greg! That was an enlightening link—info from people who know what they are talking about. I agree with some of the responses that the choice of Adobe RGB for an internal color space is a bit short-sighted.

In PhotoLab, Adobe RGB is the biggest working space; I changed the text to be more clear. :slightly_smiling_face:

If you disable the Color Rendering tool, PhotoLab still uses the “Camera Default Rendering” profile (with no protection of saturated colours) in order to assign appropriate colour values to the demosaiced pixels in the Adobe RGB working space, before converting them to the monitor profile and showing the preview to the user.

“Camera Default Rendering” is equivalent to what in the Adobe world is called the Camera Standard profile – it’s DxO’s emulation of the look designed by the camera maker for the specific camera model. As such, it’s not the “colour as captured by the camera’s sensor” but an interpretation of an interpretation, if you know what I mean.

Although PhotoLab’s default camera profile is the “Camera Default Rendering”, their baseline profile seems to be the “Neutral color, neutral tonality” one. The rendering profiles (e.g. DxO FilmPack camera emulations, or ICC/DCP camera profiles) seem to be put on top of the “Neutral color, neutral tonality” input profile – that’s at least how I understand the Intensity slider under the Rendering box – that Intensity slider acts like a layer opacity slider in Photoshop. And the “Protect saturated colors” Intensity slider probably works on a formula similar to the one used by the Vibrancy slider (a channel mask). It’s necessary because the profile-embedded tone curve might cause oversaturation if the profile doesn’t employ gamut compression.
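If the Intensity slider really behaves like layer opacity, it would be a simple per-pixel linear blend between the baseline profile's output and the rendering's output. A sketch of that interpretation (an assumption, not DxO's documented formula):

```python
import numpy as np

def apply_rendering(neutral_rgb, rendered_rgb, intensity):
    """Blend the "Neutral color, neutral tonality" baseline with the chosen
    rendering, like layer opacity: intensity = 0 -> baseline only,
    intensity = 100 -> full rendering."""
    t = intensity / 100.0
    return (1 - t) * np.asarray(neutral_rgb, float) + t * np.asarray(rendered_rgb, float)

print(apply_rendering([120, 80, 60], [160, 70, 40], 50))  # [140.  75.  50.]
```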

Incidentally, it’s possible that OXiDant’s Huelight profiles were designed without that gamut compression, which is why they clip so easily in PhotoLab.

One last thing: the “neutral tonality” profile name is a bit misleading because it suggests we get a linear, scene-referred rendition (“as the camera saw it”). But the profile is gamma encoded, i.e. there is a midtones curve; it’s neutral only in the sense that there’s no shadows dip in the curve.


Oh, it’s not a bug after all, if you consider that the RGB readout is the result of a conversion to the monitor profile. Still, it’s incomprehensible to me why DxO chose to go this way. It’s probably in order to optimize for speed, but really, would it be so difficult to put the histogram generation / colour readout before the conversion to the display profile?

Thanks for pointing out this (and all the rest as well). The PL “manual” describes things in only the vaguest manner.

You’re welcome. I’ve just found this out myself when playing around with various settings as a result of this thread, so thanks for that as well. And yeah, I agree that the manual should be much more in-depth for those who want to go a little bit deeper.


Photographers torture themselves a great deal over colour spaces. Recently I had one of my team who believed in AdobeRGB do some testing herself, shooting RAW in AdobeRGB vs in sRGB and then processing some photos in Lightroom, to see if working in a full AdobeRGB pipeline created richer colours out the other end when converted back to sRGB for web presentation or printing. The real-world end result was no improvement.

Chasing alternative colour spaces only makes sense if you plan to exhibit your work in an alternative colour space (that’s not via the web, as sRGB is the colour space of the web) or print at a very high end lab (where of course they will be printing in CMYK not ProPhotoRGB).

The benefits of a wider colour space are largely theoretical. Last year I remember asking for some real world examples where an image’s final output was affected in any way by the intermediate processing space. There was never a real world example, just more abstract counting of fingers.

After careful investigation, I now shoot in sRGB, processing sRGB and print in sRGB. While I’m losing some theoretical headroom, just getting rid of most of the conversions and colour space mismatches more than returns that theoretical result.

The pushed colours look quite fine to me, very rich, very saturated. Output is from two different cameras shooting in sRGB.

I’m sure someone will comment that this is just a football picture and not pushing aesthetic boundaries. Fair enough: please post an image which has suffered from being processed in PhotoLab’s AdobeRGB colour space.

The subject is a bit off-topic for this thread, but here’s my take on this issue.

I don’t know about you but I can clearly see the difference between the sRGB and Adobe RGB versions she posted on that website, and it’s precisely the kind of difference I’m used to seeing when softproofing my photos for sRGB in Lightroom, Capture One, RawTherapee or darktable. I’m viewing it in a properly set up Firefox browser, and my wide gamut monitor is calibrated/profiled to its native gamut. The difference is visible in all the blue items of clothing – the Adobe RGB version is nicely rich, whereas the sRGB version is just ordinary. Other items in the photo seem confined by the sRGB space just fine.

When she writes, “When uploading a picture to the internet, it automatically converts to sRGB,” that’s really not true and shows a basic misunderstanding of how colour management works.

The difference between sRGB and Adobe RGB is really obvious to me not only on a wide gamut monitor, but also when printing on any basic inkjet printer. Yes, even the 4-ink Canon and Epson printers are capable of showing the difference between these two colour spaces, and I’m not talking in theory, but I’ve printed out a lot of photos to be able to appreciate the advantage of using a large colour space for printing. Is the difference huge? Well, yes and no – if you don’t compare the sRGB print-out to an Adobe RGB or a ProPhoto RGB print-out, then the sRGB one looks just fine. It’s only when you have the comparison in front of you that you can tell the difference. Same with monitors.

When it comes to the working colour space it’s a bit more complicated, because nowadays we don’t have displays which exceed Adobe RGB in a meaningful way. Still, if you use a very good inkjet printer, the ProPhoto RGB and the Adobe RGB prints will look different. I know because I’ve seen it.

Now, posting a ProPhoto RGB image for a comparison is pointless because you won’t see the difference on a monitor, even a wide gamut one, but only in print.

Is the Adobe RGB working colour space of PhotoLab a deal-breaker to me? No, because I can work around it using the DNG output option for the really important images that I’m going to print in a very good lab. Would I like DxO to embrace a larger working colour space? Yes, because competition (Lr, C1) does that for a reason.


Sankos, thanks for sharing your in-depth experiences with different colour spaces.

Perhaps you could post the same image which has been processed in an sRGB workflow, or even AdobeRGB, vs one which has been processed in ProPhoto. My question is whether, if the final delivery is to an ordinary print shop (almost all of which use sRGB), there is a visible difference based on the colour space in which the photographer does the processing.

I wasn’t entirely happy with our methodology when I reread the article just now, so I shot some additional test photos in both sRGB and AdobeRGB on my Z6, both in RAW and JPEG, processing both formats with some strong saturation to multiply the differences. Early results show richer reds in the sRGB version but better skies and better greens with the AdobeRGB-originated images. There’s also much less banding in the skies.


On the other hand, the bricks on the left (sRGB) are a much more satisfying dark orange than on the right (AdobeRGB) where they are yellow. Same applies to the red roofs.

Strangely enough the theoretical colour space differences support the practice in this case.
[Diagram: human visual spectrum vs sRGB vs Adobe RGB gamuts]
AdobeRGB should have more blues and more greens available, while sRGB reds and oranges make up a larger part of the available gamut.

In any case, it would be great to see someone post some example images where wide gamut processing has improved an image which in the end is deliverable in sRGB (whether to the web or to a normal printer).