Understanding Selective Tone control

Sankos, thanks for sharing your in-depth experiences with different colour spaces.

Perhaps you could post the same image processed in an sRGB workflow (or even AdobeRGB) versus one processed in ProPhoto. My question is whether, if the final delivery is to an ordinary print shop (almost all of which use sRGB), there is a visible difference based on the colour space in which the photographer does the processing.

I wasn’t entirely happy with our methodology when I reread the article, so I shot some additional test photos in both sRGB and AdobeRGB on my Z6, in both RAW and jpeg, processing both formats with some strong saturation to amplify the differences. Early results show richer reds in the sRGB version but better skies and better greens with AdobeRGB-originated images. There’s also much less banding in the skies.


On the other hand, the bricks on the left (sRGB) are a much more satisfying dark orange than on the right (AdobeRGB) where they are yellow. Same applies to the red roofs.

Strangely enough the theoretical colour space differences support the practice in this case.
[diagram: human visual spectrum vs sRGB vs Adobe RGB gamuts]
AdobeRGB should have more blues and more greens available, while sRGB reds and oranges make up a larger part of the available gamut.
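
For what it’s worth, the standard published primaries behind that diagram can be checked with a few lines of Python (a quick sketch, nothing more – the chromaticity values are the usual textbook ones):

```python
# Standard CIE xy chromaticities of the primaries (D65 white point for both spaces).
# Red and blue primaries are identical; only the green primary differs.
PRIMARIES = {
    "sRGB":             [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)],
    "Adobe RGB (1998)": [(0.640, 0.330), (0.210, 0.710), (0.150, 0.060)],
}

def triangle_area(pts):
    """Shoelace formula for the area of the gamut triangle in the xy plane."""
    (x1, y1), (x2, y2), (x3, y3) = pts
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

for name, pts in PRIMARIES.items():
    print(f"{name}: gamut triangle area = {triangle_area(pts):.4f}")
# Adobe RGB covers a noticeably larger triangle, and all of the extra area
# sits on the green/cyan side of the diagram.
```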

In any case, it would be great to see someone post some example images where wide gamut processing has improved an image which in the end is deliverable in sRGB (whether to the web or to a normal printer).

Most commercial raw converters don’t allow the user to set the colour working space, but RawTherapee does, so here are downsized renditions from an old holiday snapshot which illustrate the difference. First, the default ProPhoto RGB working space was used, sRGB output:


Second, sRGB is used both for working space and for output:

No big difference in colours (there’s some gamma shift), because the output space is the lowest common denominator. So if your output is 100% sRGB, the workspace issue doesn’t affect you.
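
If you want to see that clipping in numbers rather than pixels, here’s a rough numpy sketch using the standard published matrices (nothing RawTherapee- or PhotoLab-specific), converting a very saturated linear Adobe RGB green to linear sRGB:

```python
import numpy as np

# Standard linear RGB <-> XYZ matrices (D65), as published for both spaces.
ADOBE_RGB_TO_XYZ = np.array([[0.5767, 0.1856, 0.1882],
                             [0.2974, 0.6274, 0.0753],
                             [0.0270, 0.0707, 0.9911]])
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

# A very saturated green, expressed in linear Adobe RGB.
adobe_green = np.array([0.1, 0.9, 0.1])

srgb_linear = XYZ_TO_SRGB @ ADOBE_RGB_TO_XYZ @ adobe_green
print("linear sRGB before clipping:", srgb_linear)
# The red channel lands below 0, i.e. outside sRGB; at export it has to be
# clipped (or gamut-mapped), so the sRGB output can't keep the extra
# saturation no matter how wide the working space was.
print("after clipping:", np.clip(srgb_linear, 0.0, 1.0))
```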

The colour of the lake was really intense on that day and it slightly exceeds the gamut of Adobe RGB, so when I view it on my wide gamut monitor or in print it looks great, but when I output it to sRGB it looks meh. Here’s an Adobe RGB conversion – if you can’t see a difference when comparing with the previous renditions then either you use a regular gamut monitor, or you calibrate/profile your monitor for sRGB. If you have a DCI-P3 capable tablet or smartphone you can probably see the difference, too.


[hopefully this forum doesn’t strip the Adobe RGB profile, because the preview before posting looks washed out].

edited to add: unfortunately the Adobe RGB version gets converted by the software running this forum so I can’t show you the difference here :frowning: Disregard the third rendition posted here – you can download the file and “Assign” the Adobe RGB profile to it in Photoshop or Affinity Photo.
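
If you don’t want to do the “Assign” step by hand, embedding a profile can also be scripted – a minimal Pillow sketch, assuming you have an AdobeRGB1998.icc file on disk (the file names here are just placeholders):

```python
from PIL import Image

# Embed an Adobe RGB profile into a JPEG without converting the pixel values --
# the equivalent of "Assign Profile" rather than "Convert to Profile".
profile_bytes = open("AdobeRGB1998.icc", "rb").read()   # hypothetical path

im = Image.open("third_rendition.jpg")                  # hypothetical file name
# Re-saving a jpeg does re-encode it, so a high quality setting keeps the loss small.
im.save("third_rendition_adobergb.jpg", icc_profile=profile_bytes, quality=95)
```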

  1. The sRGB vs Adobe RGB in-camera setting doesn’t matter for raw files, but if you edit OOC jpegs, or if you want slightly better embedded jpeg previews (on the basis of which the camera calculates the histogram), Adobe RGB helps – that’s why I always set my cameras to Adobe RGB.
  2. The problem with this comparison is that these are two different shots, so the lighting and exposure might have changed in between the shots, which would throw the comparison off. In order to run this comparison I’d take the raw file into RawTherapee and export two files processed in the sRGB and Adobe RGB working spaces, with the same sRGB output profile – if you use the colorimetric rendering intents there should be no significant differences; with the perceptual intent the difference will probably be slight (like in my examples in the post above).
  3. If you output two files, one with the sRGB and the other with the Adobe RGB profiles, a wide gamut monitor profiled to its native gamut should show you a difference in the blue elements of the photo. The red part of the spectrum is the same for Adobe RGB and sRGB, as seen in the diagram you posted. As you said, it’s only the blue-green spectrum that will benefit from the Adobe RGB gamut for wide gamut output.
  4. All of the above assumes that you do your processing/comparisons on a raw file. If you process a rendered file, then SOOC Adobe RGB jpegs or ProPhoto RGB 16-bit tiffs should be the basis of your edits in order to maximize quality (even if they end up as 8-bit sRGB jpgs). SOOC sRGB 8-bit jpegs are not ideal for significant tonal and colour edits – see the sketch after this list.
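
To see why point 4 matters, here’s a rough numpy sketch (no particular converter assumed) that applies the same strong tonal push to an 8-bit and a 16-bit version of a smooth gradient and counts how many distinct levels survive in the final 8-bit output:

```python
import numpy as np

# A smooth dark gradient, like a shadowed sky.
gradient = np.linspace(0.05, 0.25, 10_000)

def push_shadows(values, bits):
    """Quantise to the given bit depth, then apply a strong brightening curve."""
    levels = 2 ** bits - 1
    quantised = np.round(values * levels) / levels        # what the source file can store
    pushed = np.clip(quantised * 3.0, 0.0, 1.0) ** 0.8    # aggressive tonal edit
    return np.round(pushed * 255)                         # final 8-bit output values

for bits in (8, 16):
    unique = len(np.unique(push_shadows(gradient, bits)))
    print(f"{bits}-bit source: {unique} distinct output levels")
# The 8-bit source ends up with far fewer distinct levels in the pushed range,
# which shows up as visible banding; the 16-bit source keeps the gradient smooth.
```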

1. Adobe RGB for jpeg – I knew it didn’t matter for raw, but I had set OOC jpegs to sRGB because of the preview on the camera LCD, which (I think) can’t preview Adobe RGB. Since I only edit OOC jpegs when I use creative modes that don’t produce raw files, I just left it on sRGB. Is it better, then, to set the camera to Adobe RGB anyway?

So I need to set the DxO PL working space to sRGB rather than auto from the EXIF data, and set the camera to Adobe RGB to maximise my OOC jpeg colour space so I can push colours around more.
Does that have any drawbacks?
I suspect AWB is measured in the camera’s sensor colour space, so it isn’t affected by a different output colour space.

I’m not sure I understand your question. It seems you use the in-camera creative styles (picture styles, etc.) for your out-of-camera jpegs. If you are sure you are never going to print them in a decent lab, and if you’re sure that you won’t have a wide gamut monitor in the future, then I guess sticking to sRGB is fine, as long as you don’t do any crazy edits on them. If you push your sRGB jpg in PhotoLab too much, you’ll see things like banding pretty quickly, much quicker than with Adobe RGB jpgs. Do experiment with this before settling on sRGB jpegs for ever.

But I don’t like making those kinds of decisions at the moment of capture – that’s why I shoot for raw, which means I have to adjust my exposure parameters not on the basis of the jpeg preview but on the basis of how my camera really meters. You could use UniWB for that, or some cameras offer the possibility of calibrating the zebras so that they indicate raw green channel clipping (as confirmed by FastRawViewer, RawDigger or RawTherapee). If I cared about SOOC, I’d underexpose all my raw files.
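
If you want to double-check raw green channel clipping yourself, outside of FastRawViewer or RawDigger, here’s a rough sketch using the rawpy library (my own pick for illustration, not something mentioned above; the file name is a placeholder):

```python
import numpy as np
import rawpy

# Estimate how much of the raw green channel sits at or above the sensor's
# white level, i.e. is clipped in the raw data.
with rawpy.imread("DSC_0001.NEF") as raw:        # hypothetical file name
    values = raw.raw_image_visible
    colors = raw.raw_colors_visible              # 0=R, 1=G, 2=B, 3=G2 on Bayer sensors
    green = values[(colors == 1) | (colors == 3)]
    clipped = np.mean(green >= raw.white_level)
    print(f"green pixels at/above white level: {clipped:.2%}")
```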

I’m not sure which setting you are referring to – if it’s the one in the Export dialogue then if you set your camera to Adobe RGB and you wanted to output from PhotoLab for web, then choose sRGB in your export options.

I shoot raw plus jpeg so I have all the jpeg-only features of the camera, like panorama, stars and such.
I use the raw mostly as the source for my end product and delete the OOC jpeg when I’m done. (I use wifi to my smart TV for previews with my G80, which needs SOOC jpegs.)

The reasons I set sRGB in camera were:
1. I don’t have a calibrated wide gamut screen for editing or viewing.
2. I print nearly nothing (yes, on an office toner MFP for fun, but nothing photo-lab-printer-like).
3. And this:

Which, after a search, turns out to be my mistake:
[screenshot: PhotoLab export setting]

Now I remember: DxO has no preset for its working (colour) space like other applications have; it’s always Adobe RGB (1998), and then it converts the colour space down to the display setting:
[screenshot: PhotoLab display setting]
In my case that’s a non-calibrated generic sRGB, or the current profile for the display device (which is also sRGB, but coming from Win10 or my video card driver, I suppose).

So the camera setting, Adobe RGB or sRGB, only influences my raw file process if I set the export colour space to “Original”.

Indeed, my mistake.

I use Panasonic’s iDynamic just for that: it automatically underexposes by 1/3, 2/3 or 3/3 EV in high-dynamic-range scenes so the highlights are better protected in the raw file. (It uses the SOOC jpeg processor to calculate what it intends to do for the SOOC jpeg.) So in that case I set the camera to the widest colour space, so iDynamic reacts less to sky and foliage hues. (iDynamic does contrast high-key/low-key correction and adjusts exposure to fit everything inside the histogram.)
The raw file is only influenced by the exposure compensation of -1/3 to -3/3, so it’s a great automatic EV correction for scenes with bright highlights. :sunglasses:

Conclusion: set the camera to Adobe RGB when you shoot raw, and only use SOOC jpegs when you need to.

Fair enough. If you keep your raw files in the archive, then all is good – you can always retrieve them later if you ever needed to re-process the photos for wider gamut output.

If so, then the all-sRGB workflow might work – you just don’t have as much wiggle room for edits with the jpegs (but you do with raw files), and as long as you don’t create banding in your output, you don’t have to worry about it. Plus, you can always go back to the raw file and re-edit in case of problems.

Not many applications actually let you control the working colour space – I know that RawTherapee and the newest darktable do. With PhotoLab it’s indeed Adobe RGB – the result of the calculations is converted (not downsampled) to the display space for preview or to the output space for export.

I don’t know how this works, so I can’t comment. If you look at a raw histogram of your raw file (e.g. in FastRawViewer or RawTherapee) you can see if you exposed your file optimally. Histograms in other applications don’t give you that information.

That’s what I do, but I realize that offering this as a general, universal piece of advice for everybody is not a good thing. So my conclusion would be: switch the in-camera setting to Adobe RGB if you know what you’re doing; if in doubt, stick to sRGB.


look here

This can too: it’s only for jpeg and tiff, not raw files though.

Thanks for the links. Oh yes, I forgot about Silkypix and its various clones for Panasonic, Ricoh/Pentax, Fuji and Nikon cameras. They have some weird colour management settings that are far from obvious and require some digging around.

Hmm…I could have sworn the topic here was “Understanding Selective Tone control”. :smile: Of course, I have been a bad boy, too, and added my own comments to the color space discussion.

Back to the original subject: I wanted this thread to help PL users understand how the selective tone control works, but also to discuss workarounds for any deficiencies. People contributed some workarounds, and I have been trying out a number of them.

I’ll suggest another workaround: chain the processing with another image processing tool. Many RAW tools will handle DNGs, TIFFs and JPGs.

I was working with an image where I needed very precise control of small tonal ranges. I took it as far as I could in PL. Then I loaded the result in Darktable. Whoa! Suddenly, I could control the tonal range selections with ridiculous flexibility.

In addition to a multitude of tone controls, I could combine these tools with parametric masks, which themselves could be combined with painted masks. I was floored by the flexibility of the masking system–it includes features I could only dream about in PL. (I have a feature request for combining control points with painted masks which attempts to achieve just a tiny bit of what Darktable already does–it’s garnered 0 votes. See https://forum.dxo.com/t/combine-painted-masks-with-control-points/9834).

On the down side, darktable has the typical complex UI of many open source projects. Also, the DxO folks probably nail the technical details a lot better. I would stick with PL for lens correction, lens sharpening, vignetting, PRIME noise reduction, smart lighting, clear view, etc. I had an image where PL did remarkable highlight recovery. In darktable, nothing brought out the details buried in the highlights. It was a testament to the power of PL.

I do wish that PL would pick up a subset of darktable’s masking system (i.e. local adjustments). I mean, I like being able to define a mask based on luminosity or hue proximity, but on the A or B channel of LAB space? And while I understand how the mask might be based on the output of a control, it’s really hard to imagine how to apply that to the typical image changes most of us make.

Since we did drift into color management, it looks like darktable is also ahead of PL here; for instance, soft-proofing is supported and the histogram is not tied to the monitor’s color profile.

I was using the first release candidate for darktable 3.0.

PL can be chained with a lot of other tools, of course, including my ancient Photoshop CS6.


Sorry for off-topic posts.

And I agree – darktable’s local adjustments are just another level completely, just like PRIME is with PhotoLab. Concerning darktable and highlights – there are about a million ways to tackle highlights in that program. See the discuss.pixls.us forum, esp. the Processing/PlayRaw subforum.

Oops! Well, in my defence: tone controls and adjusting them require an understanding of colour spaces and of hue, luminance values, saturation and “vibrance” behaviour. :woozy_face:

About two-step adjustment: how about “looping” a 16-bit tiff file?
In DxO PL: do your thing, export as tiff and import it again? (I didn’t test it, but it should work.)
I don’t think you gain anything from that, though – except when feeding a tiff into a “better” processor. For colour recovery by replacing/rebuilding “lost” pixels, working from the raw file to the end product is the most common way to recover out-of-gamut pixels.
But I know chain processing across different processors, to use the strengths of each, is done: for example by feeding a linear DNG exported by DxO into another raw processor where you further process the WB and colours. I tried that with DxO PL => DNG => SP5 Pro => tiff => Define 2 => jpeg.
The biggest problem is colour shift caused by different interpretations of the WB; the second problem is balancing the process, deciding what to do where. (I don’t know darktable, except from hearsay that it is rather difficult to grasp.)
You can try to use “export to application” to darktable at some point, and take the result back into DxO for further refinement (no DNG, only tiff, with colour space selection – Adobe RGB).

I ended up migrating fully to DxO PL Elite, and I only export a 16-bit tiff for the cases I hope to improve some more in another application – those rare moments when an image is worth the extra effort.
If you stay in DPL, I think it comes down to using all the tone/colour managing tools in a selective order and deciding on a certain path each time. I think I’d go for virtual copies and create a few paths to see which gives the better end result using different groups of tools – a kind of stacking of adjustments, global and local ones, using the VCs as “save points” and/or “duplicates” so you can try out two roads at the same time and branch off towards the best outcome.
This will involve a learning curve to speed up the chain of adjustments and take fewer side paths.

Every type of tool has its own pros and cons; if you use the pros and avoid wandering into the cons, you can get the most interesting results.
The “desaturation” trick to see what you are affecting is most useful for any local adjustment tool, and also in HSL. The Selective Tone palette in combination with the Contrast palette is less useful for colour control, but it can help out with luminance control and with preserving contrast and sharpening (microcontrast/fine contrast).

Oh, and another thing hit me: when exporting a tiff to the Nik Collection, I did that in sRGB without a thought. But now I think it needs to be Adobe RGB so you have some extra wiggle room.
(You develop and feed it into a new “workspace” which renders the preview in sRGB but uses Adobe RGB as the room for adjustments, then push it back to DxO in sRGB or Adobe RGB (I don’t know if I have a choice) and export as a jpeg in sRGB to finalize.)

(Maybe you/we need to make an “Understanding Selective Tone Control – the summary” and tell the highlights of the path we took in short form: use this thread for discussion and exploring trails, and the second one as the conclusion script. :slightly_smiling_face:)

Peter


Good idea. I can edit the OP and add an Update/Summary section rather than add a new thread. There is a lot to summarize, so I would focus on the highlights and not the details. It will take me a little time to go through all the posts and capture everything.


Thanks, you’ve given me a lot of things to check out. For what it’s worth, after using the Nik tools and sending the image back to Lightroom, I usually edit one more step in the image by holding down the OPTION key and sliding the “black” and “white” sliders until I can just see “something” showing up on the blank screen. After doing so, it almost always looks good to me.

Specifically, moving the tone control is something I’ve done, and it seems to work.

While I’m in the middle of editing, I notice I’d like the entire image a little lighter or darker - I will try things out again to accomplish this. It’s probably just me, as a “newbie”, not knowing what to do.

There are so many filters – what I am starting to work on is copying all the filters from the Nik Collection into an Excel spreadsheet, adding my own notes after each one, along with a “number” for how useful it is to me, then sorting the list by “most important”. There are so many that, even with notes, it’s still difficult for me to remember which one to use for the effect I’m after.

Finally, when I send a photo from Lightroom to edit in Nik Collection, what preset or filter is applied automatically before I start editing with the Nik tools?

The first really decent stereo audio system I owned had a pushbutton on the front panel called “loudness.” I pushed it. I decided the audio sounded better so I left it pushed in; never changed it. OK, occasionally I would push it to the off position; nope, sounded better on. Thinking back, I really didn’t care what it was for.


I had one called Dolby B (noise suppression in low-volume parts) and my mum always yelled “get that volume down!!!”, so loudness was not my always-active knob… Now that I know the system behind them: they both manipulate the sound waves so it sounds better.

Apologies for continuing this off-topic discussion - but I figure this is now the relevant place to do so …

On the basis that one is shooting in RAW, may I ask why you suggest this?

My understanding is that a RAW file is not specific to either sRGB or AdobeRGB (instead, it contains data “in the native colour space of the camera”). So, when a RAW file is read by PL it converts the RGB values from this native colour space of the camera into AdobeRGB (which is PL’s working colour space).
Note: I’m quoting Wolf, from DxO, here.
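
As a rough numeric illustration of that conversion chain (the camera matrix below is a made-up placeholder; the real values come from the raw file metadata or the converter’s per-camera calibration data):

```python
import numpy as np

# HYPOTHETICAL camera-RGB -> XYZ(D65) matrix; in reality this comes from the
# raw file metadata / the converter's calibration, not from here.
CAMERA_TO_XYZ = np.array([[0.60, 0.25, 0.10],
                          [0.25, 0.70, 0.05],
                          [0.05, 0.10, 0.90]])

# Standard published XYZ(D65) -> linear Adobe RGB (1998) matrix.
XYZ_TO_ADOBE_RGB = np.array([[ 2.0414, -0.5649, -0.3447],
                             [-0.9693,  1.8760,  0.0416],
                             [ 0.0134, -0.1184,  1.0154]])

camera_rgb = np.array([0.4, 0.5, 0.3])   # demosaiced, white-balanced sensor values
working_rgb = XYZ_TO_ADOBE_RGB @ CAMERA_TO_XYZ @ camera_rgb
print("linear Adobe RGB working-space values:", working_rgb)
```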

My conclusion from this is that it doesn’t matter which colour-space setting we apply in-camera, provided we’re shooting RAW. Have I misunderstood something ?

John M


John - I have the same understanding as you.

I have one reason to use Adobe RGB in camera:
panorama shots are OOC jpegs, and so is focus bracketing with in-camera stacking from video.

The other reason: I export a tiff, for instance to Nik, in Adobe RGB.
Why?
My main export is sRGB, so I get some wiggle room when my source is Adobe RGB. :relaxed:

Because my screens are not calibrated and not Eizos, I have no use for developing in Adobe’s colour space; I would prefer to continue in sRGB.
But the Adobe RGB OOC jpeg and tiff are a wider colour space source – just like a raw file is a much wider colour space than Adobe RGB.

Ergo: if you set your camera to sRGB, all OOC jpegs are cut down/compressed to the same level as your export modes, which leaves less room to recover clipping.
By using the wider Adobe RGB, the image is mapped more widely, which allows you to use those edges beyond sRGB in DxO PL.

(I always assumed that the DxO working space does not clip all data beyond the set colour space, and that you can use the extra reach of the data collected in the wider colour space.
It’s a sort of frame you can move around over the exposure values of the picture to set the exposure level (lightness) – the exposure itself is fixed at capture. And by pushing and pulling the tone sliders, stretching or compressing to your liking while watching the sRGB clipping, you recover the colour using the Adobe RGB colour space data.)

If it’s a hard cut when the working space is set to sRGB, none of this is relevant. :tired_face:


All image-related in-camera settings, including the color space, only affect JPEG images and the JPEG thumbnail embedded in the RAW file (this thumbnail is what you see when loading the RAW file in any software, before that software has processed the RAW data).

Some demosaicing software is able to read these settings (from the RAW file metadata) and use them for the default settings of the RAW engine (so the default preview for the RAW file will look similar to the JPEG). For example, this is what DPP does for Canon RAW files.
Another example: if you select b&w on the camera, this will not affect the RAW file, but the embedded JPEG thumbnail will be b&w. So, when loading the RAW file in LR or DPL or whatever, the first thing that you will see (usually for a few seconds) will be a b&w image.
Another example : if you select b&w on the camera, this will not affect the RAW file but the embedded JPEG thumbnail will be b&w. So, when loading the RAW file in LR or DPL or whatever, the first thing that you will see (usually for a few seconds) will be a b&w image.