Colour Management in PL6 - Updated for PL6.3

Thanks, that’s the point I’ve been trying to explain, and what the term soft proofing is normally for :smiley:

Understood, Guenterm … In my initial ignorance, I thought SP was necessary ONLY for printing - it took me a while to understand that it’s relevant (for specific images, particularly those with saturated colours) when exporting for display too.

John

2 Likes

And that’s what I constantly shout out lately. You see the result in the monitor’s color space. The soft-proof image is converted to the monitor’s color space. One should be aware of that.

George

While we were used to just letting DxO’s smartypants PSC unit and the hidden rendering algorithm in the export section do the trick of fitting everything inside, and we didn’t know any better than that it was as we wanted, now we can see a bit “under the hood” of what needs to be done.
And it’s the same as any “take over the wheel” concept: if you don’t grasp the theory and can’t manage the tools, you end up with more problems than the ones you tried to avoid by taking over the wheel. :joy:

So indeed the user interface needs to be split into editing and screen “soft proofing”, the part you will always be looking at for the general outcome on a viewing screen.
sRGB, AdobeRGB or P3. Take your pick.

And printing devices and papers.

The first must be usable in continuous mode: histogram, clipping (sun and moon icons), OOG icon, and (live) preview switching between the edit view and the export view.
Preferably a split-screen toggle so you can see the probable difference.

The latter can be connected to the export/printing tab.

Precisely. Conversion actually changes the data of the image. Assigning a profile only gives info about how other programs should interpret the colors. And soft proofing is, as you said, technically a conversion, but it is neither a conversion in the sense that data in the image is changed, nor info about how to interpret the colors that is meant to be used outside of the soft-proofing interface of the program. And since there is simulation of gamut as well as gamma, one needs to be in that soft-proofing mode inside a specific app to simulate the results. The term soft proof is no accident, of course. A proof was an actual print one would make to see what adjustments might need to be made before the final print. A soft proof tries to save ink, paper and time by doing the process digitally, for the same basic purpose.

Converting from one color space to another, or assigning a permanent profile to an image, has a very different use and is not the same as soft proofing.
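The three operations being distinguished here can be sketched in a few lines of Python. Everything below (the `Image` class, the `to_space()` transform and its scale factors) is a made-up placeholder for illustration, not DxO code or real colorimetry:

```python
# Toy sketch of convert vs. assign vs. soft proof.

from dataclasses import dataclass

def to_space(pixels, space):
    """Placeholder 'conversion': pretend each space just rescales values."""
    scale = {"sRGB": 1.0, "AdobeRGB": 1.2, "ProPhoto": 1.5}[space]
    return [min(v * scale, 1.0) for v in pixels]

@dataclass
class Image:
    pixels: list
    profile: str = "sRGB"

def convert(img, space):
    """Conversion: the stored pixel data itself is rewritten (permanent)."""
    img.pixels = to_space(img.pixels, space)
    img.profile = space

def assign(img, space):
    """Assignment: only the metadata tag changes; pixels stay untouched."""
    img.profile = space

def soft_proof(img, space):
    """Soft proof: convert a temporary copy for display; original unchanged."""
    return Image(to_space(img.pixels, space), space)
```

The point of the sketch: `soft_proof()` returns a throwaway copy, which is exactly the "technically a conversion, but with no permanence" idea.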

Your explanation is the best one yet: “Interpretation is technically equivalent to conversion but has no permanence.”

Exactly.

Exactly. That is the correct explanation of the terms.

1 Like

No, it is not converted; it is interpreted or simulated. Many, including myself in this thread, have pointed out the distinct differences to you. Bottom line is that you are not correct in your usage of the terms, but too stubborn to admit it.

Interpretation or simulation during soft proofing is technically equivalent to conversion but has no permanence, since it neither changes the data in the image nor assigns a permanent color profile to the image. It only happens temporarily inside the app, for the explicit purpose of simulating the output color space. It is not the same as the hardcore conversion you keep yelling about. There is a distinction. An important one.

Furthermore, soft proofing is done with the use of other color profiles that can be loaded from an external source and have no relation to the image itself, nor do they have to represent the device (monitor). They can be printer profiles. Hence there is no actual conversion being done, nor is the profile assigned to the image; it is a completely different process of simulation. Technically a conversion happens, a conversion of ones and zeros, as with anything else in the digital space, but it is not a conversion of the data of the image, and that is what is meant when the term conversion is used. For this simulation purpose we use another term, soft proof. Not only does it avoid confusion, but it is a more accurate description of the process.

Except that conversion doesn’t mean that the source data must be changed. The output of a conversion can go somewhere else. That happens all the time when image data is sent to the monitor. The source data is converted to the monitor’s color profile.

George

I’m sorry George, but I think you might have that wrong. In my case, I have an sRGB screen, and an aRGB printer. If the source data is converted for the screen, there is no way that it can replace lost data for the aRGB printer. I think what you may be meaning is that the image that is sent to the screen is converted to sRGB and the image sent to the printer is converted to aRGB. While the source image still remains at the colour gamut of the camera.

More precisely, the data is converted to the output color space. Mostly the monitor’s, but if you print, the printer’s.
It doesn’t change anything.
The main thing is that what you see, on the monitor, is in the monitor’s color space. The soft-proof image too.

George

The idea of introducing the third term — interpreting — was to avoid the confusion between converting the stored image versus converting to a temporary copy of the image.

4 Likes

I like the third term - interpret - but can’t quite find a way to add it to the diagram.

  1. Box 3 - this happens in memory in the computer and changes image data but only into a temporary working image. All edits are performed on this working image which disappears when you exit PL or move to another image.
  2. Box 5 - this converts the temporary working image into a display image (most likely into graphic memory). Again this is temporary and changes with each edit or image.
  3. Box 6 - simulates or interprets the image data based on the SP profile all within the display image which is in the monitor profile. Yes, image data is changed to fit within the SP profile IF it is smaller than the monitor profile.

Because SP needs to take TWO profiles into account: the defining profile is the monitor profile, and it then simulates the SP profile, which should have a smaller gamut than the monitor profile.

As soon as you turn off SP the display image reverts back to the monitor profile alone.

So, NOTHING is actually changed because everything is done in memory in the computer/graphics card. The only things that come out of this whole process are exported files and/or printed images. As soon as you exit PL or turn off your computer, all these intermediate images are lost, but the recipe on how to recreate all your edits is recorded in the database and/or DOP files.
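This non-destructive model can be sketched in a few lines: the raw data is never modified, only a "recipe" of edits is stored (playing the role of the database/DOP sidecar), and the temporary working image is rebuilt from it on demand. The names and the two toy edit operations below are illustrative, not DxO's:

```python
# Minimal sketch of the non-destructive edit model.

raw = [0.2, 0.4, 0.6]      # stand-in for the immutable raw data
recipe = []                # ordered list of edits, like a DOP sidecar

def add_edit(name, amount):
    recipe.append((name, amount))

def render(raw_data, edits):
    """Rebuild the temporary working image by replaying the recipe."""
    img = list(raw_data)   # fresh working copy every time; raw stays intact
    for name, amount in edits:
        if name == "exposure":      # exposure in EV: multiply by 2^EV
            img = [min(v * (2 ** amount), 1.0) for v in img]
        elif name == "contrast":    # expand values around middle gray
            img = [0.5 + (v - 0.5) * amount for v in img]
    return img

add_edit("exposure", 1.0)           # +1 EV
working = render(raw, recipe)       # exists only while "the app" runs
```

Deleting `working` loses nothing: as long as `raw` and `recipe` survive, the exact same image can be rebuilt, which is why exiting PL costs you only the intermediate images.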

I appreciate all the comments and discussions on the topic as it helps us all understand what is going on with SP and the Wide Gamut working space. I have made many changes to the diagram based on many comments posted here. I strive to make it as clear as possible but cannot please everyone all of the time. Terminology is tough but I think we have got it right.

My final comment is: the diagram is there to help people visually understand what is happening with regards to colour management in PL6.3 and help them understand what SP is and what it does should they wish to use it.

2 Likes

I think we need to define the image variations.
1 Capture: the camera (internal software) converts a latent image, made of charges in a grid, into a raw file. (This is irreversible and thus a hard change/conversion.)
This is mainly a “colorless” group of pixels with some metadata attached to it so the raw developer knows what to do. (The camera-native “color space”, we called it.)
2 After demosaicing, which means going from R,G,B,G to RGB: white point, black point and white balance metadata are applied (and some other correction calculations are done), and from there every pixel gets its numbers to represent a certain color. (From this point the image is viewable on a screen.)
Here we need the first actual color space definition.
DxO does a realtime interaction between the demosaiced, color-space-related RGB image and the latent raw file image. (Explained earlier for DeepPRIME and CA correction.)
So in fact the image itself in the working color space (legacy or WG) is also temporary, i.e. just living in the PC’s RAM.
3 This plainly converted floating-point image is then touched by the DxO edit preset set in Preferences. Tonality, contrast, saturation and such are applied to the RGB pixel values => default + edit value. And of course every manually changed setting.
4 In order to see a good representation on your screen, this image in the working color space must be translated to the capabilities of that screen/video card.
Still only living in your RAM (or maybe a temp file on your hard drive).
This conversion/interpretation is done in the video card chipset, while the DxO application manages, through its software, how to translate from the WCS to the monitor color space/ICC profile.
All this is recalculated in realtime every time you adjust a slider, so you can edit a raw file.
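One piece of the "translate to the screen" step is applying the display transfer function. As a stand-in, this sketch applies the standard sRGB encoding curve (linear light to 8-bit display value); a real pipeline is driven by the monitor's ICC profile, which this deliberately simplifies:

```python
# Linear working-space values -> gamma-encoded 8-bit display values.

def srgb_encode(linear):
    """Standard sRGB curve: linear [0, 1] -> gamma-encoded [0, 1]."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def to_display(linear_pixels):
    """Clip, encode, and quantize to 8 bits, as sent to the monitor."""
    return [round(srgb_encode(max(0.0, min(v, 1.0))) * 255)
            for v in linear_pixels]
```

Note that the clipping in `to_display` is the crude version of what the OOG tools are about: any value outside [0, 1] after translation simply cannot be shown faithfully.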

5 And this is new: they added a checking tool which shows the pixels that are out of gamut for the monitor profile and thus can’t be interpreted correctly by the monitor hardware.
And they added an automated correction profile (the Protect Saturated Colors algorithm) in order to give you an almost correct image on your screen.
(Note: I can’t remember if the red overlay disappears when I increase the PSC slider, or if only the colors are changed to a more representable tonality.)
Still, this is temporary, only living in RAM.
(The edit setting values are stored in the DOP file, but the actual raw-to-screen image only exists while the program is running.)

6 Another new part: soft proofing from the raw file interpretation towards the end file.
This also only lives in RAM.
And here it gets complicated.
The working color space image is touched by the edit settings and “sent” to the soft-proof algorithm. We choose an export color space, and the algorithm calculates which colors can’t be fitted into the chosen export color space/gamut. The blue overlay is added to that latent image, which is then “sent” to the “conversion” described earlier in steps 4/5 for viewing on the monitor.
Note that we have stacked aid algorithms: monitor out-of-gamut correction and soft-proofing out-of-gamut correction.

All this is latent, as in not irreversibly changed. No file is made with the new pixel value data.

The last step (hitting the [Export] button) is the only real conversion: exporting to a container (JPEG, TIFF, DNG).

My problem in this story is parts 4 to 6.
Is the out-of-gamut data (flagged by the blue overlay) of the latent image, when soft proofing to, say, the sRGB profile, changed only by the soft proofing’s PSC, or also by the out-of-monitor-gamut algorithm (flagged by the red overlay)? I think both.
Why? Well, if I soft proof towards Display P3 (I want a JPEG file with a P3 color profile), my monitor can’t display this 100%. So the second correction has to be made: out-of-monitor-gamut correction.

So the question remains: do I see a good representation of the JPEG file to be made on my screen?
Or is it an estimated guess? (Yes, you need to have a calibrated screen anyway.)

Unless DxO staff explain this, we can only guess at the hierarchy of the corrections.
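Whatever the hierarchy turns out to be, the underlying test behind both overlays can be illustrated simply: convert the pixel into the target space and check whether any channel leaves [0, 1]. The matrices below are the commonly published linear Display P3 to XYZ and XYZ to sRGB (D65) matrices, rounded; how DxO actually flags pixels may differ:

```python
# Flag linear Display P3 pixels that fall outside the sRGB gamut,
# the way a blue/red overlay conceptually could.

P3_TO_XYZ = [
    [0.48657, 0.26567, 0.19822],
    [0.22897, 0.69174, 0.07929],
    [0.00000, 0.04511, 1.04394],
]
XYZ_TO_SRGB = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def out_of_srgb_gamut(p3_linear, eps=1e-4):
    """True if a linear Display P3 pixel has no exact match inside sRGB."""
    srgb = mat_vec(XYZ_TO_SRGB, mat_vec(P3_TO_XYZ, p3_linear))
    return any(c < -eps or c > 1 + eps for c in srgb)
```

For example, pure P3 red lands well above 1.0 in the sRGB red channel (and below 0 in green), while neutral grays map to themselves and stay in gamut.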

Personally I don’t really care much, because:
1 No calibrated screen and thus no calibrated monitor color profile; it’s a factory default profile. I see an estimated guess of my edited image.
2 99% of the exported files I view on, again, non-calibrated screens and smart TVs.
Even when I made the right choices before creating my JPEG, it can still be f***ed over by the monitor profile and driver of the viewing device.
3 If I were to print, I would probably go with trial and error or let a professional do his thing with my file.
(Because this is the only truly irreversible action: putting ink on a piece of paper.)

But in order to understand the results of your choices in DxO PhotoLab v6.3 for your exported file, it’s necessary to understand the process steps.

:hugs:

@OXiDant I think you have just repeated what I explained above, with the addition of the OOG overlays, which I don’t think affects the whole process from an understanding point of view.

Yes, globally you’re correct.
The point is, I think, that not everyone understands, grasps, the active connections between all the steps.
Like CA: every time you change the white balance or a part of a color, its correction in the pre-demosaicing part is changed.
You can test this behaviour by creating a DNG and re-feeding it into DxO PL.
The only things that are then finalized, irreversible, are denoising and the optical module corrections.

A raw file is like the bitmap used to create the latent image on the four optical drums of a laser printer.
Four differently charged surfaces, each holding part of an image.
The synchronisation of the four turning drums is like CA correction, getting all four CMYK layers precisely stacked on the main transfer belt. This is the “demosaicing and optical module” part.
The transfer belt could be the working color space part in DxO PL: a visual preview of the image, still adjustable.
The drums just hold charge levels, and which toner particle (C, M, Y or K) is drawn to them doesn’t matter, so it’s a colorless process in theory.
The balance between the four charges is the “white balance”, which has a black point (maximum toner attraction) and a “white point” (no toner attraction).
Control of this section is bound to brightness (basically lowering and lifting the charges on the drums), density of particles (resolution, the dpi), the amount of particles dropped on top of each other (the black point), and saturation. And contrast is done the same way as microcontrast: dropping black particles on the edges of color planes to enhance edge visibility.
Again, when it’s on the main transfer belt it’s still a latent image which roughly looks like the end result. (The toners aren’t melted and blended together, so the secondary colors aren’t complete.)
(If I could manually change things on the belt, I could still alter the image, because the toner particles are only mildly stuck to a charged surface.)
So this could be the working color space area. :blush:
From that point it is transported to the paper (the export material).
This influences the end look of the colors and the image through the paper’s structure, color and thickness.
So you need to adjust the development temperature and the fuser temperature (melting process) according to the thickness and type of material, plus the transfer speed, the passing speed of the paper (the melting and pressing moment), and the color balance, to keep the image color as wanted. Yellow on yellowish paper gets washed out, so saturate yellow extra to compensate.
(This is the soft-proof part.)
And the paper exiting the printer is the JPEG.

The difference between “CMYK laser printing” and the RGB pixel developing of DxO PL is that DxO PL can work bidirectionally as long as you don’t hit “fuse all colors together in this container file”.
Which means it’s useful to look at the half-baked latent image on a monitor.
On the printer I could stop the paper by creating a jam and look at the image on the transfer belt, but that isn’t restartable. (Also, my eyesight and the surrounding light temperature are then factors in how I see this latent image.) Aka monitor soft proofing 😉
(Most men don’t see many pastel colors, so those are “out of gamut” for us. :rofl:)
So by understanding how a laser printer works, you can lay this knowledge over the DxO PL development process.
I just try to give people another point of view on the matter, in order to let “the coin drop”. In Dutch: “het kwartje laten vallen”.

:grin:

Peter

But with the wide-gamut option, the soft-proofing option to sRGB is more important than ever, also for monitors and regular web output.

It’s the only way to see what your sRGB export will look like before actually exporting. This wasn’t the case with the classic gamut option.

3 Likes