Explain the DNG color differences between dxo5 (legacy) and dxo6 (wide gamut)

DxO6 exports:




And as a reference, for what it's worth, the same processing steps in Darktable on the original CR2 file:


So, why is the DNG export labeled 'All Corrections except Color Rendering' when the color rendering module clearly still has an impact?
Is the DNG export with the ‘wide gamut’ option now broken? Because it clearly seems wrong.

Furthermore, since the DNG export should keep the pixels in the original camera space, whether something is 'wide gamut' or not should be determined by the software processing the final DNG, not by DxO, right?

If DxO6 is applying some sort of camera space conversion to the exported DNG with wide gamut enabled, the color matrix it writes in the DNG is clearly wrong.
(I made sure to double-check that Darktable was set to ‘embedded matrix’ instead of the default matrix for the camera model).

It seems DNG export behaves strangely in PL6. Either the 'color gamut' option should have no effect at all (leaving the pixels demosaiced but still in camera space, like before, so that every program can open the DNG and apply its camera matrix as it normally would), or it alters the pixels with some processing, but then the color matrix written to the DNG file is wrong, which makes the DNG more or less unusable.


Together with the fact that DeepPRIME XD is unusable for me, that makes PL6 one of the worst upgrades of DxO Optics Pro / PhotoLab. I've upgraded every year without thinking too much about it, because each release mostly gives 'that one feature' that I really want, and for the rest it's more of the same.
But now that one new feature is not working for me, and the DNG export seems broken unless I revert to legacy settings.

I hope to get some clarification on how wide gamut and DNG export are supposed to work, and how that would affect the workflow of people who open the DxO DNGs in other programs.


For what it’s worth, it’s the Windows version of PL5.4 / PL6.
The original raw file can be found here: Highlight reconstruction darktable vs. rawtherapee - darktable - discuss.pixls.us

I have never had to deal with DNG files before; I even had to teach my Mac which program to open them with.
Anyway, I didn't really understand everything you are talking about, because your workflow is so
different from mine, but I wanted to give it a try. So I exported a photo as DNG with Wide Gamut
and DeepPRIME XD.
I exported the same photo as TIFF as well. The DNG is much flatter in colours, you could almost say grey, compared with the original and the TIFF; the latter matches the original.
Then I exported the same image as DNG with the Legacy colour space and DeepPRIME XD, and lo and behold, the colours are right again. So clearly, DNG export with Wide Gamut results in worse colour reproduction.
P.S.
All exports with PL6

The color rendering is applied to the internal working space of the image (which uses the wide color space in PL6). Turning rendering off does not turn off the wide color space; it changes what you see by changing how the wide color space is being mapped.

You might see differences between PL5 and PL6 standard vs. wide color space because the color mapping used to convert to whatever output color space is selected is different. They have to be different because they start from different color spaces.

You will also see differences because the corrections will most likely have changed levels. I can see this clearly by looking at the DNG produced by PL6 with the default preset (none) and your original CR2 in FastRawViewer: the original CR2 has about 16% of the green channel overexposed, while in the DNG it is only 5%.

I can't explain why the DNG export that is labeled '(All Corrections except Color Rendering)' is different when you turn color rendering off, but it is. Once again, FastRawViewer shows that with color management on, 3% of the green channel is overexposed, while with it off, 5% is overexposed.

There is no sense in exporting to DNG to judge the effect of the wide gamut. The DNG is reinterpreted by the engine of whatever software you use, so the same DNG can look different between programs. I don't think DxO transforms the colors.
But if you export to TIFF (I hope for HEIF format in a future version), you will see a big difference, with splendid colors and gradations.
Thanks DxO. It's what I was waiting for.


DxO said (in another thread about DNG in PL6) that they do not apply a 'camera profile' because that is up to the software used to process the created DNG.

But… there is a clear difference in the DNG output when wide gamut is selected versus when it is not, so the result in the software processing the DNG is clearly different… and not for the better (as in, a really green cast, etc…).

It's more and more clear there is a bug in the processing or usability of DxO Wide Gamut combined with DNG export, because at the moment it's basically unusable.

Other software can't use its built-in camera profile, because DxO has altered the data in the DNG (which it doesn't do when 'legacy' is selected), and it can't use a profile embedded in the DNG, because it's not there (or clearly wrong).

Both options are fine when selecting ‘legacy gamut’.

I have the suspicion that the DNG has the same embedded color matrix / profile no matter which 'gamut' option is set in DxO, while the pixel data written is clearly different. That can't be right.

I am OK with the 'wide gamut' option having no effect on DNG export (that seems the right way: a 'wide gamut processing profile' is something the software processing the DNG should or shouldn't apply).

And the wide gamut option is fine when writing TIFF files, where DxO handles the entire pipeline.

But at the moment, writing DNGs with 'wide gamut' enabled is clearly broken. It requires every file opened in PL6 to be manually set to 'legacy gamut' before exporting to DNG, and then set back to 'wide gamut' if I want to use that and export to TIFF… Like I said, this can't be right.

This topic and the following topic seem to have converged:

Good discussions.

I can't see this in the exiftool output: both the legacy and the wide-gamut DNG seem to carry the same embedded profile, and all the other parameters related to color rendering appear to be the same. Also, with my CR3 files, I am not seeing much of a color difference, if any, when viewing in FastRawViewer on my sRGB monitor.
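In case anyone wants to repeat the check, this is roughly the kind of comparison I mean (a small Python wrapper around exiftool; the file names are placeholders and the tags are standard DNG color tags, not an exhaustive list):

```python
# Rough sketch: dump the DNG color tags of both exports so they can be diffed.
# Requires exiftool on the PATH; "legacy.dng" / "widegamut.dng" are placeholder names.
import subprocess

TAGS = ["-ColorMatrix1", "-ColorMatrix2", "-ForwardMatrix1", "-ForwardMatrix2",
        "-CameraCalibration1", "-CameraCalibration2",
        "-CalibrationIlluminant1", "-CalibrationIlluminant2",
        "-AsShotNeutral", "-BaselineExposure", "-ProfileName"]

for dng in ("legacy.dng", "widegamut.dng"):
    print(f"--- {dng} ---")
    result = subprocess.run(["exiftool", *TAGS, dng],
                            capture_output=True, text=True)
    print(result.stdout)
```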

Ok, so I take a RAW file and set it in DxO to 'no correction'.
Then I export it to DNG once with gamut set to 'legacy', and once with gamut set to 'wide'.
I use 'apply all corrections, except color rendering'. DxO tells us that no camera profile / matrix is applied, so the software processing the DNG can do what it normally does.

Now, I use dcraw_emu from the libraw project (the same engine as FastRawViewer) to export the original RAW and the two DNGs from DxO to a linear file. I make sure to set 'do not use embedded matrix' and 'colorspace: raw', so I'm not transforming any pixels color-wise; I just want the values as they are in the raw file, to compare with the DNGs. The only thing I do afterwards is assign a linear sRGB profile to the output file, resize it down and then convert to the normal sRGB profile. So basically I'm applying the default sRGB gamma curve, just to make viewing easier.

So no, the colors are not 'correct', but we're not going for correct here; we're using this to compare the pixel values 'as they are' in the original RAW data vs. the DxO DNGs.
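(For anyone who wants to reproduce this without the dcraw_emu command line, something like the following rawpy sketch, the Python bindings to the same libraw engine, should be roughly equivalent. File names are placeholders, and the settings are my attempt to mirror the 'colorspace: raw, no embedded matrix' idea described above.)

```python
# Rough equivalent of the dcraw_emu comparison described above, using rawpy
# (Python bindings to libraw). File names are placeholders.
import rawpy
import imageio.v3 as iio

def dump_raw_colours(path, out_png):
    with rawpy.imread(path) as raw:
        rgb = raw.postprocess(
            output_color=rawpy.ColorSpace.raw,  # no color-space conversion
            user_wb=[1.0, 1.0, 1.0, 1.0],       # leave white balance alone too
            no_auto_bright=True,
            gamma=(2.4, 12.92),                 # sRGB-style curve, purely for viewing
            output_bps=16,
        )
    iio.imwrite(out_png, rgb)                   # viewers will just assume sRGB

for name in ("original.CR2", "export_legacy.dng", "export_widegamut.dng"):
    dump_raw_colours(name, name + ".png")
```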



Flip between the images in any viewer that makes this easy (just opening them in different browser tabs will do).

You'll see that the original RAW and the 'legacy gamut' DNG are like for like color-wise (there is a small cropping difference because of different demosaicing algorithms, but they are basically the same in every other aspect).

The wide-gamut export clearly has different values written to the file. So even though DxO says the DNGs are written without any camera transform, something has clearly changed, and it causes programs that open the DNG and apply their own camera profile to suddenly produce a completely different - and not pretty - rendering.

Now, if I do the export again for all three files, but enable the option to use the embedded matrix… nothing changes in the output. So if there are embedded matrices/profiles written to the files, the 'wide gamut' one is wrong (and probably the same as the legacy one).

Thus, the DNG files are unusable when the 'wide gamut' option is set, because you'll always get wrong colors.


Now, what I think happens is this: DxO converts the raw data into their own 'DxO Wide Gamut' profile. So they are doing a color conversion (as can clearly be seen by looking at the values written out by PL6). Whatever input file you throw at PL6, if the wide gamut option is selected, it converts it to their own DxO Wide Gamut profile. If you write a JPEG or TIFF file, it converts to a normal profile as a last step when writing the output (which makes perfect sense).
But if you write a DNG, nothing like that is done, yet the color data has still been converted to DxO Wide Gamut.

So we basically need a DxO Wide Gamut ICC and/or DCP file, to tell the program that opens the DxO files 'the colors are in this color space, use it'.


But to summarize:

  • The DNG data is clearly changed by using a different gamut option in DxO.
  • DxO tells us that this should not be the case.
  • Without knowing what colorspace and/or camera matrix to use for these wide-gamut DNGs, they are useless in other software.

Hi,

I suggest that anyone reporting tests about this issue mentions what type of display they are checking the files on: wide gamut or not?

Thanks.

It doesn't really matter if we're talking about the unprofiled data in the files that DxO writes.

The data in the DNG is different, so the result is different. There is a reason I didn't apply any camera profile or matrix: it isn't about looking good, it's about the fact that with the wide gamut option on, the DNGs clearly have altered camera data, so any program will do something different with them.

DxO's internal working profile should have NO effect on data that is supposed to still be in the native, unprocessed camera colorspace.

How it looks on anyone's screen is beside my point. The fact is that there are differences where there should be none.

What do you mean by “original camera space”?

I see references to camera color spaces here and there on this forum, and I'm trying to learn what that means. My understanding of RAW image formats (outside of Linear DNG) was that you get an intensity value for each pixel, and you have to know what the color filter array was, process that information, look at neighboring pixels and factor in a white balance value to figure out what "color" a pixel is. And that it's basically up to the RAW development software (an external one like PhotoLab or PureRaw or Lightroom or libraw, or the internal one in the camera's firmware when producing JPEGs and previews in-camera) to invent what RGB value a pixel has and to pick an RGB color space for the image.

Would love to know if I’m mistaken.

This page has some information on the DNG format and Linear DNG.

Linear DNG:

  • Contains de-mosaiced data, usually in RGB (though there are possibilities for other formats)
  • Must contain a generic color profile and tone curve
  • May contain custom color profiles in addition to the generic one

So it seems to me that to produce a linear DNG file, PhotoLab has to demosaic the image and interpret the sensor data into RGB in some color space (you can't have NO color space when dealing with RGB data; at best you can have RGB values meant for an unknown color space, but to render them you'll have to use some color space). Then when exporting that data as Linear DNG, it might put the RGB data from its working color space in as-is and attach the working color space's ICC profile in the DNG, or it might convert to a standard color space (i.e. change the RGB data) such as Adobe RGB or ProPhoto RGB and attach that profile.

The main questions would be:

  • Does PhotoLab save the working space RGB values directly in linear DNGs, or convert them to a different color space? And if it can do both, which options control that?
  • How does DarkTable handle, say, a Linear DNG with RGB values in PhotoLab’s wide gamut profile?

My hypotheses for why you’re seeing differences would be:

  • Either PhotoLab exported a linear DNG with RGB values for a specific color space (maybe the wide gamut color space?) but without attaching the color profile (maybe because of some option selected in your tests).
  • Or DarkTable only renders Linear DNGs well if they're using RGB data and one or several standard color spaces (like sRGB, Adobe RGB and ProPhoto RGB), but the exported file was using DxO's custom profile instead and DarkTable doesn't support arbitrary profiles.

But it could be something else. Hard to debug without being a digital photo format engineer. :stuck_out_tongue:

You are never seeing the pixel values "as they are" with any software that renders RAW files or Linear DNG files. So the tools you are using for your investigation can't tell you what you think they are telling you.

If you want to look at a raw file with a level of technical detail that tells you what is in the data, you can use RawDigger, but I’m not sure it supports Linear DNGs.

I’m probably using the wrong term (non-native English).

But a raw file, stores numbers. You get numbers for red, green and blue. (Not really for each pixel! This is another topic :wink: ).

But for one camera, the number 1100 (just picking something at random) in red can mean ‘really red’, while for another camera that could mean ‘not so red’.

So, those numbers need to be mapped to colors. Only after you know what those numbers mean for a certain camera model, can you convert them to something that is more natural in the digital world (like sRGB, Adobe RGB, etc…).

So, to follow that example. In a RAW file, there is a pixel somewhere which has the value 1100 for red. If you open that raw file in Lightroom (as an example), Lightroom knows that for that camera model, 1100 means ‘very very saturated red’. So it uses that information to display an image on the screen.

Now, if you convert the RAW file to a DNG file… you want that pixel to still have the same number 1100 in the DNG. Because if you open that DNG in Lightroom (again, as an example), Lightroom will interpret that number 1100 according to the camera model and assume it to mean ‘very very saturated red’…

Of course, if in DxO you start to mess with things like exposure, contrast, HSL, saturation, etc… that number will not stay 1100. But that’s logical, you are changing things.
But if we’re not changing things, you want that number to stay 1100 so that the next program that opens the DNG will interpret it correctly.

Now, DxO in PL5 (and PL6 in legacy gamut mode) will do just this: If the original raw file has the number 1100 somewhere for red, it will just write the number 1100 to the DNG without touching it. So that the next program interprets it, and interprets it correctly.

PL6 in wide gamut mode, actually starts to change the numbers. So it now suddenly writes a 900 to the DNG file. Lightroom (again, as an example) will read the number 900, and think ‘hey, that means it is not that saturated red, but a bit less’. Because it still thinks that the numbers have the same original meaning to interpret colors.

Now, a DNG file can contain information that tells programs how to interpret the numbers in the DNG, so that they don't have to assume things based on the camera model. And DxO does write this information.
But that information doesn't change between legacy-gamut and wide-gamut, while in wide-gamut the numbers it writes are different, so they need to be interpreted differently.

This is a very ‘abstract’ and dumbed down version, but the essence is there :slight_smile: .
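If it helps, here is the same story as a tiny numeric sketch (both matrices are completely made up for illustration; they are not DxO's or any real camera's):

```python
# Illustration of the "1100 becomes 900" story above. Both matrices are invented
# purely for illustration; they are not DxO's or any real camera's.
import numpy as np

camera_to_xyz = np.array([[0.6, 0.3, 0.1],      # hypothetical "camera model" matrix
                          [0.2, 0.7, 0.1],      # that a raw converter would apply
                          [0.1, 0.1, 0.8]])
camera_to_wide = np.array([[0.80, 0.15, 0.05],  # hypothetical conversion into some
                           [0.10, 0.85, 0.05],  # wide working space
                           [0.05, 0.10, 0.85]])

raw_pixel = np.array([1100.0, 400.0, 300.0])    # values as found in the raw file

correct = camera_to_xyz @ raw_pixel             # what a reader should get
converted = camera_to_wide @ raw_pixel          # what a converted DNG would store
misread = camera_to_xyz @ converted             # reader still applies the camera matrix

print(correct, misread)   # different results: same metadata, different numbers
```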

A digitized, sampled version of something is just a bunch of numbers. A program needs to know what those numbers mean before it can do something with them.
So imagine there is a universal way to describe what color something is.
If you load a file (be it a camera raw file, or a JPEG off the web), the numbers in that file need to be mapped to that universal color language. And from there, you can convert back to the numbers a device expects, like a printer or a screen.

Reading a normal image file is actually no different; there are just assumptions. If there is no profile, we mostly expect a file to be sRGB. Some people understand that an image file can contain numbers between 0 and 255, and that a red of 255 with blue and green of 0 means 'very saturated red'. But if you tell a program those same numbers (255,0,0) are not sRGB but Rec.2020, they suddenly mean a much more saturated red. And if you tell it they are ProPhoto RGB, they mean a red that is more saturated still.

So, numbers in the digital image world need to be interpreted to mean a certain color. The numbers that a digital camera sensor produces need to be interpreted into 'a universal way of describing colors', and then converted to the meaning you want for your output (file).
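To put rough numbers on that (255,0,0) example, here is a small sketch using the standard published RGB-to-XYZ matrices for sRGB and Rec.2020; the exact same triplet lands on a different chromaticity depending on which space you claim it is in:

```python
# Same encoded triplet, two different assumed color spaces -> two different colors.
# Matrices are the standard D65 RGB->XYZ matrices for sRGB and Rec.2020.
import numpy as np

M_SRGB = np.array([[0.4124, 0.3576, 0.1805],
                   [0.2126, 0.7152, 0.0722],
                   [0.0193, 0.1192, 0.9505]])
M_2020 = np.array([[0.6370, 0.1446, 0.1689],
                   [0.2627, 0.6780, 0.0593],
                   [0.0000, 0.0281, 1.0610]])

red = np.array([1.0, 0.0, 0.0])   # "(255,0,0)" after linearisation

for name, M in (("sRGB", M_SRGB), ("Rec.2020", M_2020)):
    X, Y, Z = M @ red
    x, y = X / (X + Y + Z), Y / (X + Y + Z)
    print(f"{name}: chromaticity x={x:.3f} y={y:.3f}")
# sRGB     -> x~0.640 y~0.330
# Rec.2020 -> x~0.708 y~0.292  (a much more saturated red)
```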

It’s quite common to have something ‘in between’, a ‘number meaning’ that is made for doing calculations and modifications. The ‘working space’ / ‘working profile’.

So, most RAW converters work by reading the numbers from the RAW file, interpreting them to a meaning, and converting that to a working space. There, all the edits and modifications are done. And when it's time to write the output to disk (or to a printer, or…) it's converted to the 'output space'.


Perfect. For display, you are absolutely correct that you need to assume something about what color space the data is in.
The original, unaltered data generated by the sensor (and written to the RAW file) is what I call 'camera space'. This is basically 'no profile' or 'unknown profile'. And you are correct: for displaying, you need to know something about it to convert it to - for example - sRGB.

But, to write a (linear) DNG file, you can absolutely work in 'no profile' / 'unknown profile' land. You don't know what the numbers mean; you demosaic them and write them back to the DNG file. That way the next program can read the data as if it were an original RAW file → the data is still in the original, raw, sensor-produced 'color space' and needs to be interpreted before anything can be done with it.
This is what PL5 did (it writes the numbers to the DNG before interpreting and converting them, but after demosaicing, and it can even do denoising, sharpening, vignetting correction and other stuff without touching the color space! Awesome). PL6 in 'legacy gamut' does the same.
PL6 in 'wide gamut' writes the values to the DNG after interpreting them and converting them to their own 'DxO wide gamut'. At least, this is what I think is happening.

If you tell DxO to do only denoising/optical corrections, it doesn't do this (perfect), but if you tell DxO to do all edits besides color rendering, it does alter the sensor color space when writing to the DNG, while it never did this before. I think this is a bug / mistake, because no other program besides DxO can know what their 'DxO wide gamut' actually means, and thus cannot interpret the data correctly.


RawDigger uses the same engine as FastRawViewer, which is the same engine as libraw, which dcraw_emu uses (because it's an example tool from the libraw toolset).

You can tell it what color space it needs to write the output image in, and it will map to it. But you can also tell it to do no color-space conversion at all. In that case, you get 'raw' values from your raw image… which are technically in an 'unknown color space'.

Now, for display I tag them as sRGB, just so that something is displayed.
Again, the issue has NOTHING to do with how the colors appear on screen. The issue is that the numbers written in the DNG file are different depending on the gamut mode selected, and this should not be the case.

Like you said, a raw converter needs to know what color space the input is in. Most of them look at the camera model and use a profile according to the camera model.

DxO in wide-gamut mode appears to write the DNG in their own DxO wide gamut space, which means the only program that can read that DNG correctly is DxO PL6 itself… which breaks the workflow of using the DNG in any other tool.

We need to either a) get the DNG written before it's converted to DxO wide gamut (after all, old PL5 didn't write the DNG after converting it to AdobeRGB, did it?)… or b) at least get the DxO wide gamut ICC / profile so that other tools have a chance of interpreting the DNG files correctly.


Agreed that RAW files store numbers. I think of them more like bits representing a floating-point value between 0 and 1, where 0 means "no electrical signal" and 1 means "this sensor pixel was saturated because its photon well filled up with photons".

If you cross-reference that information with the color filter array data, you can know that this pixel with a value of 0.20498249341 is a “red” pixel. Then you have to figure out what “red” even means, because color filter arrays for different sensors can have different properties.

You also have to invent data, because for this sensor pixel you only have data in the red channel (since it was behind a red filter), so to find out its green and blue channel values you have to look at the pixels around it. There are different algorithms for that, which can be public or proprietary, and every piece of software will produce a different result here. Demosaicing by DxO PhotoLab or PureRaw and by Adobe Camera Raw or Lightroom or Capture One will produce different output. And you cannot go from this generated RGB data back to the sensor data; it's a one-way street.
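Just to illustrate the "look at the pixels around it" idea, here is a deliberately naive bilinear sketch (nothing like the far more sophisticated, often proprietary algorithms real converters use):

```python
# Deliberately naive bilinear demosaic of an RGGB Bayer mosaic, just to show the
# "invent the missing two channels from the neighbours" idea.
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(mosaic: np.ndarray, pattern: str = "RGGB") -> np.ndarray:
    H, W = mosaic.shape
    # Boolean masks saying which photosite carries which color.
    masks = {c: np.zeros((H, W), bool) for c in "RGB"}
    for i, c in enumerate(pattern):                 # photosites (0,0),(0,1),(1,0),(1,1)
        masks[c][i // 2::2, i % 2::2] = True

    weights = np.array([[1.0, 2.0, 1.0],
                        [2.0, 4.0, 2.0],
                        [1.0, 2.0, 1.0]])
    rgb = np.zeros((H, W, 3))
    for k, c in enumerate("RGB"):
        known = np.where(masks[c], mosaic, 0.0)
        # Weighted average of the known neighbours of this color...
        rgb[..., k] = (convolve(known, weights, mode="mirror")
                       / convolve(masks[c].astype(float), weights, mode="mirror"))
        # ...but keep the measured value where the photosite really was this color.
        rgb[..., k][masks[c]] = mosaic[masks[c]]
    return rgb
```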

Here’s a technical course on how sensors work:

Here’s a nice talk that looks at sensor data and how to produce a RGB image from it:

Finally, when you're producing RGB values (i.e. three integer numbers between 0 and 255 for 8-bit RGB, or between 0 and 65535 for 16-bit RGB, which gives you more room for nuanced differences in values), you also NEED to use an RGB color profile. Your color profile determines what "100% red + 20% blue + 30% green" even means. Depending on the profile, that could even mean a value that is way outside the human ability to perceive colors. So when converting your RAW data like r=0.20498249341, you really need to know what the target RGB profile is so that you produce RGB values that make sense.

Then there is the whole gamma correction / gamma curve business. Not sure if or how that intervenes here, but I wouldn’t be surprised if you need to do some gamma math to produce those RGB values as well. Here’s another talk about gamma correction:

So to produce an RGB value you need this whole pipeline of de-mosaicing and interpreting values for specific pixels into specific colors, and possibly more. Raw development software typically does this interpretation by targeting an internal color space, often ProPhoto RGB or a custom color space (Lightroom and Capture One seem to be using variants of ProPhoto RGB; PhotoLab uses either Adobe RGB or their own large color space, which seems to be a variant of Rec.2020, depending on the setting you choose for each image).

Then in order to display the image on your screen, there's a conversion happening between the working color space and the smaller color space of your screen (usually sRGB, or Display P3). That last conversion is just for temporary display purposes, so it won't affect the final image you export as JPEG, TIFF or Linear DNG (but that export may use a color profile like sRGB, Adobe RGB or ProPhoto RGB too).

Now, could PhotoLab export a de-mosaiced Linear DNG that uses the same value scale as the sensor’s raw data? I suppose it might be possible, but:

  • it doesn’t seem to be an industry practice, instead software tend to use their own working color space or a standard large color space;
  • I’m not sure the Linear DNG format would accept that data (i.e. it might not be specification-compliant);
  • chances are no software would be able to read and interpret that data (unless they specifically decide to work on supporting it).

Without a published standard for “same values as in the RAW file, but de-mosaiced” with a precise specification describing what that even means, and buy-in from industry players, I don’t think your idea of “PhotoLab should just write the sensor raw data, but demosaiced, in a DNG file” can ever work or reflect the reality of what is actually going on when creating Linear DNGs in PhotoLab (or any software).

But I’m wondering what the practical implications are for you. If you’re using PhotoLab or PureRaw, the goal is to interpret the RAW data into RGB. If you don’t want to do that interpretation, and take the original RAW data into a different program, you can just open the original RAW file in that different program.

It seems like the only practical question here is instead "why is that specific image more green than the others when exported with those settings and read in that other software?". Which is an interesting question, but it does not entail what you thought it entails about RAW values in Linear DNG.

Maybe it’s a bug. Maybe it’s the expected output given the input image and the options you chose. Would be good to have someone from DxO looking at this example.

If they produce Linear DNG files with RGB values for their wide gamut profile and embed their profile’s ICC definition in the DNG file, then other software should be able to interpret that correctly.

Which is why I was wondering if the issue was either a) DxO not embedding the ICC profile at all when you use some specific output settings or b) DarkTable not applying that profile correctly.

That actually works from the programming point of view as well. Every camera has a different black point and white point (and it even depends on things like shooting mode, ISO, and so on). The DNG files that DxO writes always have a black point of 0 and a white point of 65535 (the full 16-bit range). So it makes it simpler to think of the values as something between 0% and 100%.

So what I want, what DxO has always done, and what they claim they do: if your raw file has a value of 25% somewhere, that 25% is written to the DNG. The next program opens the DNG, reads the 25%, and interprets it as it would have interpreted the original raw file.
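In code terms, the idea is just this kind of normalization (the black/white levels below are made-up example values; every camera and ISO combination differs):

```python
# The "think of it as 0%..100%" idea: normalize the camera's raw numbers by its
# black and white levels, then rescale to the full 16-bit range a DNG can hold.
import numpy as np

black_level, white_level = 2048, 15600          # hypothetical camera values

def to_16bit(raw_values: np.ndarray) -> np.ndarray:
    fraction = (raw_values.astype(float) - black_level) / (white_level - black_level)
    fraction = np.clip(fraction, 0.0, 1.0)      # 0% .. 100% of the usable range
    return np.round(fraction * 65535).astype(np.uint16)

print(to_16bit(np.array([2048, 5436, 15600])))  # -> [0, ~16384 (25%), 65535]
```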

This is why I said that 'not every pixel has a value', and also why that is a story for a different time: it makes this part way too complicated. And… it doesn't matter for the issue, because we're talking about the color spaces, which don't change.

Almost all software these days operates in floating point and only converts to the final integer values when writing the output file. DxO doesn't support it, but programs that support formats like EXR have to stay in floating point.

In theory, this is all part of the 'color space' thing. Color spaces (and also ICC files) have a response curve in them, so you can have a linear RGB color space and an sRGB color space (which has the sRGB TRC applied).
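For example, the sRGB response curve (TRC) is just this small per-channel function applied on top of the linear values (a sketch of the standard IEC 61966-2-1 formula):

```python
# The standard sRGB encoding curve, applied per channel to linear values in 0..1
# after all the linear color math is done.
def srgb_encode(linear: float) -> float:
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

print(srgb_encode(0.18))   # mid-grey: ~0.46, i.e. roughly 118 of 255 in an 8-bit file
```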

I didn’t list all the steps a RAW software does, because the order and steps differ from software to software and it’s way out of scope for this topic.

It’s what it has been doing for years…

There isn't a lot of software that is used as a 'raw preprocessor', but it's getting there. ON1 NoNoise also tries to do this when writing DNG files, as does the more recent Topaz AI software. DxO's own PureRaw is made to be exactly this: a raw preprocessor that fixes things in the raw data before handing it off to another program.

Basically, this IS what a linear DNG is. It's like any ordinary 16-bit TIFF file, except that it is meant to hold sensor data and thus is in an unknown color space (or… the color space that the camera produced).

Again, this IS the workflow DxO more or less invented years ago when they added the DNG export options. It allows their demosaicing, denoising, sharpening, lens corrections and other stuff to work on the raw camera data, and to write that back to a DNG file so another piece of software can be used to do the tonemapping / editing / etc…

No, not PureRaw. That's not meant to display anything; it's meant to fix data and hand it off to another program, with the data still in the original raw camera space. That is also how I have been using DxO for years, and why they made the DNG export option years ago.

Not the same tools as DxO has now, is it? The healing, the industry-leading denoising, the lens corrections, etc… For years now I have been able to use any raw software out there, regardless of whether it supported my camera or the lenses I own, and no matter how good its denoising is, etc…

The point is that DxO stated that the export-to-DNG option does not alter the colorspace of the raw data, while with the wide gamut option selected it clearly does, so it's a bug.
It's a minor bug in the sense that I can switch to legacy before exporting. But if I want to export a TIFF from DxO I want to use wide gamut, as it's quite nice (and something DxO had to catch up on), and then when I want to export the same images as DNG I have to set the option back to legacy.

And since the option defaults to 'wide gamut' for new files, even if you set something different in your presets… it's annoying, to say the least.

(PS: I have no clue whether DxO can run all their algorithms 'while still in raw camera colorspace', or whether they interpret the data into their working profile, do the edits, and convert it back… but the end result has always been the same: the DNG files written were in the raw camera colorspace.)

Hi,

I don't understand your answer. If the current working colorspace is wide gamut, every computation made while editing will use the extended wide gamut colorspace. This may produce colors that are outside of the camera colorspace. When you export, including as DNG (all corrections except color rendering), the DNG will therefore contain such colors.

Now, when you try to view the image, it must be "realized" so that all pixels match a color that is inside the colorspace of the viewing device. Whatever viewing software you are using, a conversion will be made between the wide gamut color space (which is probably not recognized by software other than DPL 6) and the colorspace of the viewing device. If the device is wide gamut, it has a better chance of correctly displaying the colors coming from the wide-gamut DNG than if it's not.

In other words, if you have decided to work with an extended gamut, you have to use software, printers and viewing devices that can work with it, or at least a working colorspace as close as possible to the initial extended colorspace, and this compatibility must be maintained for as long as possible in your workflow. But since there are currently only rare devices able to fully support an extended colorspace like DxO's wide gamut or ProPhoto, at some point in your workflow significant color differences will appear. Soft proofing has always been there to check this. It's new in DPL 6, but other applications have had this feature for a long time. Releasing software that supports a wide gamut colorspace without a soft proofing feature would be nonsense.
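To illustrate what that "realization" does to an out-of-gamut color, here is a small sketch that uses Rec.2020 merely as a stand-in for a wide working space (DxO's actual wide gamut primaries are not public); the matrices are the standard published D65 ones:

```python
# What "realizing" a wide-gamut color on an sRGB device amounts to: convert via
# XYZ and clip whatever falls outside 0..1. Rec.2020 is only a stand-in here.
import numpy as np

RGB2020_TO_XYZ = np.array([[0.6370, 0.1446, 0.1689],
                           [0.2627, 0.6780, 0.0593],
                           [0.0000, 0.0281, 1.0610]])
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

wide_green = np.array([0.2, 0.8, 0.2])            # a saturated green in the wide space
srgb_linear = XYZ_TO_SRGB @ (RGB2020_TO_XYZ @ wide_green)
print(srgb_linear)                                # the red channel comes out negative...
print(np.clip(srgb_linear, 0.0, 1.0))             # ...so the display clips it and the color shifts
```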

If the working space is "legacy" (which is closer to sRGB than to ProPhoto or to DxO "wide gamut"), there is a better chance that the color differences are so small that you don't notice them.

A similar problem arises when working in the Prophoto colorspace (in Photoshop or Lightroom for example) and trying to print to a personal printer that usually can’t do much more than sRGB.

So yes, I insist, mentioning on what type of display the testing is made is important.