Sun = highlights, moon = shadows.
These warnings have nothing to do with being out of the color space gamut; rather, a pixel value has reached or is near the minimal (moon) or maximal (sun) lightness. It indicates blown-out areas of white and black, where no detail is present any longer. There is no color proofing functionality in PhotoLab to indicate areas that are out of the color space gamut.
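To make the distinction concrete, here is a minimal sketch of how a lightness-clipping warning of this kind could work. This is purely illustrative and not DxO's actual code; the function name, thresholds, and HSL-style lightness formula are my own assumptions. Note how it flags pixels only by lightness, so a fully saturated (possibly out-of-gamut) red is never flagged:

```python
# Hypothetical sketch of a highlight/shadow clipping warning, similar in
# spirit to PhotoLab's sun/moon indicators: it only checks whether a pixel's
# lightness is at or near the minimum or maximum, not whether it is in gamut.

def clipping_warnings(pixels, low=2, high=253):
    """Return (shadow_clipped, highlight_clipped) index lists for 8-bit RGB pixels."""
    shadows, highlights = [], []
    for i, (r, g, b) in enumerate(pixels):
        lightness = (max(r, g, b) + min(r, g, b)) / 2  # HSL-style lightness
        if lightness <= low:
            shadows.append(i)      # "moon": blocked-up blacks, no detail left
        elif lightness >= high:
            highlights.append(i)   # "sun": blown-out whites, no detail left
    return shadows, highlights

pixels = [(0, 1, 0), (128, 128, 128), (255, 254, 255), (255, 0, 0)]
shadows, highlights = clipping_warnings(pixels)
# The pure red (255, 0, 0) is flagged by neither warning, even though it may
# well be outside the output gamut; that is exactly the gap described above.
```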
To be really professional, PL should have a working color space as large as possible and support soft proofing, including rendering intent, to indicate where color information will be lost in a smaller output color space. Here is what Lightroom does: https://www.slrlounge.com/soft-proofing-lightroom-adobe/
Here is a nice lesson about the topic:
Do you guys think “Soft Proofing” is its own topic, or does it somehow belong in this one?
I do not need a specific working color space if I cannot soft proof it against an output color space. I really would like to check how my photos would look on the printed paper before I send them to a printing service. I will leave a vote here if soft proofing is included; otherwise I will create a separate topic for it, with a reference to this one.
Asser, I would create another request. In my workflow I would not need soft proofing in DPL. I export 16-bit TIFFs as master files and then use either PS or, more and more, Affinity for final touches and exporting to print etc…
OK. Side question, if I may: do you store the exported 16-bit TIFFs on disk afterwards, or do you delete them after the final export? I am asking because I am not ready yet to spend 200 MB of disk storage for every image I take. It is just not worth it.
Generally I try to follow good DAM principles. I have the following folder structure:
- Originals: 2019 and so on
- Derivatives: 2019 and so on
So my raw files are stored in the originals folders; the 16-bit TIFFs get stored in the derivatives folders.
As DAM software I use Media Pro, which is catalog based. My naming structure is such that if I create virtual copies of a file in DPL, the name might look like: 20180105_3456_1_master.tif
For prints and emails to friends I might create JPEGs or whatever is needed as “throw-away” files, based on the derivative file. I do not keep those files.
Storage space is cheap. Just look at how much a 1 TB hard drive costs.
Hi, yes, I get your point, certainly after seeing that clip.
Although the colors black and white are also at the ends of the color space.
And playing with the color rendering tool:
you can see “protect saturated colors” at work when I change the default intensity (200 is blackish and 0 is white-ish). When the blue warning is active, it goes up when I change the intensity or other settings.
test with sun and moon
So I agree it is not color proofing (that specific part of color space mapping/remapping is a bit beyond my grasp at the moment, but I get the general idea), but its sun and moon icons still show, in my opinion, whether a value is outside the working color space in saturation and lightness. Rudimentary, but it reacts.
I think that, from RAW file to working color profile, we would like the perceptual version with WB correction, to get a correctly color-balanced image on the preview/monitor in an sRGB color space (a normal monitor), even if the selected color space is wider, like AdobeRGB in this case.
This compresses all color data from the RAW image into DxO's working space instead of flatlining when the edge is reached (relative colorimetric).
(A pity that DxO doesn't show which color space is active, so we assume it is always AdobeRGB on the preview side and selectable on the export side. Then we could do the compression from RAW directly to sRGB if we wanted.)
But when we work inside this color space (AdobeRGB in this case) and we export to a smaller one, the WB and colors change in perceptual mode, which you don't want; then relative colorimetric (cut off/flatline) is better, or you need to recalculate all colors and re-set the WB to maintain the chosen WB when you compress.
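The trade-off between the two rendering intents can be sketched numerically. This is a toy one-dimensional model, not real ICC gamut mapping (which operates on full perceptual color coordinates); the function names and numbers are made up for illustration:

```python
# Toy 1-D illustration (not real ICC math) of the two behaviors discussed:
# relative colorimetric clips out-of-gamut values at the edge ("flatline"),
# while a perceptual-style mapping compresses the whole range, shifting
# in-gamut values too (which is why WB and colors can change).

def relative_colorimetric(value, gamut_max=1.0):
    # In-gamut values are untouched; out-of-gamut values flatline at the edge.
    return min(value, gamut_max)

def perceptual(value, source_max, gamut_max=1.0):
    # The whole source range is scaled into the gamut, preserving gradations
    # but altering every value, including ones already in gamut.
    return value * (gamut_max / source_max)

source = [0.5, 0.9, 1.1, 1.3]          # some values exceed the target gamut
clipped = [relative_colorimetric(v) for v in source]
compressed = [perceptual(v, source_max=1.3) for v in source]
# clipped: 1.1 and 1.3 both become 1.0; their difference is gone
# compressed: all values stay distinct, but 0.5 and 0.9 moved as well
```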
As far as I understand all this:
Demosaicing explained in a simple way.
He sounds as if he had drained a bottle first, but it is explained rather simply. And this reveals why PRIME works before this process: it works on luminance noise, which at that point is not really color related, only photosite related, with a filter on it marked as red, blue, or green.
The actual colorization, the mapping to an RGB color space, is done after that (demosaicing), and there the WB is also calculated by the algorithm that comes with the sensor/camera characteristics.
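The ordering described here (denoise on the raw mosaic, then demosaic, then white balance) can be sketched with a toy example. The RGGB layout, the averaging "demosaic", and the gain values are simplifications of my own, not DxO's actual algorithms:

```python
# Toy sketch of the pipeline order described above: noise reduction operates
# on raw photosite values (before any color exists), then demosaicing builds
# RGB triples, then white balance is applied as per-channel gains.

def demosaic_rggb_block(block):
    """Collapse one 2x2 RGGB block [[R, G], [G, B]] into a single RGB triple."""
    r = block[0][0]
    g = (block[0][1] + block[1][0]) / 2  # average the two green photosites
    b = block[1][1]
    return (r, g, b)

def white_balance(rgb, gains):
    """Apply per-channel WB gains after demosaicing."""
    return tuple(c * k for c, k in zip(rgb, gains))

mosaic = [[100, 200], [220, 80]]       # raw photosite values (R G / G B)
rgb = demosaic_rggb_block(mosaic)      # color only exists from here on
balanced = white_balance(rgb, (2.0, 1.0, 1.5))
```

A luminance denoiser like PRIME would operate on `mosaic` directly, where each value is just a filtered photosite reading, which is why it must run before `demosaic_rggb_block`.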
So, back to ProPhoto or AdobeRGB?
What is the actual advantage of working in this ProPhoto color space we can't really view on a monitor?
The possibility of printing closer to what the camera captured? (Check!)
Is this going to work without a form of preview, as in soft proofing, inside DxO PL?
AdobeRGB is much closer to sRGB and thus less risky to use without this feature.
I think that's the main reason DxO's largest color space is AdobeRGB:
the lack of a soft proofing possibility.
So if ProPhoto RGB is added to DxO without all those soft proofing features (a preview of the change from one color profile to the other, to check that what you edited doesn't fall apart), what will be the profit?
But I am still very interested in what influence the lightness, contrast, and toning tools have before the actual color space mapping is done. (Is it static, or does it recalculate after every correction when a pixel is pulled inside the color space?) (Remember, PRIME noise reduction shows only a preview box without actual full rendering; that is done at export. So the demosaicing process is also really baked in at that export moment.)
I will give you a simple example.
Assume you have an expensive camera which can capture colors outside of the AdobeRGB color space. I have drawn this color space in orange below:
Now you go and capture a photo of a gradient with this camera, where the colors of the gradient cross the AdobeRGB border (upper blue range in image).
Now if the working color space of the software is AdobeRGB, the part of the gradient that lies outside of AdobeRGB border is clipped. The color information for the full gradient range is lost right from the start.
If the user now performs some color adjustments in his photo, so that the gradient colors are moved into sRGB as denoted by the arrow in the image, he will see something like the upper AdobeRGB -> sRGB gradient. The full color range is lost during the camera color space to working color space conversion and cannot be rebuilt, neither on screen nor in the export.
In contrast to this assume that the working color space is ProPhoto RGB, so that no colors are clipped from the camera color space for that gradient. If the gradient is now moved into sRGB, it has a much wider and visible color range (ProPhoto -> sRGB gradient).
A wider working color space allows preserving color range information, even if it cannot be seen on screen at the moment; it could become visible after the colors are transformed.
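The gradient argument can be reproduced numerically. This is a toy one-dimensional stand-in for real chromaticity coordinates (the values, limits, and "shift" edit are invented for illustration), but it shows the same effect: clipping at the working-space edge destroys gradations that no later edit can recover:

```python
# Numeric toy version of the gradient example above. A camera records a
# gradient extending past the working-space limit; a narrow working space
# clips it immediately, and a later "move into sRGB" edit cannot bring the
# lost gradations back.

def to_working_space(values, working_max):
    return [min(v, working_max) for v in values]  # clip at the working-space edge

def shift_into_srgb(values, shift):
    return [v - shift for v in values]            # a later color-move edit

camera_gradient = [0.8, 0.9, 1.0, 1.1, 1.2]  # exceeds the narrow space (max 1.0)

narrow = to_working_space(camera_gradient, working_max=1.0)   # AdobeRGB-like
wide = to_working_space(camera_gradient, working_max=1.3)     # ProPhoto-like

# Later, the user shifts the colors down into the smaller output space:
narrow_shifted = shift_into_srgb(narrow, 0.4)  # last three values identical
wide_shifted = shift_into_srgb(wide, 0.4)      # the full gradient survives
```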
Ah, thanks for this clear view of this; I don't know enough about the mapping from the RAW image to a color space.
But I assume that 0 = black, no luminance, and 255 = white; 0 = no red, green, or blue and 255 = maximally saturated R, G, or B; and that there is no perceptual calculation. (Based on your:
Then indeed the widest color space you can get is the best, even for me as a non-pro with a basic monitor and skills.
(I will PM you about something else concerning the sun and moon icons and saturation, otherwise this thread goes over the cliff, filled by this “side track”. OK?)
I chose black-to-white gradients so that the color details are better visible. In reality they should be colored, but it would be hard to see the color nuances in a small image. Yes, you can PM me. Meanwhile, here is a video which shows how it is possible to move out-of-gamut colors from a larger color space into a smaller one in Affinity Photo. This would not be possible if the non-visible color information were not retained:
I am quite horrified to now have the certainty that DPL uses AdobeRGB as its working space!!!
It has already been said above, but this truncation of the camera's gamut is really violent, with damaging consequences…
A few years ago, I had a lot of discussions about DOP colorimetry with the public relations people of the time, and in fact I had abandoned the use of DOP for my professional work… Now I understand better…
I don't understand such a restrictive choice; much other software either allows choosing the working space or uses a very large linear space in 32-bit or floating point. This allows the least possible loss in subsequent calculations.
You probably know that the characterization of a digital camera can easily exceed the gamut of ProPhoto. The reduction to AdobeRGB leads to unrecoverable losses.
You speak a lot about output on screen or paper, but that is not the most important thing. First, the gamut of an ink/paper pair can exceed A98 in the cyans and yellows. Second, in pro environments, images often go into a bitmap editor to finalize heavy editing that a raw converter cannot do. In this case, exporting from DPL in 16 bits in a large color space is completely illusory, since we have nothing better than A98!
For me, for professional work, this is a complete deal-breaker.
Moreover, the gamma of 2.2 or 1.8 of A98 or ProPhoto is also not the best choice.
Today I learned that:
- Lightroom uses ProPhoto as its internal color space, and the user can NOT change it; this is not a setting.
- Capture One offers the user the possibility to choose which internal color space is used.
- For this reason, many (most) professionals who are serious about color matching are NOT using LR: because it imposes ProPhoto even if your source is sRGB or AdobeRGB.
Therefore I suggest that PhotoLab should play in the more serious league and offer not only ProPhoto but also the possibility for the user to choose which internal color space is used.
Because when someone only edits pictures for the web, sRGB is enough for that person, because most of the world has no clue about aRGB and does not have a better monitor to enjoy it.
Because someone who would like to print a few pictures and has the knowledge will switch to aRGB to get the most out of their prints.
Because there is no display in the world able to show ProPhoto's full color set, so who on earth has the knowledge to post-process a picture in ProPhoto, a color space that cannot be displayed on a screen today?
So WHY convert to ProPhoto at all costs and get lost in conversions from RAW to print or internet?
I guess the least complicated approach is to stick to sRGB all the time.
One level up is to go through an aRGB workflow, but just for prints; even some labs stick to sRGB to make sure that customers get what they can see.
The top, expert level is using ProPhoto somehow, which nobody will see on a display. Let's leave it to the pros for now.
I am not an expert in the colour management so I will let you think about this point.
Just wanted to share what I learned and what made sense to me.
DPL uses the colour space that was used to generate the JPEG (as posted by DxO). This basically leaves either AdobeRGB or sRGB, with their respective limits, as working colour spaces.
Changing colour spaces can change the perception of colours massively, depending on how one's system is set up. Changing colour spaces will also change colours according to the rendering intent, be it on screen or for printing.
If DxO were to add more working colour spaces, they should also add a selection for rendering intent.
Canon shooters can check these things in Canon's own DPP application, which allows several settings for both colour space and rendering intent.
My conclusion is that without proper colour-space-capable media (monitor or printer), there is no real advantage in using a wider (e.g. ProPhoto) color space to view images.
The issue is that many of us do have aRGB or DCI-P3 capable monitors, and many ink sets (see FOGRA39 CMYK) go way beyond the aRGB/P3 gamut. Add to that, at least a few of us are soft proofing (in PS, using ProPhoto as a working space).
Why limit colorspace to the lowest common denominator?
Thanks for your post.
That is the point. Why ?
If the limitation brings any advantage (increased color fidelity, increased processing speed, space savings when storing the images, better editing quality for later use, etc.), let's use the narrower color space. If it has no advantage, use the wider color space.
My opinion is that, given the current selection of printers and monitors, the wider color space scientifically produces a more faithful image. However, as a human, I have some aesthetic arguments as well. Image composition and editing are based on human taste, which differs from person to person. There is no numerical expression for my color-sensing taste (unlike, e.g., the human eye's optical power, expressed in diopters). In my practice, there was no real advantage in using AdobeRGB instead of sRGB, which is why I do not miss wider color space support.
Anyhow, I’m really curious about your practical examples where the wider color space produces better final quality in real life. Let’s discuss other people’s experiences.
I’m also curious about some real-world examples. This is an almost duplicate/closely related topic, and both now end with the same question.
Just added some comments in my previous post.