Larger working space than AdobeRGB – for not losing the colors our often very expensive sensors provide us

Bit depth and gamut are independent of each other.

Within a given gamut, more bits simply mean that the hues within the gamut will be shown with higher precision or differentiation and a lower potential for banding. We want more bits for better hue differentiation. More bits don’t enlarge a gamut.

  • Putting more sheep (bits) in an enclosure (gamut) does not make the enclosure bigger!

Now, if we have more bit depth than what is necessary for the desired precision, plus some leeway for extreme corrections, we can work without hard limits, which seems odd if we think of sheep… If we push sheep out of the enclosure, we still want to keep the count (if we can) and we’ll have to decide what to do: let them go, or bring them back into the enclosure? This is where rendering intent enters the game…
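To make the “keep the count” idea concrete, here is a minimal sketch (with hypothetical pixel values) of how a floating-point pipeline keeps out-of-range values recoverable, while a hard-limited pipeline throws them away:

```python
# Minimal sketch: in floating point, out-of-range "sheep" keep their
# count and can be brought back; in a clipped pipeline they are gone.
import numpy as np

pixel = np.array([0.8, 0.95, 1.3])   # one channel is out of range (> 1.0)

# Float pipeline: push exposure, then pull it back -- nothing is lost.
pushed = pixel * 1.5
recovered = pushed / 1.5
print(recovered)                     # [0.8  0.95 1.3 ] -- count kept

# Hard-limit pipeline: clip first (the "fence"), then try to recover.
clipped = np.clip(pixel, 0.0, 1.0)
print(clipped)                       # [0.8  0.95 1.  ] -- sheep lost
```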


I am a screen viewer, so I print only a very minor amount of my images.
I think the main point is: how many DxOPL users need a larger color space than AdobeRGB?
Because that’s what the developers of DxOPL will take into account in their timetable.

How many printers give a wider gamut than AdobeRGB?
And how many people use those printers?

For editing on screens, AdobeRGB is large enough, and most screens can’t show beyond AdobeRGB.

All those questions the DxOPL developers have surely asked themselves too.

It’s interesting to see the divergence of video and stills technology. Video technology has advanced with the emergence of new HDR televisions and projection technology for in-theater use (although the percentage of available HDR content is still fairly limited). But stills technology is still limited by available monitors that mostly don’t support HDR. I’m sure most of us stills photographers (especially those who mostly see photos on-screen) would benefit from the changing landscape, but it may take a while to get there.

Exactly.
But the larger the gamut, the bigger the steps: gamut range / number of values = step size.
That’s the main reason AdobeRGB is normally used at 16 bit, meaning 65,536 steps per channel. Obviously, when the gamut gets larger, the steps get larger too.
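A quick worked illustration of that formula, treating the gamut as an abstract numeric span (the units here are hypothetical, just to show the proportions):

```python
# Toy illustration of "gamut range / number of values = step size".
# The range is an abstract numeric span, not a real colorimetric unit.
def step_size(gamut_range, bits):
    return gamut_range / (2 ** bits)

print(step_size(1.0, 8))    # small gamut,  8 bit -> 0.00390625
print(step_size(1.0, 16))   # small gamut, 16 bit -> ~0.0000153
print(step_size(2.0, 16))   # doubled gamut, same bits -> steps twice as big
```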
Color management deals with color – analogue colors – and with how one handles the move from one gamut to another: the rendering intent. But that was not the purpose of my posts.

George

An unlimited number of steps in an unlimited gamut is what we want, and from older posts I took it that DPL provided this capability within the scope of the number space used. Now AdobeRGB seems to set limits, but we still don’t know how all of it is “stuffed” into the gamut of the screens or printers we use. Anyway, we’ll have to work with hard proofs until soft proofing is introduced. Until then, I set my colours in Lightroom before printing.

And what is that soft proofing in LR doing? It’s showing the colors in another color space and which colors can’t be reproduced. Let’s leave the rendering intent used aside.
But back to what I wanted to mention: I think that at demosaicing time every color is used, independent of the color gamut. So no extra blinkies are added beyond the overexposure blinkies.

I just wonder if this is true.

George

Hi,
I’ve got some feedback from a friend for the specialists :smiley:

“…In the Develop module, by default Lightroom Classic CC displays previews using the ProPhoto RGB color space. ProPhoto RGB contains all of the colors that digital cameras can capture, making it an excellent choice for editing images.”

I’m still wondering… if I have an AdobeRGB- or sRGB-capable monitor, certain colours are not displayed; but by working in ProPhoto mode, the colour nuances/gradations should actually be displayed in more detail within the respective colour space… right?

I have already considered asking Eizo, as the topic has been discussed for several years, but apparently it is still open to discussion.

greetings

No, I don’t think so.

George

Hi George,

Belief is religion - knowledge is science :smiling_face_with_three_hearts:
or as my former boss used to say " There is my truth, there is your truth and there is the TRUTH".
The subject is not so important for my own requirements: using a lot of information from the forum, I can hand my pictures to the lab and get them back matching the development in DxO, AP, or LR. My monitor covers 100% sRGB and is calibrated with a SpyderX; I know the lab’s printer and paper, use the appropriate profiles, and voilà.
Nevertheless, I find it exciting how the topic is filled with many opinions, ideas, old info, etc.

have fun

Nope. The screen driver renders the color space to the chosen profile of the screen, no matter which working space you are in within an application. Only the chroma and luminance sliders would work over a larger range between black and white, and between no color and full saturation. (I don’t know whether the compression rendering is linear squeezing or simply clipping.)
I don’t think you can run a ProPhoto profile on an sRGB screen and get authentic colors.
I would say they look bleached: no full saturation, because that’s out of gamut.
But I am no match for pro printers of images and keep my work safe in AdobeRGB and sRGB. :grin:

I don’t believe, I think, and that’s the door to knowledge. :smile:
Your monitor can only produce colors in its native color space. So some remapping must take place, and for that, perceptual is chosen. See for example Color Management: Color Space Conversion.
That’s why I don’t think a larger color gamut will reduce blinkies.
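To see why remapping is unavoidable, here is a minimal sketch using the commonly published conversion matrices (ProPhoto RGB to XYZ D50, Bradford adaptation to D65, XYZ to sRGB, all in linear light): a fully saturated ProPhoto green lands outside [0, 1] in sRGB, so an sRGB monitor simply cannot show it as-is.

```python
# Minimal sketch: a saturated ProPhoto RGB colour converted to sRGB
# falls outside [0, 1], i.e. out of the monitor's gamut.
import numpy as np

PROPHOTO_TO_XYZ_D50 = np.array([
    [0.7976749, 0.1351917, 0.0313534],
    [0.2880402, 0.7118741, 0.0000857],
    [0.0000000, 0.0000000, 0.8252100],
])
D50_TO_D65 = np.array([                    # Bradford chromatic adaptation
    [ 0.9555766, -0.0230393,  0.0631636],
    [-0.0282895,  1.0099416,  0.0210077],
    [ 0.0122982, -0.0204830,  1.3299098],
])
XYZ_D65_TO_SRGB = np.array([
    [ 3.2404542, -1.5371385, -0.4985314],
    [-0.9692660,  1.8760108,  0.0415560],
    [ 0.0556434, -0.2040259,  1.0572252],
])

prophoto_green = np.array([0.0, 1.0, 0.0])          # linear ProPhoto
srgb = XYZ_D65_TO_SRGB @ D50_TO_D65 @ PROPHOTO_TO_XYZ_D50 @ prophoto_green
print(srgb)                        # R and B negative, G above 1.0
print(np.any((srgb < 0) | (srgb > 1)))   # True -> remapping is required
```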

George

@George & all …

As already said, the size of the colour gamut (= colour space) has nothing to do with clipping from overexposure.

  • An overexposed raw-file stays overexposed, no matter what colour space is used.

Clipping colours from the raw file means they are cut off (thrown away) or shifted/squeezed into the smaller AdobeRGB colour space; I’m not sure how PL handles that.
– What’s known: during image export, DxO uses the perceptual rendering intent and no black-point compensation.
– Not to forget, the amount of colours outside PL’s internal colour space (AdobeRGB) depends on the subject, the cam/sensor and the conditions. – On a foggy day, the ‘needed colour range’ will be much smaller than in full sunshine, not to mention some HDR conditions.


LR internally uses the widest colour space available, commonly known as MelissaRGB – similar to, if not the same as, ProPhotoRGB, only named differently – which does not restrict the user, neither during raw-file conversion nor while editing!

The screenshots from post # 13 → Larger working space than adobe rgb - for not loosing colors our often very expensive sensors provide us - #13 by Wolfgang
show how to activate a warning while in LR’s soft-proof mode,

  • screenshot #1 – for mismatch of the pic’s and the (current) monitor’s colour range
  • screenshot #2 – for mismatch of the pic’s and the (chosen) printer/paper’s colour range

to indicate which part(s) of the pic’s colours are currently out of gamut.

When printing, the LR user has the choice to handle (to map) those exceeding colours with relative or perceptual rendering, and to adjust the pic to get the most out of it – e.g. to keep important textures etc.
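As a toy illustration of that choice (not the actual ICC algorithms, just the general idea): relative-style mapping leaves in-gamut colours alone and clips the rest, while perceptual-style mapping compresses everything so relative differences – i.e. texture – survive.

```python
# Toy illustration (not the real ICC algorithms): two ways to bring
# out-of-gamut values into [0, 1].
import numpy as np

values = np.array([0.2, 0.6, 0.9, 1.3])    # 1.3 is out of gamut

clipped = np.clip(values, 0.0, 1.0)        # "relative": in-gamut untouched,
print(clipped)                             #  out-of-gamut detail lost

compressed = values / values.max()         # "perceptual": everything shifts,
print(compressed)                          #  relative differences survive
```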


Thanks Kyosanto for trying to put this subject back on the rails.

You’re right about the subject: who does DxO target with PhotoLab? Does it want to stay in the race of high-end software? Technology is evolving fast, and the target too: soon, if not now, the largest target will be phone and social-network users.

This is not the point.

It seems there’s a lot of confusion between working color space and display color space.
I have been rendering 3D for more than 20 years.
3D render engines now work in 32-bit floating point (not a color space, but the biggest space available to them to describe light). When working with them and viewing images on a monitor, you have to choose a display color space (one your monitor can display, of course). So yes, you can’t see all the colors and values your 3D software is computing. But after that you save your images to disk, and here is a big choice: for what purpose are the images intended? Either you convert your images to a smaller color space and lose colors and precision, or you save them in a larger color space than your monitor can display – even in 32-bit floating point for later work (compositing, generally) – without having lost anything.
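A minimal sketch of that display step, using the standard sRGB transfer function (the scene values are hypothetical): the clip and gamma encoding only affect what the monitor shows; the float working data keeps its full range.

```python
# Minimal sketch of a display transform: scene-linear float data is
# only clipped and gamma-encoded for the monitor; the float working
# data itself keeps its full range for later compositing.
import numpy as np

def linear_to_srgb(x):
    """Standard sRGB transfer function (applied to [0, 1] values)."""
    return np.where(x <= 0.0031308, 12.92 * x,
                    1.055 * np.power(x, 1.0 / 2.4) - 0.055)

scene_linear = np.array([0.001, 0.18, 0.75, 4.2], dtype=np.float32)

display = linear_to_srgb(np.clip(scene_linear, 0.0, 1.0))  # view only
preview_8bit = np.round(display * 255).astype(np.uint8)
print(preview_8bit)          # 4.2 shows as 255 on screen, but...
print(scene_linear)          # ...the float file still holds 4.2
```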

DxO’s working space is AdobeRGB and no more. That’s it. And the working space clips (or moves in) everything that is outside it. No return possible.
PhotoLab should work in a larger working color space (to stay in the race of top photo software), but provide smaller display color spaces to match monitors.

So yes, there’s almost no need for a more modern color space to view or print (the most usual way) in sRGB, and probably no need for it to print on a mid-range printer. For now, anyway.
But then why buy an expensive sensor? Soon cell phones and their very smart algorithms will give nearly the same images as a mid-range DSLR.

So the question is still here: who does DxO target in the future (a future that is already here)?
And why does it provide calibration for high-end sensors and glass if it CLIPS the result they give?


@JoPoV
Understood. And I agree.
DxO must decide whether they want to be at the top of development or not.
And if ProPhoto is in demand, they should answer that call.
But then the soft-proofing part should be done too.
That application @Wolfgang pointed me to is a great visualisation app for comparing profiles and color spaces.
As @wolfgang correctly wrote, a camera’s color space depends on the sensor, and the stored color space – i.e. the exposure data in the raw file – depends on the scene, in color and contrast. You can’t store color you didn’t capture.
So I would like a raw-file color-space viewer set against the color space I work in, as I showed rather rudimentarily above.
And a crosshair to see where a given part of the image sits in the color space.
This way I can examine in detail how the raw file’s data fits in the working space (how much it sticks out). The blinkies and histogram only show clipping, and you need to compress to fit – so you deform (change) the hue and colors blind, until you can see them in such a viewer.

So I am with you: DxO needs to step up its game in the color-management department.

Peter

Dear Peter,
one side note from the Eizo webpage:
“When photographing in the RAW format, the raw data recorded by the sensor is saved as RAW files, i.e. ‘in the raw’, without conversion and without a profile. A RAW image file contains all the colours that the sensor can capture, but without any information on how to interpret these colours – without a profile.”
I know that everybody knows this, but :innocent:


What you have is a charge, produced by light of a certain wavelength, on one photocell.
The Bayer array gives a grid which allows a red, green, or blue wavelength to reach a given photocell (pixel).
So the data itself is “colorless”.
Full charge is “white” and none is “black” – or, to label a green pixel: saturated green and no green.
So the color space of a sensor is theoretically fixed, but the use of that space is not: that depends on the scene you capture.
A raw file is a map of all the pixels’ charges – colorless, arranged by the grid pattern R, G, B, G.
A raw converter interprets this info and creates a white point and black point, from which the colors in the used color space – AdobeRGB in this case – are calculated.
If you change WB, it calculates again. If you change the CA correction, it calculates again. So no raw-file data is clipped until you export.
The only thing that is clipped or compressed is colors sitting outside the working color space.
Edit: the other clipped thing – luminance and colors just outside the DR of the sensor – is simply not captured, so no larger color space in the raw converter will recover that.
It can be simulated, though. And some tools can fill in detail which was never captured – artificial color, so to speak.
At least that’s how I learned it.
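A toy sketch of that idea (the raw values and white-balance gains below are hypothetical): the converter recomputes its interpretation from the untouched raw numbers, and clipping only appears when encoding for output.

```python
# Toy sketch: the raw data is never changed; a raw converter just
# recomputes its interpretation. White balance here is simple
# per-channel multipliers (hypothetical values).
import numpy as np

raw_rgb = np.array([0.60, 0.52, 0.21])      # demosaiced, camera-native

def develop(raw, wb_gains):
    """Re-run the interpretation from the untouched raw values."""
    return raw * np.asarray(wb_gains)

daylight = develop(raw_rgb, [2.0, 1.0, 1.5])   # one interpretation
tungsten = develop(raw_rgb, [1.4, 1.0, 2.3])   # change WB: recompute,
print(daylight, tungsten)                      # raw_rgb itself untouched

# Clipping only happens at export, when encoding to the output space:
print(np.clip(daylight, 0.0, 1.0))             # red channel capped at 1.0
```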

As far as I know, the rendering intent used in photography is perceptual. Everything is compressed into the smaller gamut, causing NO extra blinkies…

George

Say a value has 0–255 steps between no exposure and fully saturated exposure (8-bit RGB).
The underexposure blinkie is triggered at 10 and the overexposure blinkie at 245.
The DR of a pixel is rendered and compressed into the 0–255 of each RGB channel. L is a value common to all three, i.e. the lightness across all three colors.

Thus the histogram and blinkies are no tool to examine a raw file: they show the JPEG’s RGB export values, in 8 bits, for the present state of the development sliders.

And what I would like is a tool to examine a raw file.
FRV has a histogram and channel-exposure-level viewer which gives more accurate information about the raw file’s exposure status.
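A minimal sketch of the blinkie logic described above, with the 10/245 thresholds applied to a hypothetical 8-bit preview, to underline that the warnings inspect the rendered preview, not the raw data:

```python
# Minimal sketch of blinkies: thresholds are applied to the 8-bit
# *rendered preview*, not to the raw file itself.
import numpy as np

preview = np.array([[3, 128, 250],
                    [12, 246, 90]], dtype=np.uint8)   # hypothetical values

UNDER, OVER = 10, 245
under_blinkies = preview < UNDER      # shadow warning
over_blinkies  = preview > OVER       # highlight warning
print(under_blinkies)
print(over_blinkies)
```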

Of course, I mean clipping as indicated by the blinkies.

George


Thanks JoPoV, what you say is also very true.

I will not paraphrase you and the others who said very interesting things. I support the enlargement of the working color space and the soft-proofing features.

If I had to rank it:
1 – A larger and selectable working color space, or the ProPhotoRGB color space if you need to hardcode it. The thing is, I want a workflow that doesn’t force me to choose between DPL’s amazing denoise and demosaicing algorithms and strict color fidelity. Today, if I have to make a fine-art print from my medium-format RAW, I use CameraRAW. At the very minimum, let us output 16-bit TIFF (or more?) in ProPhotoRGB, without the color-losing internal AdobeRGB conversion step. It would straighten the workflow.

2 – Soft proofing and better management of color profiles, with actual rendering simulation (and black-point compensation).

3 – An updated printing module with pattern printing, i.e. placing the same picture several times in a given pattern if you want to print it large, which I do sometimes – for example, being able to copy-paste the image and move/rotate it to optimise the paper when printing from a very expensive roll. This latter functionality is missing from Adobe; you have to lay out your printing pattern by hand…
