PhotoLab 4 with X-rite i1Display Studio display calibrator

No data, just my subjective impression. This evening I edited for the first time with the calibrated monitor: photos of the sun going down here at the river Rhine. What I noticed in particular is that I have not changed the colors on any image. With the new color profile, the calibrated monitor is much closer to the original scene. Normally I almost always use HSL to enhance colors.

1 Like

Spoiler: Built-in screen capture usually reads values from video memory.
It does NOT show us what you see. It more or less shows what your processor sends to the graphics unit.

Taking a picture of the screen with an actual camera does not really help either, unless you include a color checker in the shot and balance screen and room lighting etc.

1 Like

But what is sent to the graphics unit is also what I see on the screen, and therefore the same as what you can see in the screenshot. When I put both side by side, there is no difference.

If I can ask one simple question, it is what do you “see” in these images?

I don’t understand; I see three seemingly identical images. The only thing noticeable to my eyes is the position of the “x” for the white point. What am I supposed to look for to get useful information about the settings?

I assume the numbers at the top right are to identify the location of the “white point”, but why is one position better than another? In all these years, I never got a good explanation of what I’m seeing, and why it matters. I’m not even sure why the outer shape is what we see here - is it always the same?

Welcome to the forum! …and thanks! No, I wasn’t aware of that. I will see what I can find.

I will look into this.
Maybe test it with cardboard, and if that works, order it.
The light that is hitting my screen is mostly from behind me, two huge doors to my balcony. It won’t help with that, but it still might be a good idea.

Light coming from behind you is a challenging situation…

If at all possible I would try to rotate the entire setup (desk, seat and monitor) by 90 degrees so that the light from the outside world strikes the monitor from the side and not face-on.

FWIW, the hood I am using is lined on the inside with black velvet-ish material to reduce reflections.

Serious editing is not possible with bright light shining on or reflecting off the screen. Move the workstation away from any window too. Controlled results require a controlled environment. Rooms should be lit evenly and be not too bright. Sunlight shining in is certainly a no-go…

Other than that, here are a few comments about what happens between PhotoLab and our screens. It’s a chain of functions that creates an image from the mess of bits and bytes of a raw file. Please note that the drawing is meant to illustrate the tech, not to completely describe it.

  1. Data representing the image is created in position 1. That’s PhotoLab or any other app.
  2. Data is transmitted to the graphics unit (position 2) over some interface (A).
  3. Data can be manipulated in position 2. That’s where we calibrate a screen in most cases.
  4. Data is then sent to the monitor over interface (B), e.g. the cable that connects PC and monitor.
  5. Data is then processed in position 3. Here, data is made ready for display.
  6. Position 4 is a driver unit that opens and closes the LCD valves to let backlight (5) through or not.
  7. The viewer (8) looks at the screen and sees the image plus reflections from room lighting (7).
     The viewer can change brightness (of the backlight) and contrast (amplification in the driver).

Calibration can take place in position 2, or in position 3 in the case of monitors that allow hardware calibration.
Most monitors (not the cheap stuff) are calibrated at the factory. Monitors that can be hardware calibrated let us access the internal unit (position 3) in order to change how data is translated between B and C more precisely, or in a way that supports the respective need, e.g. printing with the monitor set to 5000 K or normal viewing at 6500 K, etc. Changing color temperature is not done by changing the temperature of the backlight, but by changing the “translations” in positions 2 or 3.
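As an illustration of those “translations”: software calibration in position 2 typically loads a 1D lookup table into the graphics unit, so every channel value is remapped before it travels to the monitor. A minimal sketch, assuming a purely illustrative gamma of 1.1 (not a real calibration measurement):

```python
# Sketch: a 1D calibration LUT as loaded into the graphics unit (position 2).
# The exponent is an arbitrary example value, not a measured correction.
GAMMA_CORRECTION = 1 / 1.1  # exponents below 1 brighten midtones

# Build a 256-entry lookup table: input value -> corrected output value.
lut = [round(255 * (i / 255) ** GAMMA_CORRECTION) for i in range(256)]

# Apply it to one RGB pixel on its way to the monitor.
pixel = (128, 64, 200)
corrected = tuple(lut[v] for v in pixel)
```

Black and white stay fixed (lut[0] is 0, lut[255] is 255); only the in-between values are remapped, which is why calibration changes tonality and color rendition without touching the backlight itself.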

Screenshots come out of the box in position 2, and there’s a lot going on before we actually see the image. A screenshot taken in position 2 will never look the same as what the original viewer sees, unless another viewer has the same equipment set up exactly like the equipment of the original viewer.

From a technological point of view, the sketch illustrates one possible implementation. Other implementations exist and may, or may not, differ considerably.

1 Like

Hm, that’s going technical now :neutral_face:


(short version)
The Eizo CG2730 (main screen) is equipped with a built-in device and does hardware calibration ‘automatically’, based on my custom sets of parameters in Color Navigator 7, and as I’m very happy with the results I did not experiment any further.
My Eizo L767 (extended screen) as well as my notebook are both software calibrated with Quato’s ‘Silver Haze 3’ (i1 Display xx from X-Rite) and Quato’s iColorDisplay software xx.

That’s how it has worked for 3 years now – all set to 80 cd/m².


I’ve been ‘fighting with colour’ for a long time (I don’t remember when I switched from Win95 to XP).
Around 2008, I started exploring colour management more seriously (X-Rite’s ‘Color Munki Photo’ + software, 2x Eizo L767, Epson R2400 at that time).
It must have been the Color Munki’s bulkiness (for monitor calibration), the software’s limitations, as well as the sRGB colour space, that made me decide on an upgrade: the hardware-calibrated Quato IP262ex, bundled with Quato’s extraordinary iColorDisplay software + ‘DTP94’ from X-Rite. Later I learned that the (regular) hardware calibration process was done much more quickly with Quato’s new ‘Silver Haze 3’ (1st generation of X-Rite’s ‘i1 Display pro’).
When my Quato went wrong, I replaced it with the hardware-calibrated Eizo CG2730 + Color Navigator software. Its ‘buttons’ allow flipping instantly between 6 custom calibration targets; meanwhile my second monitor stays on sRGB (manually matched to the main screen when set to my standard: 5900 Kelvin, 80 cd/m², AdobeRGB) – so, no automatic display control stuff.

As I was building paper profiles with Color Munki Photo + software, I got the chance to buy a (second-hand) X-Rite ‘DTP20 UVcut’ (commonly known as ‘Pulse’), which was sold by Quato bundled with their extraordinary iColorPrint software. Right out of the box, their process is (was) a lot faster. In contrast, the easy-to-follow but stripped-down Color Munki Photo software works iteratively, though the profiles can be tuned (and are then as good as with the DTP20). BUT, if you like double-weight paper, better let it dry overnight before continuing.

When I replaced my computer (and changed from Win7 to Win10), I had to realize that X-Rite no longer supports my trusty DTP20 (I was really mad), as around the same time they had started with renewed products, also offering their i1 Studio line to beginners and enthusiasts.
This i1 Studio software *) supports my old Color Munki Photo, the new ‘i1 Studio’ and ‘i1 Studio Display’
[ see https://www.xrite.com/categories/calibration-profiling/i1studio ] … long way to go.

have fun, Wolfgang


*) set up to check Mike’s monitor calibration process

What did you decide?

My solution to this has been to edit in the evening or at night.
During the day, the room is much too bright.

What did you decide?



Mike, I didn’t decide anything, but simply wrote


"This i1 Studio software *) supports my old Color Munki Photo, the new ‘i1 Studio’ and ‘i1 Studio Display’
[ see https://www.xrite.com/categories/calibration-profiling/i1studio ] … long way to go.

have fun, Wolfgang

*) set up to check Mike’s monitor calibration process"


because I CHECKED exactly how to calibrate a monitor with the i1 Studio software (Windows version) BEFORE I wrote my post # 166 to you, so as not to tell you something misleading, aka rubbish.

The moment you open the i1 Studio software, you are prompted to choose between monitor calibration and paper profiling. Maybe you didn’t notice. – And as already said in the referenced post # 214, I use the i1 Studio software only for paper profiling (building ICC profiles for printing), not for monitor profiling, and it’s been some time since I looked at that part of the software … therefore I checked …

have fun, Wolfgang



I likely didn’t pay attention, as I just followed the instructions for calibrating my display. I don’t remember, but that’s likely what happened.

1 Like

From what I have been able to gather, the shape is known as a “chromaticity diagram” and represents the full range of possible colours.

A gamut or range is then laid over that to demonstrate what part of the whole a colour space can cope with…

In this case, I attached this diagram to show that there is a difference in the white point between the three profiles. As to what the numbers mean, I haven’t a clue but, presumably they represent the coordinates of the white point in relation to its surrounding colours and, therefore, what shade of white (colour temperature?) you perceive as “normal” under a given lighting condition.

The outer shape is always the same as it represents the range of all possible colours - it’s the triangles they draw inside it that show the range or gamut a profile is able to render.
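To make the “triangle inside the horseshoe” idea concrete, here is a small sketch that tests whether a chromaticity (x, y) point lies inside the sRGB triangle. The primaries’ xy coordinates are the standard published sRGB values; the point-in-triangle check itself is plain geometry:

```python
# Sketch: is a chromaticity point inside the sRGB gamut triangle?
# (x, y) chromaticities of the sRGB primaries, per the sRGB specification.
SRGB_RED = (0.64, 0.33)
SRGB_GREEN = (0.30, 0.60)
SRGB_BLUE = (0.15, 0.06)

def _side(p, a, b):
    """Signed test: which side of the edge a->b does point p fall on?"""
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def inside_srgb(point):
    """True if the xy chromaticity point lies inside the sRGB triangle."""
    d1 = _side(point, SRGB_RED, SRGB_GREEN)
    d2 = _side(point, SRGB_GREEN, SRGB_BLUE)
    d3 = _side(point, SRGB_BLUE, SRGB_RED)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)  # all on the same side -> inside

d65_white = (0.3127, 0.3290)   # the standard sRGB/AdobeRGB white point
adobe_green = (0.21, 0.71)     # AdobeRGB's green primary
```

The D65 white point sits inside the triangle; AdobeRGB’s green primary sits outside it, which is exactly the “extra greens” the white sRGB triangle in the diagram cannot reach.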

Benq have an article that, sort of, explains it better than I can, but I’m still not sure I really understand all the in-depth stuff. All I really understand is how to profile a screen or printer using a profiling tool. The science and numbers like the coordinates are not something I have been able to relate to in the real world.


I would agree with @herman about changing the position of your workstation if at all possible.

1 Like

I have no idea what the numbers mean, or why the colors are displayed in this rather unusual shape, but based on what you wrote, and looking at the white colored triangle representing sRGB, my immediate thought is that sRGB is not very good at picking up many shades of the color green.

So, if I want my camera to pick up as much data as possible, I should never use sRGB but instead use AdobeRGB.

But I am viewing my images on my 27" ASUS sRGB display - what happens to the data that was captured in AdobeRGB that can’t be displayed?

(These are probably very simplified, and foolish ways of describing things, but if my understanding of things is correct (doubtful), then sRGB must be ignoring anything outside of its rather limited range.)

Or, to exaggerate even more, it’s like taking a photo of a mountain with your huge view camera, and then taking the same photo with a $100 point-and-shoot camera. All the wonderful detail that was there to be captured so well with your view camera no longer exists in the image captured by your P&S camera. Does this analogy represent what we’re talking about in “color spaces”?

(Not to say the huge view camera is any kind of limit - I’m sure you can use an even larger camera, with an even better lens, and capture an even better photo. Which leads me to another question, which I’ll post elsewhere, rather than make this discussion even more confusabobbled…)

That’s certainly what I do.

I cannot say for sure but my understanding is that the output of the software you use for editing in the AdobeRGB space (like PhotoLab) gets sort of magicked into sRGB on the way to the monitor, unless it’s an AdobeRGB-capable monitor, in which case no magic is involved.

All I know is that my digital pictures taken in AdobeRGB, or 5" x 4" transparencies scanned into ProPhotoRGB, look good on my monitor and produce stunning 40" x 32" prints.

As I said before, a lot of this coloury, spacey, techy stuffy is way beyond my pay grade. I just do what I’ve always done and it works as long as I choose the widest gamut possible at the taking/scanning stage.

What a lovely word. I’ll have to add that to my list of sillinesses :laughing:

1 Like

Raw data is just that, raw data.
It has no color space attached, it is just everything the sensor registers.

When you set in-camera aRGB or sRGB it affects only the JPG generated in-camera.
As long as you shoot in raw you don’t have to worry about camera RGB settings.
You may be shooting raw + JPG, in that case the camera RGB setting is applied to the JPG.

DxO PhotoLab uses internally the Adobe RGB color space.
You may be viewing the DxO screens using an sRGB monitor (as I do).
That means that you cannot see all the color gradations DxO is creating.
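As a rough sketch of what that means numerically: converting the most saturated AdobeRGB green into sRGB produces channel values outside the 0..1 range, which the sRGB device then clips. The two matrices below are the standard published D65 conversion matrices (AdobeRGB-to-XYZ and XYZ-to-sRGB, operating on linear-light values):

```python
# Sketch: the most saturated AdobeRGB green falls outside the sRGB gamut.
# Standard D65 conversion matrices for linear-light channel values.
ADOBE_TO_XYZ = [
    [0.5767309, 0.1855540, 0.1881852],
    [0.2973769, 0.6273491, 0.0752741],
    [0.0270343, 0.0706872, 0.9911085],
]
XYZ_TO_SRGB = [
    [ 3.2404542, -1.5371385, -0.4985314],
    [-0.9692660,  1.8760108,  0.0415560],
    [ 0.0556434, -0.2040259,  1.0572252],
]

def mat_vec(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

adobe_green = [0.0, 1.0, 0.0]  # pure AdobeRGB green, linear light
srgb_linear = mat_vec(XYZ_TO_SRGB, mat_vec(ADOBE_TO_XYZ, adobe_green))
srgb_clipped = [min(max(v, 0.0), 1.0) for v in srgb_linear]
```

srgb_linear comes out at roughly (-0.40, 1.00, -0.04): the negative red and blue values say “this green is more saturated than sRGB can represent”, and clipping them to 0 leaves plain sRGB green, a visibly less saturated color. That clipping is what happens to out-of-gamut gradations on an sRGB monitor.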

If that is a problem or not depends on what you want.

I would say that when you create images for web use it should be no problem at all, as you see the images in sRGB just like the rest of the world does (remember that sRGB is the web standard).
Just make sure that you export images in sRGB.

When you want to produce paper print things become slightly more complicated but if you would like to discuss that you’d better start a new topic, it is an entirely different can of worms :grinning:

1 Like

I guess I wasn’t thinking. Since I’ve stopped using ‘jpg’ and I’m leaving the camera in ‘raw’ all the time, that setting is meaningless unless and until I take some photos in ‘jpg’ for whatever reason. I was looking at “the trees”, and not “the forest”. (I guess if the camera is set to ‘raw’ only, that “rgb” setting should be grayed out, like other inappropriate setting selections are.)

Thanks!

Just to confirm: I took this image as ‘raw’, and I now understand PL4 does its work in AdobeRGB. When I finish editing, it will be shown on my sRGB ASUS display. I hardly ever print my images, but if I wanted to, I could only see the result in sRGB because of my display, while the printing service would use AdobeRGB and interpret the image data differently from what I saw. Unless I purchase an AdobeRGB monitor, I guess I’m “stuck” with that situation, despite all my effort in calibration - or am I missing something?

Something else -

Curious if this is supposed to happen. I took this photo this morning, and copied it to the iMac around noon, culling out all the poor images. Then, on the iMac, I did some basic cropping, then put it away until I could finish it in the evening on the ASUS.

I worked on it for half an hour or so, until I thought it was finished. I got curious what it would look like in black&white, so I changed the preset. Interesting, but by comparison, boring. So I changed the preset back to Standard.

My question - everything I had done to optimize the image was still there, except the cropping. It went back to the full-size image. Is this a bug, or is this the way it’s supposed to work?

_MJM2517 | 2021-01-02-boats, seagulls.nef (19.1 MB)

_MJM2517 | 2021-01-02-boats, seagulls.nef.dop (10.7 KB)

It will be rendered differently for sure.

Please never assume a printer / paper / ink / whatever combination will render the entire ARGB (or even sRGB) spectrum of colors.

The whitest white will be determined by the paper and the incident light, almost never the 6500 K that both ARGB and sRGB are based on.

The color gamut will only partly cover sRGB on one end and it may exceed sRGB (or ARGB) on another part of the spectrum.

On my Windows system, when I click the Apply preset thingy, it opens a dropdown box giving previews of the various presets. The previews show if the cropping is applied or not. Is this not the case on your Mac?

1 Like