The internet doesn’t have a color space; it’s the monitor that has a color space.
It’s still confusing to refer to AdobeRGB simply as “RGB”.
The standard gamut for a monitor is sRGB; a wider one is AdobeRGB. Whether an AdobeRGB image is shown with the “right” colors on an sRGB monitor depends on 1) the software used to display the image and 2) whether that software knows the gamut used in the image via an embedded color profile.
When you mention the internet, you really mean the browser. For a long time Firefox was the only color-aware browser on Windows, and even then it was not color-managed by default; you had to change a setting in the preferences.
So even if your friend on the other side of the world has a wide-gamut monitor but not a color-managed browser, he will see different colors.
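To make that concrete, here is a minimal numpy sketch of the conversion a color-managed viewer performs (the matrices are the standard published AdobeRGB- and sRGB-to-XYZ values for a D65 white point); an unmanaged browser skips all of this and feeds the AdobeRGB numbers to the screen as if they were sRGB, hence the visible shift:

```python
import numpy as np

# Linear-RGB -> XYZ matrices (D65), standard published values.
ADOBE_TO_XYZ = np.array([[0.5767309, 0.1855540, 0.1881852],
                         [0.2973769, 0.6273491, 0.0752741],
                         [0.0270343, 0.0706872, 0.9911085]])
SRGB_TO_XYZ = np.array([[0.4124564, 0.3575761, 0.1804375],
                        [0.2126729, 0.7151522, 0.0721750],
                        [0.0193339, 0.1191920, 0.9503041]])

def adobe_to_srgb(rgb8):
    """Convert one 8-bit AdobeRGB triple to an 8-bit sRGB triple."""
    v = np.asarray(rgb8, dtype=float) / 255.0
    linear = v ** (563 / 256)                   # AdobeRGB gamma, ~2.2
    xyz = ADOBE_TO_XYZ @ linear                 # into device-independent XYZ
    lin_srgb = np.linalg.solve(SRGB_TO_XYZ, xyz)
    lin_srgb = np.clip(lin_srgb, 0.0, 1.0)      # out-of-gamut colors clip here
    # sRGB piecewise encoding
    enc = np.where(lin_srgb <= 0.0031308,
                   12.92 * lin_srgb,
                   1.055 * lin_srgb ** (1 / 2.4) - 0.055)
    return np.rint(enc * 255).astype(int)
```

Neutral colors survive the trip (white stays white, grays stay neutral), while saturated AdobeRGB colors have to clip to the smaller sRGB gamut.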
Read the introduction. sRGB is the standard assumed by software and monitors; everybody is supposed to have it.
And the conversion of a picture from AdobeRGB to sRGB depends on the software used. It’s not magic.
A nice article by the way. I’ll read it completely later.
“When you import the RAW file into PhotoLab:
– PhotoLab will apply demosaicking and convert the RGB values from the “native color space of the camera” into AdobeRGB.
– PhotoLab will apply any color adjustments (saturation, HSL, but also FilmPack color rendering if you happen to use that, etc.) in that color space. I call this “working color space” because it is the color space PhotoLab does most of its work in.
– To display the image in PhotoLab, PhotoLab will, after all other processing, convert the image into the color space of your screen.
If you select AdobeRGB for export, that last step will not take place, and everything will stay in AdobeRGB, as you say.
If you choose “as shot” for export, PhotoLab looks in the EXIF data of the RAW file to see whether you set AdobeRGB or sRGB in your camera settings and will either keep AdobeRGB or convert to sRGB.”
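The quoted pipeline can be sketched as three stages. Note the camera matrix below is a made-up placeholder; the real “native color space” conversion is per-camera calibration data inside PhotoLab:

```python
import numpy as np

# Hypothetical camera -> AdobeRGB matrix; the real values are
# per-camera and live in the raw converter's calibration data.
CAM_TO_ADOBE = np.array([[ 1.6, -0.4, -0.2],
                         [-0.3,  1.5, -0.2],
                         [ 0.0, -0.4,  1.4]])

def import_stage(cam_rgb):
    """Demosaicked camera RGB -> AdobeRGB working space."""
    return CAM_TO_ADOBE @ np.asarray(cam_rgb, dtype=float)

def adjust_stage(working_rgb, saturation=1.0):
    """All color edits happen in the working space."""
    gray = working_rgb.mean()
    return gray + saturation * (working_rgb - gray)

def display_stage(working_rgb, working_to_screen):
    """Last step: convert to the monitor's own space.
    Skipped when you export as AdobeRGB."""
    return working_to_screen @ working_rgb
```

The placeholder matrix rows sum to 1, so neutral camera values stay neutral through the import stage, which is the minimum any sensible calibration would guarantee.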
As has been said before, which one you select does affect the JPEG preview you see on the display on the back of the camera. It was for that reason that I chose AdobeRGB: to avoid getting a restricted colour rendition there.
All you wrote is true.
To be exact:
iDynamic and Active D-Lighting are split tools.
The toning part, the high-key/low-key adjustments that stretch DR (in practice a Smart Lighting-like action), affects the JPEG only.
The part that steps down the initial light metering affects the exposure of the sensor, and thus the raw file.
On or off?
My personal use is based on my normal shooting envelope.
I walk around with my family and take shots of things I think would be nice to turn into an image. No tripod, no minutes of preparation. Only a quick choice of how to get my thought into the camera.
I have iDynamic on Auto, so it only kicks in at −1/3, −2/3 or −3/3 EV if it detects an overrun of the scene’s DR. On my older camera, with its micro sensor, I always shot with −1/2 EV compensation as a default, knowing that underexposure is easier to recover than overexposure.
Now I have both worlds: normal exposure, plus an automated correction if the scene needs it.
Less time needed to click a snapshot… fewer moaning family members waiting for me.
From memory (I tested it long ago), my manual EV correction overrules iDynamic.
The weather here is dull, with no high-dynamic scenes, so I can’t test this right now.
Further, I have 4 custom settings:
1. 1.4x electronic zoom, JPEG only, for the moments the lens is too short and I need cropped light metering.
2. Bird mode: back-button focusing with locking, and tracking mode or centre-box mode.
3. A bare aperture-priority mode, all aid systems off. I am in control.
4. Gosh, I don’t know. (I know: again a copy of the aperture mode’s best settings…)
(A sign to rebuild my customs to match my current preferences…)
An extra benefit: when I’ve messed up settings in A or P, I just turn to the customs and see what is set there. Then I know what I decided after closely reading the theory behind the setting. The best option, so to speak.
If any reader here has iDynamic or Active D-Lighting on their camera, and it is summer there with a high-dynamic scene in the garden, please test auto mode and a manual override via spot metering and EV compensation. (I am fairly sure it overrides iDynamic’s settings, but not certain.)
In your case, or for anyone who mostly takes photographs rather than snapshots with a point-and-shoot approach, indeed turn all the “magic” and automated aids off.
There is also slow-shutter compensation and shot-noise compensation, an aid which, if I remember correctly (we posted about this here), is a good feature for raw files too.
Lens shading correction (vignetting) is JPEG-only, but strangely enough DxO PL seems to read and apply it when I activate it on my Panasonic, and thus overshoots the correction, because it does the same thing in its optics module… Knowing this, I haven’t turned it off, but in certain shots I look closely to fix this correction overshoot.
What I tried to say is: don’t be afraid to use aid settings, as long as you know the limitations and benefits. And test the effects on your raw files; see if it’s doing what you expect. Never assume that raw files are bare recordings of manual settings.
Electronics are quite smart these days, sometimes smarter than we think.
Those are not my words. It’s a quote from Wolf himself, part of the link.
A raw file doesn’t have an output color profile, only an input color profile describing the sensor characteristics. Only when exporting an RGB image does the output color profile become important: for which monitor is the image meant? Just to stick with monitors.
Sorry, but in my mind RAW files do not have any color space.
The color space set in the camera is only used for the embedded JPEG, and possibly as a default by the proprietary camera software.
PL works internally with AdobeRGB, and it’s only a working space.
It’s only at export that you choose the color space you want (original, meaning the one from the camera, sRGB or AdobeRGB), and whatever the camera choice, it has no influence and causes no quality loss.
Every capturing device has a range within which it captures the data it is built for.
In camera sensors that range covers hue, saturation and brightness, so a sensor’s color space is just the physical limitation built into the hardware. The second “color space” of the sensor is embedded in the readout of the sensor’s photon charge. (Note that it is the RGGB filter raster over groups of four wells that creates what we call RGB color data; the wells themselves just turn photons into charge.)
The sensitivity of the wells spans the different wavelength ranges: red hues, green hues and blue hues. If calibrated correctly, the sensor readout should be neutral (white light, i.e. what we experience as white light).
(But we all know that different manufacturers have different “neutrals” in color sensitivity, caused by their physical build specs.)
This last “color space” is converted into digital data and mapped into the raw file.
This is what most people see as the camera’s color space: the biggest color space you can use on your computer, by having raw developer software decode the file.
DxO chooses AdobeRGB as the working space for the pixel preview, and if your monitor’s color space is set to AdobeRGB it converts to the monitor’s profile; if sRGB, it converts to sRGB.
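A very crude sketch of the RGGB step described above, turning wells into RGB color data; real converters like DxO interpolate per pixel, while this just collapses each 2x2 block:

```python
import numpy as np

def demosaic_rggb(mosaic):
    """Per-2x2-block demosaic of an RGGB Bayer mosaic.
    One R well, two G wells and one B well become a single RGB sample."""
    h, w = mosaic.shape
    rgb = np.zeros((h // 2, w // 2, 3))
    rgb[..., 0] = mosaic[0::2, 0::2]                             # R well
    rgb[..., 1] = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2  # two G wells
    rgb[..., 2] = mosaic[1::2, 1::2]                             # B well
    return rgb
```

On a uniform scene every 2x2 block reads the same, so the reconstructed image is uniform too; real per-pixel interpolation exists to avoid the resolution loss this shortcut causes.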
One thing is always foggy.
If this conversion is always active, are the clipping points (the borders of the color space) amorphous, changing with the tonal settings?
I think so; this is visible when you use recovery sliders such as highlight and shadow.
So it’s a floating bowl (sRGB) inside a floating bigger bowl (AdobeRGB) inside your sink (camera color space), and you can move both bowls around until you hit the sink wall.
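The bowls picture can be checked numerically: convert a linear AdobeRGB color towards sRGB without clipping and see whether any component leaves the 0 to 1 range. The matrices are the standard published D65 values:

```python
import numpy as np

ADOBE_TO_XYZ = np.array([[0.5767309, 0.1855540, 0.1881852],
                         [0.2973769, 0.6273491, 0.0752741],
                         [0.0270343, 0.0706872, 0.9911085]])
XYZ_TO_SRGB = np.array([[ 3.2404542, -1.5371385, -0.4985314],
                        [-0.9692660,  1.8760108,  0.0415560],
                        [ 0.0556434, -0.2040259,  1.0572252]])

def outside_srgb(adobe_linear):
    """True if a linear AdobeRGB color cannot be shown in sRGB
    without clipping (it hits the wall of the smaller bowl)."""
    lin = XYZ_TO_SRGB @ ADOBE_TO_XYZ @ np.asarray(adobe_linear, float)
    return bool(np.any(lin < 0.0) or np.any(lin > 1.0))
```

A fully saturated AdobeRGB green falls outside sRGB (its red component goes negative), while neutral grays always fit in both bowls.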
Edit: a profile suggests that it has a zero point, which only exists once WB is provided and set.
Raw data itself has no “white light” point; that is given in the EXIF data, which we can also set to a fixed 5600 K.
So the color space as we normally speak of it is the one DxO produces just after demosaicking: AdobeRGB, with a calculated white point, black point and white balance.
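Setting that white point is, at its core, just per-channel gains chosen so that a patch known to be neutral in the scene reads equal in R, G and B. The numbers below are illustrative, not any camera’s real calibration:

```python
import numpy as np

def white_balance(rgb, neutral_patch):
    """Scale R and B so the given patch (known to be gray in the
    scene) reads equal in all three channels; G is the reference."""
    r, g, b = neutral_patch
    gains = np.array([g / r, 1.0, g / b])
    return np.asarray(rgb, float) * gains

# A daylight shot where the sensor sees a gray card as (0.40, 0.50, 0.25)
patch = (0.40, 0.50, 0.25)
balanced = white_balance(patch, patch)
```

After the gains are applied, the gray card reads (0.5, 0.5, 0.5); a color temperature setting like 5600 K is just a way of picking those gains from a table instead of from a measured patch.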
Correct, because the exported output color space has A) a profile (hopefully) suited for the landing device (monitor, smart TV, 4K TV, printer, etc.) and B) a WB (white point/black point).
The point is that we humans see within a certain color space, so every viewing device’s output needs to be converted to that color space. An X-ray photo is likewise converted from röntgen wavelengths (its “color space”) to your sRGB monitor.
What you said about developing programs using the embedded JPEG as a reference would imply that the second part of iDynamic/Active D-Lighting (the tonal contrast change) also affects a raw file’s preview in your developer.
I know for sure that Silkypix’s camera-style profile reads the raw file’s EXIF data and even applies the Ires settings and the color saturation profile, so there are camera settings in the EXIF data which aren’t hard-baked into the raw file itself but can be picked up as additional corrections by the developer’s program.
Apparently, that is an obsolete article:
" Note: This document is obsolete, and is retained here for historical purposes only. It was published on 5 November 1996, as a proposal specification for sRGB as a standard default color space. sRGB has since been standardized within the International Electrotechnical Commission (IEC) as IEC 61966-2-1. During standardization, a small numerical error caused by rounding error was corrected. The viewing conditions were also clarified.
The W3C CSS3 Color specification specifically references “Multimedia systems and equipment - Colour measurement and management - Part 2-1: Colour management - Default RGB colour space - sRGB”. IEC 61966-2-1 (1999-10) ISBN: 2-8318-4989-6 - ICS codes: 33.160.60, 37.080 - TC 100 - 51 pp. as amended by Amendment A1:2003.
The latest official sRGB specification may also be purchased from the IEC."
They have values based on the used sensor/camera/color array.
From that link, quoting Wolf:
“That’s what I call “native color space of the camera”. It is not intended for display, it’s what the sensor “sees””
Without knowing what the camera sees, it’s impossible to continue.
So the input color gamut of the raw file must be known, as well as the desired output color gamut/profile. That conversion is part of the demosaicking process.
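That conversion folds into a single 3x3 matrix during demosaicking. The two camera matrices below are hypothetical, just to show that without knowing the right input matrix, the same raw numbers land on different output colors:

```python
import numpy as np

# Two hypothetical camera -> XYZ matrices for two different sensors.
CAM_A_TO_XYZ = np.array([[0.60, 0.25, 0.10],
                         [0.25, 0.65, 0.10],
                         [0.05, 0.10, 0.94]])
CAM_B_TO_XYZ = np.array([[0.70, 0.15, 0.10],
                         [0.20, 0.70, 0.10],
                         [0.02, 0.08, 0.99]])
# Standard published XYZ -> linear sRGB matrix (D65).
XYZ_TO_SRGB = np.array([[ 3.2404542, -1.5371385, -0.4985314],
                        [-0.9692660,  1.8760108,  0.0415560],
                        [ 0.0556434, -0.2040259,  1.0572252]])

def develop(cam_rgb, cam_to_xyz):
    """The conversion folded into demosaicking: one combined 3x3."""
    return XYZ_TO_SRGB @ cam_to_xyz @ np.asarray(cam_rgb, float)

raw = [0.2, 0.6, 0.3]
out_a = develop(raw, CAM_A_TO_XYZ)
out_b = develop(raw, CAM_B_TO_XYZ)
```

The same raw triple develops to two different sRGB colors, which is exactly why a raw converter must carry a calibrated input matrix for every supported camera.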
I just looked the book up - I knew none of this about her, and about Robert Capa. I’ll probably order the book myself at some point. https://www.amazon.com/Girl-Leica-Based-behind-Robert-ebook/dp/B07P7S3DD9
When I was growing up, I wanted to be a photojournalist. My “hero” was Bronson, who starred in the TV show “Man With a Camera”. He usually seemed to use a 4x5 Speed Graphic, but had a small Leica too. Back then, I was using a Contax II and later a IIa. That led to a Nikon SP. It takes some searching, but his TV series can still be found.
As I grew up, I loved the way people covered the news, and wanted to be part of it - except that they often ended up dead. Memories. The book should be a fascinating read.
As PhotoLab (still?) doesn’t appear to have AWB available as an option, you might find it worthwhile to let the camera fill in the AWB data in the NEF, giving you another easy option when setting WB in post.
If you mostly shoot outdoors (yeah, me, too), you can create a profile set to “daylight”, then set that to your default profile in preferences.
Letting the camera calculate AWB for you does not affect the RAW data (other than the embedded JPG in the NEF), but if you’re using PL and might sometimes want AWB, you need that data.