In my view, the color space is only fixed after exporting from the raw converter,
as JPEG, TIFF, or DNG.
Before that it's changeable, because the converter works in a camera color space which is cut/clipped by the color space selected in the application, and in some apps you can change that selection.
So I use the Adobe RGB camera setting to get some extra "room" when processing the out-of-camera JPEG.
Maybe I'll change my camera setting to Adobe RGB.
The question I have is: is PL's Adobe RGB cut out of the camera's color space, and is the outer section (color data which falls outside Adobe RGB) thrown away, not recoverable when fiddling with EV, hue, saturation, and contrast? If so, then ProPhoto RGB as the base working space would indeed be a much better choice, because data clipped during raw conversion into a smaller color space is gone for good.
Still, I'm convinced that on my non-calibrated sRGB screen I won't see any difference until I print something.
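Whether that "extra room" even exists can be checked with a little matrix arithmetic. A minimal sketch, using only the published D65 matrices for the two spaces (nothing specific to any camera or to PhotoLab): a fully saturated Adobe RGB green has no valid sRGB coordinates at all.

```python
# Published D65 matrices for Adobe RGB (1998) -> XYZ and XYZ -> linear sRGB.
ADOBE_TO_XYZ = [
    (0.5767309, 0.1855540, 0.1881852),
    (0.2973769, 0.6273491, 0.0752741),
    (0.0270343, 0.0706872, 0.9911085),
]
XYZ_TO_SRGB = [
    ( 3.2406, -1.5372, -0.4986),
    (-0.9689,  1.8758,  0.0415),
    ( 0.0557, -0.2040,  1.0570),
]

def mat_mul(m, v):
    """3x3 matrix times 3-vector."""
    return tuple(sum(row[i] * v[i] for i in range(3)) for row in m)

adobe_green = (0.0, 1.0, 0.0)                 # saturated green, linear light
xyz = mat_mul(ADOBE_TO_XYZ, adobe_green)
srgb_linear = mat_mul(XYZ_TO_SRGB, xyz)

# R and B come out negative: this colour has no valid sRGB representation.
out_of_gamut = any(c < 0.0 or c > 1.0 for c in srgb_linear)
print(srgb_linear)
print(out_of_gamut)  # True
```

So an Adobe RGB file really can contain colors an sRGB pipeline cannot represent; the question in the thread is what the converter does with them.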
This is NOT my area of expertise, so I cannot say for sure, but logically speaking I would say this would have to be the case, because Adobe RGB is smaller than the camera colour space.
This is probably only an issue for "future proofing", though (were there to be future technological advances), as we're not likely to see any difference in colour detail on our screens (according to my understanding).
So that's one more reason to keep your raw files and development settings in your archive.
If you need a wider gamut or a bigger color space in the future, it's easy to redo those images: re-edit, relax the limitations, export again. Done.
This can't be done if you archive TIFFs or linear DNGs. Those are already demosaiced and locked into a color space of that time.
That 3D color space visualization "trick" is a very handy way to see how to fit most of the camera data (the image) into the available color space, i.e. which colors get clipped.
Is that an expensive feature?
This is a question best answered by people like @wolf, because I can only speculate. If the camera colour space got mapped just once to the working space at the beginning of the imaging pipeline (without further re-mapping), and if the relative colorimetric rendering intent was used for the mapping, then PhotoLab would re-map the out-of-Adobe-RGB colours so that in effect no colour subtleties were preserved for those initially out-of-gamut colours.
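The collapse speculated about above is easy to illustrate. A per-channel clamp is used here as a crude stand-in for a relative colorimetric mapping (real CMMs are more sophisticated); it sends two clearly different out-of-gamut colours to the very same in-gamut value, so their difference is unrecoverable afterwards.

```python
def clip_to_gamut(rgb):
    """Naive per-channel clamp into [0, 1] - a crude stand-in for
    relative colorimetric gamut mapping (real CMMs do more than this)."""
    return tuple(min(max(c, 0.0), 1.0) for c in rgb)

# Two distinct colours, both outside the [0, 1] working gamut
a = (1.20, 0.30, -0.05)
b = (1.05, 0.30, -0.20)

print(clip_to_gamut(a))  # (1.0, 0.3, 0.0)
print(clip_to_gamut(b))  # (1.0, 0.3, 0.0) - identical: the subtlety is gone
```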
Well, I can see subtle differences for some colours outside of Adobe RGB on my pretty old wide-gamut monitor (esp. in the yellow/red range).
I did this colour management test with PhotoLab 2 and it appears the program uses the Perceptual rendering intent (at least for JPEGs). Additionally, I learnt that the thumbnails in PhotoLibrary and in the image browser are not colour-managed, in the sense that the embedded profile is not read by PhotoLab.
Thank you for confirming/clarifying how PhotoLab 2 handles our colour profiles …
… but thank you also for providing those test images for us (via the link that you kindly provided), so that we can see how well (or how badly!) our other programs and browsers handle the ICC v2 profiles (while taking account of the few limitations mentioned in your text on that linked page).
Hi, agreed of course!
When you talk about ProPhoto or Melissa, you are of course talking about their gamut (which is the same for both), not their TRC. A working space must be linear. Moreover, these spaces are not the largest available …
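The point about linearity can be shown with a few lines of arithmetic: blending two pixels directly on gamma-encoded values gives a different (much darker) result than blending in linear light. A plain 2.2 gamma is assumed here purely for illustration; it is not the exact TRC of any particular profile.

```python
GAMMA = 2.2  # illustrative display-style gamma, not any profile's exact TRC

def decode(v):  # gamma-encoded value -> linear light
    return v ** GAMMA

def encode(v):  # linear light -> gamma-encoded value
    return v ** (1.0 / GAMMA)

black, white = 0.0, 1.0

# Wrong: average the gamma-encoded values directly
naive = (black + white) / 2            # 0.5 encoded
# Right: decode to linear light, average there, re-encode
correct = encode((decode(black) + decode(white)) / 2)

print(round(decode(naive), 3))   # ~0.218 linear: the naive mix is far too dark
print(round(correct, 3))         # ~0.730 encoded = a true 50% linear-light mix
```

This is why a working space needs a known (ideally linear) tone curve, independently of how big its gamut is.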
I agree with that.
I also reported this in 2017 and I did it again three days ago … I also hope for 1.7, but I'm not sure … Affinity also has a lot to learn in color management: it still has no input profile in the development module!
Colour space: almost all of us are working on sRGB monitors (hopefully 100% sRGB). There are only a few affordable 4K monitors which display even 90% Adobe RGB, let alone ProPhoto RGB. So regardless of the internal processing engine, we'll only ever be able to see the sRGB version.
Theoretically I can see the math benefits of processing in a larger space.
Does anyone have some real-world examples where processing in a larger colour space ended up yielding richer images in sRGB? Keep in mind, I'm not looking for examples where processing and outputting in a wider space yielded richer images, as that's not an option yet for 95% of users. And that's before we get to printing.
This is a serious question. My own experiments with Adobe RGB and enhanced colour spaces about ten years ago ended with no visual benefit and bad colours when moving images between applications (Apple Aperture and Photoshop) and/or posting to the web. I was calibrating with basICColor and had quality monitors even then.
PS. Of course input in AdobeRGB, processing in AdobeRGB and viewing in AdobeRGB on an AdobeRGB 95% coverage monitor should yield richer colours. Even input in sRGB, processing in sRGB and viewing in sRGB on an AdobeRGB 95% coverage monitor should yield richer colours too - the monitor has better colours.
My camera can be set to the sRGB or Adobe RGB color space.
There are two points of view:
1) If 99% of output is sRGB and the monitor is also sRGB, why start with a larger color space in the camera, whose edges you can't see and thus can't control?
2) A larger color space when capturing and storing the raw file can give you more headroom for recovering highlights and shadows, even when the working space is set to sRGB output.
But one tiny problem: the region between the sRGB edge and the Adobe RGB edge isn't visible, so strange effects can happen when editing color.
I'm split between the two. Option 1 is the safest: no surprises. Option 2 sounds clear: more data means more to work with.
One thing is always unclear to me: if you load a bigger color space (Adobe RGB) into an sRGB working space, is the data which falls outside the sRGB color space still there, or is it cut off from the start? If it's cut off, option 2 is a non-working idea.
So then I'm bound to sRGB from camera to end product because of my editing limitations in equipment (no soft proofing).
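The clip-first vs clip-last question in this post can be sketched numerically. The numbers below are purely illustrative (a made-up highlight value of 1.3, not data from any camera or from PhotoLab): if over-range data is clipped at import, a later exposure pull cannot recover it; if the working space keeps the data, it can.

```python
def clip(v):
    """Clamp a channel value into the [0, 1] range of a small colour space."""
    return min(max(v, 0.0), 1.0)

highlight = 1.3   # made-up raw value above the small space's ceiling

# Pipeline 1: clip at import (small working space), then pull exposure -1 EV
clip_first = clip(clip(highlight) * 0.5)   # detail flattened to 0.5
# Pipeline 2: keep the wide data, pull exposure, clip only at export
clip_last = clip(highlight * 0.5)          # detail survives at 0.65

print(clip_first, clip_last)
```

In pipeline 2 the pulled highlight lands back inside the small space with its detail intact, which is exactly the "headroom" argument of option 2.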
Color settings are in the menu, yes, but if I remember correctly DxO follows the color space setting of the camera in the EXIF data of the raw file. A raw file doesn't have a specific color space other than the limitations of the sensor. Normally the raw developer is the first place you set the color space and working space; this is where everything beyond the color space of your choice is cut off. Hence my question: is it cut off from the start, or can you move the outer edges inside the chosen color space when changing hue, saturation, and exposure?
Edit: I'm starting to doubt some things.
Working space: is that the maximum color space the raw developer can handle?
Color space setting: only for export purposes(?)
(I can imagine that if you choose a narrow color space (sRGB) in the export settings, the preview "blinkies" for highlights and shadows, and thus color saturation/hue, are also adjusted to that chosen color space. So for now I'm only interested in whether the maximum working space, in which the raw data is converted to RGB levels/data, is still Adobe RGB.)
If so, then it doesn't matter which setting is chosen in camera or for export; but if the working space follows the export color space, it does matter (when input and output are the same color space).
It would make more sense to start your RAW editing with the ability to select the widest color space; not everyone shoots JPEG or does a basic edit just to upload to Facebook or IG. Having the choice of ProPhoto / Adobe / sRGB to start with would facilitate the whole process. If PL can handle it, why not have this option in the menu settings?
It's partly an OS issue. The two OSes handle colour spaces very differently. The more moving around among colour spaces, the higher the chance of some awful results sneaking out during automated colour space conversion.
I'd love to see a real-world comparison between:
an image that started in ProPhoto RGB, went through a complex image processing pipeline in ProPhoto RGB only, and was then saved out to sRGB for printing or web;
an image that started in Adobe RGB, went through a complex image processing pipeline in Adobe RGB only, and was then saved out to sRGB for printing or web;
an image that started in Adobe RGB but was processed in an sRGB pipeline in the same applications.
I'm genuinely curious about how big the difference would be between the three. We might be counting how many angels can dance on the head of a pin, or maybe the difference is huge and improving the processing colour space and output options should be priority number one.
For me it's rather unclear how DxO is using the color spaces it already has.
You can set: original, follow camera setting, sRGB, Adobe RGB, or a personal ICC profile as the export profile.
So that's clear.
What's unclear is this: if you load a raw file and view it on screen as a preview (only on export is it processed to TIFF or JPEG; before that it's a preview), is it displayed according to the export setting? That would make sense.
And if sRGB is chosen, is the full spectrum of the raw file's color profile still available, or is it cut off?
This matters for pushing and pulling during development.
With this whole discussion I wonder whether there are cameras which can reproduce colors outside of Adobe RGB. Canon offers only sRGB and Adobe RGB for in-camera development. If they had the hardware to make use of wider color spaces, wouldn't they offer an export option for that? That would be a killer feature, even if 99% of consumers would never see it for lack of hardware. Therefore I think the discussion here is rather theoretical.
More important is the bit depth, to avoid banding. I would be surprised if PL used anything other than 16-bit. Does Photoshop still default to 8-bit?
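The banding point is easy to quantify. A sketch, assuming simple rounding quantization (not PhotoLab's or Photoshop's actual pipeline): a subtle gradient covering only 2% of the tonal range lands on just a handful of distinct 8-bit codes, but on roughly a thousand distinct 16-bit codes.

```python
def quantize(v, bits):
    """Map a [0, 1] value onto the nearest integer code at a given bit depth."""
    return round(v * ((1 << bits) - 1))

# A subtle gradient: 1000 samples spanning only 2% of the tonal range
gradient = [0.50 + 0.02 * i / 999 for i in range(1000)]

codes_8 = {quantize(v, 8) for v in gradient}
codes_16 = {quantize(v, 16) for v in gradient}

print(len(codes_8))   # only ~6 distinct 8-bit codes: visible banding
print(len(codes_16))  # ~1000 distinct 16-bit codes: smooth
```

A sky gradient squeezed into six output levels is exactly where posterization becomes visible, which is why high bit depth matters more in practice than gamut width for most edits.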
No, PL isn't. But what I mean is: a raw file gets demosaiced and processed into RGB values (otherwise you couldn't "see" the image on your screen).
I set my screen to sRGB in DxO, so I limit the color space for viewing, and then I export in Adobe RGB or set "original" (the camera's EXIF data says sRGB or Adobe RGB). I would find it odd if the on-screen preview were limited more than the exported JPEG's color space. So is the working space cut off at Adobe RGB or at sRGB when I set export to Adobe RGB?
Option one: it converts from the working space down to a smaller space for the screen driver, namely sRGB, while the actual development to JPEG/TIFF is still Adobe RGB.
Option two: there's no use in using Adobe RGB when the screen is set to sRGB (DPL lowers everything to the lowest/smallest color profile in the lineup).
Seeing those horseshoe soft-proofing plots of the two images, I wonder why we still use sRGB and throw that amount of data away. (As the last comment states, it's the viewing device which sets the limitations, not the sensor.)
So my initial mind-bender still stands: if I'm pushing and pulling colors around on my sRGB viewing device, with the driver set to sRGB, what is DPL's working space for the image after demosaicing the raw file? Adobe RGB? Or wider, because demosaicing is a continuous process until you develop? (Shifting exposure and such pulls color inside Adobe RGB.) If that's the case, I'll just set my stuff to sRGB, use raw in DPL, and be done with it. (Export for HDTV and some printing on cheap printers will do fine in sRGB, I think; not worth the hassle of switching to Adobe or ProPhoto as a home enthusiast.)
I did some tests with color and the highlight "blinkies":
I've tried the Nik Collection with ProPhoto and there is a huge color shift; only files converted to Adobe RGB and sRGB work, at least for me.
I had a similar issue; it turned out there was a problem with my ProPhoto ICC profile. You might want to re-install yours.
That said, if PL is working internally in Adobe RGB at best, there isn't a lot of point in up-converting to a wider space on export. The wider space (or Lab space) is needed for internal operations.
If you're exporting to, say, Photoshop, and have PS's internal color space set to ProPhoto, you can convert to ProPhoto on import; then, after performing whatever operations you see fit, you can convert to some other color space. If I'm printing from PS, I usually leave the image in ProPhoto and convert to my paper/ink ICC color space as I soft-proof and print. You need a decent calibrated monitor for this to work.