This is only meaningful for the JPG or TIFF made in the camera.
An RGB image isn't mentioned, but I think it will be opened with that image's embedded settings. I'm not sure about that.
Your photos will always be viewed in the color space of your display. If you're lucky, color-aware software is being used: when the embedded color profile differs from the display's, the image will be converted to the display's color space. If not, you will see wrong colors.
Whatever you do with your pictures is up to you. But again: the selected color space does not affect the raw conversion.
Because of all these discussions, I went out today to take outdoor photos with a 40-year-old Nikon E-Series lens, with no auto anything. Manual focus, manual aperture setting, I had to decide everything. The camera was set to AdobeRGB, processed in PL4, and exported using the same export settings as were used when the camera recorded the image - AdobeRGB. Everything seemed to go according to plan, except for White Balance. I set the camera to 5500K, and the images looked more “blue” than I remembered, so I used the white balance tool to adjust them.
I tried to follow the guidelines from everyone here, as best I knew how.
I’m very curious if the image from this morning is “acceptable” to everyone here. I know it could always be tweaked, to look different, but is this image, as-is, an acceptable starting point for an image to be processed in PL4 ?
(With the help of a technician at x-rite, I downloaded and installed my display calibrator, but I have a few more questions to ask them on Monday before I finish the setup. I might also post the question in this forum - it’s pretty generic I think.)
I’ve scrolled through what feels like hundreds of posts. It looks like there is a big misconception about RAW files, particularly with regards to color spaces, but also with a few other things.
User George has given the right answers. None of this matters with a RAW file. Let me describe a simple way of thinking about RAW files that I have found helpful.
RAW means raw.
A RAW file contains the raw sensor data. We know this and yet ignore it. RAW = raw, i.e. no processing.
To turn the sensor data into a color requires processing; therefore, a RAW file cannot have a color space. Or a white balance. Or anything that requires processing.
Here’s what happens when you take a photo:
Photons hit the sensor and are converted to an electrical charge. This charge may be boosted by a gain circuit (driven by the ISO setting) and is then converted to a number using an analog-to-digital converter. Some camera manufacturers may include some other small tweaks, but that’s basically it: the RAW file is a collection of numbers, each representing an electrical charge.
Color space? White balance? This is all done starting from the RAW data. In a RAW file, you can’t lose or gain any amount of color by selecting a color space in the camera. All the information captured by the sensor is in the RAW file. There is no way to get more (or less).
A JPG image is totally different. It is a processed image that is created from the RAW data—it is never used to create the RAW data.
Converting the RAW numbers into something that looks like an image is always a somewhat arbitrary procedure—there is no absolutely right way to do it. Many software programs will use the metadata to guide the process, at least for the initial presentation (many also apply an automatic minimal amount of sharpening and maybe a few other things). What you’re seeing is not the RAW file—it is an interpretation of the RAW data. You can’t view a RAW file—it’s just numbers.
This default presentation seems to fool people into thinking that their camera settings affect the RAW file. Well, some do, obviously—the things that affect the RAW image are things that affect the number of photons falling on a pixel such as shutter speed, aperture, ISO, etc.
In-camera white balance and color space settings do not affect the number of photons reaching a pixel and are thus not relevant to the RAW file, other than that they are stored as metadata along with the sensor data. Software can use or ignore this metadata.
I’m going to address another common misconception: the one about ISO. There’s nothing wrong with ISO 400 (or ISO 100 or ISO 6400), but people have the idea that lower ISOs generate less noise (technically, I should be saying “have a higher signal-to-noise ratio” or SNR). This is incorrect: for any given exposure, higher ISO values have higher SNR (i.e. they are less noisy).
Note the qualifier “for any given exposure”. In this case, this means shutter speed and aperture. Keep the shutter speed and aperture the same and I guarantee that the shots with higher ISOs will be no more noisy and usually less (they will, sadly, also have less dynamic range, which is why we don’t shoot using high ISOs all the time).
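To make the "for any given exposure" point concrete, here is a toy Python simulation. All the numbers (photon count, read-noise levels, gain values) are made-up assumptions, not measurements from any real camera; the point is only that the ISO gain amplifies the signal and the upstream noise equally, while the downstream (post-gain) noise stays fixed, so the amplified signal drowns it out:

```python
import numpy as np

rng = np.random.default_rng(0)

def capture(photons_mean, iso_gain, n=200_000):
    # Shot noise: photon arrival is Poisson-distributed.
    signal_e = rng.poisson(photons_mean, n).astype(float)
    # Upstream read noise (before the gain), in electrons. Assumed value.
    upstream = rng.normal(0.0, 1.5, n)
    # Downstream noise after the gain (e.g. ADC noise). Assumed value.
    downstream = rng.normal(0.0, 4.0, n)
    return (signal_e + upstream) * iso_gain + downstream

def snr(x):
    return x.mean() / x.std()

# Same exposure (same mean photon count), two ISO settings.
low_iso  = capture(photons_mean=50, iso_gain=1.0)
high_iso = capture(photons_mean=50, iso_gain=8.0)
print(f"SNR at base ISO : {snr(low_iso):.2f}")
print(f"SNR at 8x gain  : {snr(high_iso):.2f}")
```

With identical exposure, the higher-gain capture comes out with the higher SNR, exactly because the fixed downstream noise matters less relative to the amplified signal.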
The misconception is so common, I wrote a white paper to address it:
Before you tell me I have it backwards, read the document.
Let me add that this does not apply to any truly ISO invariant camera (none exists although some come close). In such a camera, by definition, the ISO setting is irrelevant and the noisiness of an image depends solely on the exposure. However, even in such a camera, there are advantages to using the higher ISO numbers (except where you have a generally dark image with a large dynamic range).
There are certain environments (such as in photographing birds) where the choice of aperture and shutter speed are constrained. In these environments, the best setting for ISO is automatic.
What I was taught, and what I believed before things got so confusing here in the forum, was essentially what you wrote. A raw file is a "capture" of the data on the sensor at the moment the image was taken. The amount and color of the light are converted into data, and that data is recorded in a file.
The camera reads that data and creates images we can look at. So does a computer later, when it is loaded with that data. None of the tools we use to edit an image have any effect on the raw data. Changing the light (varying the aperture) will instantly have an effect, as will changing the shutter speed.
It makes sense that any "gain circuitry" driven by an ISO setting would have an effect on the sensor, making it more or less reactive to light. At the extreme, if the gain circuitry were set so the sensor did not react to light at all, there would be no data; the "image" extracted from that sensor data would be a black rectangle. If the gain circuitry were set the other way, so the sensor was overwhelmed by the light hitting it, the image eventually extracted from that data might be a white rectangle. In this crude terminology, I can imagine how the raw file will lead to a lighter or darker image later on, when it is interpreted by software to make an image.
Once any gain circuitry is set, if someone were to measure the light settings of every pixel, one at a time, that is the data that leads to a “raw file”.
…just like in the film days, when film that was known to be underexposed could be “pushed” in the development process, to get more useful data from a negative.
To be honest, we ought to talk more about what happens as you change the ISO dial from 100, to 1000, to 10,000, and maybe to 100,000. Does the raw file change as we might expect it to change?
Sometimes, as in shooting birds like you noted, the shutter speed needs to be quite high, and the aperture needs to give me a "sharp" bird over a "less sharp" background. I usually don't care what the ISO ends up as, and with the noise-filter technology built into PL4, that is much less important than it used to be.
I used to allow the ISO to be whatever I needed to make the image “work”. Following your suggestion and leaving the camera in “Auto ISO” sounds like a very good option to me, at least most of the time.
A correction to another common misconception: the ISO setting does not affect the sensitivity of the sensor. It applies a gain to the signal coming from the sensor. And it only applies a gain—the signal will never be less, only more.
Clipping can occur when a pixel is saturated with photons. It can also occur when the ISO gain raises the signal above what the analog-to-digital converter (ADC) can record. This is why increasing the ISO drops the dynamic range and why you have maximum dynamic range when using the base ISO (i.e. no gain).
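A toy calculation may help here. The numbers below (a 10,000-electron full well, an ADC that spans exactly the full well at base ISO, a constant noise floor) are purely illustrative assumptions, not specs of any real camera—in real cameras the read noise also changes with ISO. But they show the mechanism: each doubling of the gain halves the electron count at which the ADC clips, costing roughly one stop of dynamic range:

```python
import math

# Illustrative assumptions, not real camera specs.
FULL_WELL_E  = 10_000   # pixel saturates at 10,000 electrons
ADC_MAX_E    = 10_000   # at base ISO (1x gain), ADC max == full well
READ_NOISE_E = 4.0      # noise floor in electrons (held constant here)

def clip_point(iso_gain):
    """Highest electron count the ADC can still record at this gain."""
    return min(FULL_WELL_E, ADC_MAX_E / iso_gain)

for gain in (1, 2, 4, 8):
    dr_stops = math.log2(clip_point(gain) / READ_NOISE_E)
    print(f"gain {gain}x: clips at {clip_point(gain):>6.0f} e-, "
          f"DR ~ {dr_stops:.1f} stops")
```

Under these assumptions, base ISO gives the full well before clipping, and every doubling of the gain drops the clip point (and the top of the DR ratio) by a stop.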
By the way, I should add that there is no benefit from using non-native ISO values. Many cameras have, say, a native ISO up to 6400 (using gain circuitry) and then non-native values like 12800 (using digital multipliers after the ADC). The latter are no better than raising the exposure in post and they don’t provide the noise-reduction features of the ISO boost.
It matters because dynamic-range features such as i.Dynamic and Active D-Lighting change the exposure.
Thus the raw file's data.
What do you call the wavelength range a sensor is sensitive to?
The Bayer filter lets reddish, greenish, and bluish light through. So that is, in effect, the color space of the sensor.
Break the Bayer filter off and infrared can affect the sensor too.
So it's a camera color space with no white balance.
Black is no photon charge, white is a saturated charge. The charge itself is colorless—greyscale, basically. The Bayer filter creates R, G, B numbers, and a color is computed out of those.
Nope, ISO doesn't drive an (electrical) gain. ISO is just a number.
(I was under the same assumption earlier, that ISO defines the sampling size of the analog signal of a well. But it doesn't, I was told by a guy who really seems to know this stuff… I need to find the conversation for you, because the details are over my head.)
Edit: Definition of ISO: "A raw file has no lightness, it is just exposure measurements."
"ISO defines the relationship between exposure and lightness such that an exposure of 10/ISO lux seconds should result in an object rendered with the lightness for 18% grey."
"So there is no connection between ISO and electronic amplification. Just a number."
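Taking that quoted definition at face value, it is pure bookkeeping: the standard only fixes which exposure should come out as mid-grey, and a couple of lines of Python can show the arithmetic (no electronics implied):

```python
# Per the quoted definition: an exposure of 10/ISO lux-seconds should
# render as the lightness of 18% grey. Nothing here is about gain.
def midgrey_exposure_lux_s(iso):
    return 10.0 / iso

for iso in (100, 400, 1600, 6400):
    print(f"ISO {iso:>5}: {midgrey_exposure_lux_s(iso):.5f} lux·s maps to 18% grey")
```

So on paper, raising the ISO number just lowers the exposure that is supposed to come out as mid-grey; how a camera achieves that mapping internally is a separate question.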
Yes, agreed, and in the raw file there is a number that records the ISO.
So the raw developer knows which lightness should be coupled to the numbers.
White balance? True, agreed: there is no WB in the raw file's latent image; that's bound to the camera settings in the EXIF data (also part of the raw file).
The color space set in camera is for the out-of-camera JPEG, not for the raw file.
Do you know the theory of the latent image on the photo drum of a laser printer?
Numbers are converted into a laser-beam strength that writes on a charged, turning drum.
The laser light discharges the surface charge, "burning" a latent image into it.
Four of those drums turn in sync, each catching K, C, M, or Y toner.
The toner is mixed with a carrier (magnetic material with a coating), and together they are called "developer".
Only the correct color of toner in each of the four developer units will produce the correct image, visible to us on paper. If I stop the process, every drum has part of the image stuck to its charge. If one of the charges is off, the end result is off. Black balance, in this case.
Back to the raw file: the numbers include a location for every pixel, so there is a latent image in a raw file. Every pixel of the sensor's resolution has R, G, B, G numbers attached.
If you process only the grid, you see an unfinished latent image, like the four drums before the toner is attracted to the surface.
But yes, if you strip the metadata and EXIF data from a raw file, what's left is just exposure: charge levels, controlled by the shutter and aperture.
As I said, only the things that affect the electric charge coming out of the sensor matter. Photons affect the charge and exposure affects the photons. ISO gain changes the signal before it reaches the ADC, so that matters as well.
I believe we are in agreement. I also believe everything I said in my prior post was correct.
The RAW file records the raw sensor data. Color information is not stored there. But, yes, if you know the characteristics of the sensor, you can use that to determine how to convert those numbers to colors. The conversion is as good as the sensor characterization.
I think of a colorspace as a way of mapping a value to a precise, specific color. Mapping a RAW number to a color is not precise in the same way. It depends on the accuracy of the sensor profile. Adobe might use one profile, DxO might use another. Given a value and a colorspace, DxO and Adobe would agree on the color; given the same RAW file, they might not.
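A quick sketch of what I mean. The two 3x3 camera-to-sRGB matrices below are entirely made up, standing in for two vendors' characterizations of the same sensor; the point is that the same RAW triple comes out as two different colors:

```python
import numpy as np

# Two *hypothetical* camera-to-sRGB matrices, standing in for two
# vendors' sensor profiles of the same camera. Values are invented
# for illustration only.
profile_a = np.array([[ 1.80, -0.60, -0.20],
                      [-0.30,  1.50, -0.20],
                      [ 0.00, -0.40,  1.40]])
profile_b = np.array([[ 1.70, -0.50, -0.20],
                      [-0.25,  1.45, -0.20],
                      [ 0.05, -0.45,  1.40]])

raw_rgb = np.array([0.40, 0.55, 0.30])   # one demosaiced sensor value

color_a = profile_a @ raw_rgb
color_b = profile_b @ raw_rgb
print("profile A:", color_a)
print("profile B:", color_b)
# Same sensor numbers, two different colors: the "colorspace" of a
# RAW file is only as definite as the profile you choose to apply.
```

Given a value in an actual colorspace (say an sRGB triple), both programs would agree on the color; given the same RAW numbers, they need not.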
If a sensor profile were an absolute thing, then one could call it a colorspace.
Using ISO to mean an ISO standard definition to relate exposure to lightness, you’re correct. However, in a camera the native ISO setting is implemented using a gain circuit (non-native ISO is done by a digital multiplication after the ADC). For instance, go to https://photopxl.com/noise-iso-and-dynamic-range-explained/ and check figure 3.
No, every pixel has one number attached. It will be either a red, green, or blue channel, depending on the color filter above the pixel. For four pixels, the pattern is typically (but not always) RGGB: one red, two green, one blue.
The way ISO is implemented on most cameras, raising the ISO lowers the exposure that the camera can accept before the highlights clip. Thus the top end of the DR ratio is reduced, reducing DR.
Changing the ISO doesn’t affect the full well capacity of the sensor at all. However, if it results in a change of voltage gain before the ADC, it does affect the level at which the ADC clips the highlights. Suppose the ADC is set to accept a maximum voltage of one volt, which let’s say represents 10,000 photoelectrons in the sensor. Now we double the voltage gain. One volt now represents 5,000 photoelectrons, so the highlights are clipped at 5,000 rather than 10,000.
Not my text, by the way. I'm quoting a guy who tried to explain it to me.
As you see, he talks about gain before the ADC. What is effectively done is changing the charge, i.e. the voltage, to a point where it can be read properly by the ADC.
Say we change the ISO by 4 stops while the exposure stays the same. Then we don't overexpose the sensor (it's still the same exposure); we only overflow the ADC input channels, clipping everything higher than the maximum accepted voltage. That's why the camera's DR gets smaller.
Yes, we agree.
ISO is loosely coupled to the gain circuit in front of the ADC. When we dial the ISO wheel, we tell the camera to change the gain at the ADC input so that the highest exposure maps onto the maximum voltage for the analog-to-digital conversion. (Edit: along the way the photon noise (shot noise) is enlarged together with the "image" data; they are equally amplified by the gain. By shortening the shutter time as the ISO value is raised, the amount of gathered shot noise is less; that's why most sensors are better off with a higher ISO than with a longer shutter time.)
Agreed, and that's why every raw developer has a slightly different color and WB interpretation. It's a "floating" space, not precise, and therefore has no real white point or black point, and no really defined White Balance. That part is done by the raw developer's conversion algorithm, which can differ in interpretation.
Oh, misreading/miswriting on my part. I meant the resulting RGB pixel, not the photon well.
That's indeed one number. When you need four wells (R, G, B, G), the sensor's resolution is (at least) four times the native photo resolution. Setting an m4/3 camera to 16:9 is just a crop; the exposure still uses the full sensor. (I know you know this.)
Thanks for this link.
I skipped most of the formulas because of my formula blindness. Dyslexic.
Those DR calculations are killing me.
But I understood most of it in general. The BSI part was very interesting.
The noise part of his explanation was also interesting. SNR.
I did notice some thresholds on my camera but didn't understand which part of the chain picked up the noise; I had some ideas. Especially thermal noise from long exposure times is a bit…h that post-processing can't handle.
That's why I use ISO 3200 (with DeepPRIME even ISO 6400) in darkness: the (ISO) gain noise and digital ADC noise (banding) are less troublesome for denoising than the long-exposure shot noise, light pollution, stray light, and thermal noise.
One thing is clear: a better lens (better optics) produces fewer stray photons that wander off the real path through the optics and add randomly to the photon count, and it thereby has a higher resolving power. In other words, it handles long exposure times better. The better the glass, the less noise caused by stray photons.
On hot days it's better to raise the ISO than to overdo the shutter time, heating up the sensor and its circuits, which adds more noise to the count. Mostly "red" counts, right? That's why long exposures are reddish in the shadows.
First, other than marketing, in your example there is apparently NO reason to go beyond the native ISO, in this case 6400?
Second, if I accept that (and it does sound logical), how does a person know the "limit" of how high one can go in ISO? I'm looking at my Nikon Df as I type this: the highest ISO listed is 12,800, and then there are H1 and H2. Are those extra values useful for anything other than advertising? And how would I know whether 12,800 is still a "native" ISO, to use your words?
I guess one more question, since this is the PL4 forum: how high can one go in ISO and still expect the awesome noise software to get an acceptable result? Based on what you wrote, I suspect the noise won't get any worse if you go beyond the highest "native" value. Am I correct?
You are still off. I used to think the same thing: four sensor pixels make one image pixel. If I could just rip off the Bayer filters, I could get a grayscale with 4X the resolution.
Nope. This is what demosaicing is about—you interpolate the missing color values. So those four "wells" are still four pixels. And after demosaicing, all four pixels will each have a complete RGB value. For each pixel, two of those channels will be interpolated.
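For anyone curious, here is a minimal bilinear demosaic sketch in Python for an RGGB mosaic. It is illustrative only—real converters use far more sophisticated interpolation and edge handling—but it shows the principle: each pixel keeps its one measured channel and gets the other two by averaging neighbors:

```python
import numpy as np

def box3(a):
    """Sum of each pixel's 3x3 neighborhood (zero-padded at edges)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in (0, 1, 2) for j in (0, 1, 2))

def demosaic_rggb(mosaic):
    """Bilinear demosaic of an RGGB mosaic: one measured channel per
    pixel; the two missing channels are interpolated from neighbors."""
    h, w = mosaic.shape
    y, x = np.mgrid[0:h, 0:w]
    masks = [(y % 2 == 0) & (x % 2 == 0),   # red sites
             (y % 2) != (x % 2),            # green sites (two per 2x2)
             (y % 2 == 1) & (x % 2 == 1)]   # blue sites
    rgb = np.zeros((h, w, 3))
    for c, m in enumerate(masks):
        known = np.where(m, mosaic, 0.0)
        interpolated = box3(known) / box3(m.astype(float))
        # Keep measured values; interpolate only where the channel is missing.
        rgb[..., c] = np.where(m, mosaic, interpolated)
    return rgb
```

Feeding it an h-by-w mosaic gives back an h-by-w-by-3 image: same pixel count, with two of the three channels at every pixel invented by interpolation, which is the point being made above.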
There are precise definitions of colorspace and imprecise ones. In the precise sense, there is no colorspace for RAW files; in the imprecise sense, you can choose a sensor characterization to resolve the sensor numbers into colors and call that a colorspace, but I was using the precise definition.
To hammer the point, your camera’s sensor probably doesn’t match any existing sensor characterization and will probably drift with time (the dyes in the filters might break down). A RAW file’s “colorspace” is always going to be an approximate thing.
That’s my take. Manufacturer’s are always coming up with weird tricks, so without having a specific camera and an expert on hand, I wouldn’t swear it’s always the case; for most cameras today, though, it’s probably the case. If someone knows an exception, I’d like to hear.
There’s nothing to say that ISO 6400 is the native max. The max depends on the camera; it could be 1600 or it could be 256,000 (in theory).
Apparently, ISO 50 is also not native, but is done by raising the exposure by a stop and then lowering the brightness in post. The "lowering in post" step is automated and has to be understood by every software tool dealing with the RAW file; software that missed this rule would display the image one stop overexposed. Basically, you don't really gain any dynamic range this way, and you might blow out some highlights.
I notice that page has the statement “As we move up to higher ISO values, noise obviously starts becoming an issue.” Hopefully, those of you who read my document on ISO will understand that the author was actually lowering the exposure on the comparison shots and that that is the source of the degraded images.