How do I view image with corrections from LRC

Your intent is clear enough, but your difficulty in explaining it is exactly the problem. One need only look at the threads on colour management precipitated by the PL6 changes to realise that ones and zeroes are only data, not a picture. At the most basic level, a collection of ones and zeroes could be called “an image” even when they are, in fact, designed to be executed by a processor. Such an image might not be pleasing to the eye, but it is entirely possible to render any data as an image.

The fun part is that there are millions of ways to do this. What is the colour depth (bits per sample)? What are the colour components (RGB, RGBG, CMYK, etc.) of each sample? Which specific colour space are the colour components recorded in (sRGB, Adobe RGB, ProPhoto, etc.)? And that’s just the starting point. That gives you “a picture”, but then you want to modify those values. Every RAW processing application sells itself on how easy/powerful its sliders are. I don’t know enough to speak authoritatively, but I would guess exposure could be handled consistently. Then, as already mentioned, there’s white balance, which is… tricky… and highlights, which are… usually roughly the same… except in PhotoLab (at least).

And so on, and so on, and so on.

I remember my early computing days on a BBC Micro, where I learned about the 8 “colours” and their component red, green, and blue parts. There was one binary digit (bit) used to store whether each of those three was off or on. “101” was red on, green off, and blue on, so “Magenta”. I upgraded to an Acorn Archimedes, which could display (at any one time, a subset of) 4096 colours! More bits per R, G, and B component! I got it! It made sense! (Well, kinda. It was a little weird in how you selected them.) Later came 16.7 million colours: more bits per R, G, and B again. Easy! And then I learned about colour management. Or rather, learned about the complexity of colour management.
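The arithmetic behind those jumps is just bits per component. A quick sketch (the function name is mine, not from any real API):

```python
# Sketch: how bits per R, G, B component map to total colour counts,
# from the BBC Micro's 1 bit per component upwards.

def colour_count(bits_per_component: int) -> int:
    """Total distinct colours given this many bits for each of R, G, B."""
    return (2 ** bits_per_component) ** 3

# "101" = red on, green off, blue on -> magenta
r, g, b = 1, 0, 1

print(colour_count(1))  # 8         (BBC Micro)
print(colour_count(4))  # 4096      (Archimedes palette)
print(colour_count(8))  # 16777216  (~16.7 million, i.e. 24-bit colour)
```

Which is why “16.7 million colours” and “24-bit colour” are the same claim, just counted differently.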

Consider that colour management alone offers many, many ways to present the same data to the end user (or indeed to the next step of a process). Now, what does it mean to say, parametrically, “increase the red by this much”?
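To make that ambiguity concrete, here is a sketch with an illustrative starting value: “increase red by 10%” gives a different answer depending on whether you scale the gamma-encoded sRGB value as stored in the file, or the linear-light value it represents. The transfer functions are the standard sRGB piecewise curves; the 0.5 input is just an example.

```python
# Sketch: the same "add 10% red" applied in two different domains.

def srgb_to_linear(v: float) -> float:
    # sRGB decoding (piecewise, per the sRGB specification)
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v: float) -> float:
    # sRGB encoding (inverse of the above)
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

red = 0.5  # roughly the 8-bit value 128, as stored in an sRGB file

naive = red * 1.10                                   # scale the stored value
linear = linear_to_srgb(srgb_to_linear(red) * 1.10)  # scale the light itself

print(naive, linear)  # two different "10% more red" results
```

Neither answer is wrong; they are answers to two different questions, which is exactly the problem with “parametric” edits.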

If you want to really lose faith in technology, read up on character encoding. I know smart people who claim “plain text” is the perfect, future-proof way of storing written language. Except, there’s no such thing as “plain text”. Even these very words you are reading now need quite a bit of interpretation (by many pieces of software) to make sense to anyone. We know it works well here because áèïōû. (Acute accent, grave accent, umlaut, macron, and circumflex.)
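You can see the “plain text isn’t plain” point directly: the same five characters become a different number of bytes under UTF-8, and one of them has no byte at all in Latin-1.

```python
# Sketch: "plain text" is not plain - the same characters are
# different bytes (or unencodable) under different encodings.

text = "áèïōû"

utf8 = text.encode("utf-8")
print(len(text), len(utf8))  # 5 characters, 10 bytes

try:
    text.encode("latin-1")
except UnicodeEncodeError:
    # ō (o with macron) simply does not exist in Latin-1
    print("no Latin-1 byte for ō")
```

Every layer that touches those bytes has to agree on the encoding, or you get mojibake instead of accents.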

A Linear DNG is so-called because it contains what most of us would consider to be a ‘regular’ grid of (RGB) pixels, as distinct from RAW files (generally including camera-native DNG files) which (generally) contain individual R, G, and B pixels in a Bayer (or Foveon) matrix. So in that sense, it is like any other RGB format — JPEG, PNG, GIF, or… TIFF.
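The distinction is easy to picture in code. This is an illustrative sketch (a 4×4 RGGB pattern and dummy values, not data from any real file): a Bayer sensor records one colour component per photosite, while a Linear DNG holds a full RGB triple per pixel.

```python
# Sketch: Bayer mosaic (one sample per photosite) versus the
# demosaiced RGB grid you would find in a Linear DNG or TIFF.

WIDTH, HEIGHT = 4, 4

def bayer_colour(x: int, y: int) -> str:
    """RGGB pattern: which single component this photosite records."""
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

mosaic = [[bayer_colour(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]
for row in mosaic:
    print(" ".join(row))
# R G R G
# G B G B
# R G R G
# G B G B

# After demosaicing, every pixel carries all three components:
rgb = [[(128, 128, 128) for _ in range(WIDTH)] for _ in range(HEIGHT)]
print(len(rgb[0][0]))  # 3 samples per pixel, not 1
```

Turning the left grid into the right one (demosaicing) is the step a RAW converter does for you, and it is already baked in by the time you have a Linear DNG.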

And from Wikipedia…

In other words, it contains TIFF-like data in a TIFF-based container format. Hence… Linear DNG is effectively the same as TIFF to an end user. The differences are in the metadata.
