Content Aware Fill for black edges

That’s not correct. The point of the DxO Wide Gamut color space is not to do what others have already done, but to solve what others didn’t bother to solve: how do you work with colors you cannot see in the least destructive way, knowing you will eventually have to squeeze them into a smaller color space?

Why DxO PhotoLab 6 has moved to a wide gamut, official explanation

Wide gamut monitors can display more vivid color than those with a standard gamut like sRGB. Whether this is useful depends on the content of an image, because under normal lighting conditions, even objects that we perceive as very colorful – for example red tomatoes or a blue sky — fit within sRGB.

However, there are a lot of colors that do not fit into sRGB. These are usually encountered on artificial objects such as brightly colored sportswear or from artificial lighting such as laser stage lights. On a wide-gamut monitor, these colors can be reproduced more accurately than on a regular monitor.

To fully exploit the capacity of a monitor, photo-editing software should use a working color space with a gamut which at least matches that of the monitor. When we created our first RAW converter almost two decades ago, it was safe to assume that monitors would be either sRGB or – for the high-end, color-critical models – AdobeRGB. Choosing AdobeRGB as our working color space seemed to cover all needs, so that is what we did.

Since then, technology has evolved and monitors have improved. With Display P3 monitors used in recent Apple computers, their native red is “redder” than the “reddest red” that DxO PhotoLab 5 could produce. In order to simulate pure AdobeRGB red on such a monitor, the color management system must dilute it slightly and make it less intense by adding a small amount of blue. The much wider working color space of DxO PhotoLab 6 — which comprises both AdobeRGB and Display P3 — solves this and can produce pure, native color on such a display.

The same applies to printing. Certain printers and printing services can produce colors that are outside of AdobeRGB, and DxO PhotoLab 6 allows you to harness their full potential.

At the other end of the imaging workflow is the camera. Camera sensors do not actually have a gamut. Instead, they’re sensitive to every wavelength in the visible part of the spectrum, and high-end sensors only differ from low-end models in that they better approximate the spectral sensitivity of the human eye. Thus, every color in a scene can be observed and recorded in the sensor native color space.

However, when converting from sensor native color into a working color space, as you do when developing RAW files, it may happen that a color cannot be represented. Essentially it has fallen outside of the working color space’s gamut. Having a working color space with a wider gamut therefore allows us to preserve more colors, just as they were recorded by a camera’s sensor. In combination with a wide gamut monitor and printer, the scene can then be captured, processed, and reproduced without losing its original intensity.

Finally, working in a wider color space gives photographers more headroom for adjusting the color in their images. For example, PhotoLab’s ClearView Plus tool can produce certain colors that do not fit within AdobeRGB. But with DxO Wide Gamut they are preserved. You can therefore use the ColorWheel or a Control Point to desaturate these colors, and bring them back into the gamut.

The problem of ‘clamping’ out-of-gamut colors

What does falling outside the color gamut mean precisely? Let’s start by going back to the idea of color values.

The simplest way to describe out-of-gamut colors and how they are managed is to think in terms of 8-bit images. In an 8-bit image, each of the red, green, and blue pixel values can range from 0 to 255. 255/0/0 would be the reddest possible red, while 128/128/128 is mid-gray.

Mathematically, a color would be out of gamut if at least one of the three RGB components had a negative value. But obviously this isn’t physically realizable, as a monitor cannot emit negative light. A color can also be out of gamut if some of the values exceed the maximum. That, again, is not technically possible, as a monitor cannot display values brighter than its limit.
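To make this concrete, here is a rough sketch (my own pure-Python illustration, built only from published chromaticity coordinates) that converts a pure Display P3 red into linear sRGB. The negative components that come out are exactly the "impossible" values described above:

```python
# Derive RGB->XYZ matrices from published primary/white chromaticities,
# then convert a pure Display P3 red into linear sRGB. The negative green
# and blue components show that sRGB cannot reproduce that red.

def inverse3(m):
    """Inverse of a 3x3 matrix via cofactors."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [[(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
            [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
            [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det]]

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def rgb_to_xyz_matrix(rx, ry, gx, gy, bx, by, wx, wy):
    """Build a 3x3 RGB->XYZ matrix from primary and white chromaticities."""
    # XYZ of each primary, each scaled to luminance Y = 1
    prim = [[rx / ry, gx / gy, bx / by],
            [1.0, 1.0, 1.0],
            [(1 - rx - ry) / ry, (1 - gx - gy) / gy, (1 - bx - by) / by]]
    white = [wx / wy, 1.0, (1 - wx - wy) / wy]
    s = mat_vec(inverse3(prim), white)      # scale so RGB=(1,1,1) -> white
    return [[prim[r][c] * s[c] for c in range(3)] for r in range(3)]

# Published chromaticities; sRGB and Display P3 share the D65 white point.
D65 = (0.3127, 0.3290)
SRGB = rgb_to_xyz_matrix(0.640, 0.330, 0.300, 0.600, 0.150, 0.060, *D65)
P3 = rgb_to_xyz_matrix(0.680, 0.320, 0.265, 0.690, 0.150, 0.060, *D65)

def p3_to_srgb_linear(rgb):
    """Linear Display P3 -> linear sRGB through the XYZ connection space."""
    return mat_vec(inverse3(SRGB), mat_vec(P3, rgb))

if __name__ == "__main__":
    r, g, b = p3_to_srgb_linear([1.0, 0.0, 0.0])   # pure P3 red
    print(f"P3 red in linear sRGB: {r:.3f} {g:.3f} {b:.3f}")
```

Running this gives roughly (1.22, -0.04, -0.02): the red channel overshoots 1.0 while green and blue go negative, which is precisely what "out of gamut" means numerically.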

One way of handling out-of-gamut colors is to simply clamp them to the closest allowed values, for example, setting them to 0 if they’re below the low limit, or to 255 if they’re above it. This is what many color management systems do, but they can produce unwanted results.

What do we mean by unwanted results? This ‘clamping’, whereby one of the RGB components is altered while keeping the others unchanged, means altering the hue. A more sophisticated method involves preserving the hue while accepting a reduction in saturation, and this generally yields better results. Unfortunately, even this approach can cause some problems. For instance, textures flatten as the contrasting color within those areas falls completely out of the gamut.
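As an illustration of the difference between the two strategies (a toy sketch of my own, not DxO’s or any vendor’s actual algorithm):

```python
# Two naive ways of bringing an out-of-range 8-bit color back into 0..255
# (illustrative only). Per-channel clamping changes the ratios between R,
# G, and B, which shifts the hue; desaturating toward gray instead scales
# all channels around neutral, roughly preserving hue at the cost of
# saturation.

def clamp(rgb):
    """Clamp each channel independently to 0..255 (hue can shift)."""
    return tuple(min(255, max(0, v)) for v in rgb)

def desaturate_into_range(rgb):
    """Blend toward mid-gray just enough to pull every channel into 0..255."""
    gray = 128.0
    # smallest blend factor t (0 = unchanged, 1 = fully gray) such that
    # gray + (v - gray) * (1 - t) is within range for every channel
    t = 0.0
    for v in rgb:
        if v > 255:
            t = max(t, (v - 255) / (v - gray))
        elif v < 0:
            t = max(t, (0 - v) / (gray - v))
    return tuple(round(gray + (v - gray) * (1 - t)) for v in rgb)

if __name__ == "__main__":
    hot = (300, 40, -20)                 # "redder than red", out of range
    print("clamped:    ", clamp(hot))    # (255, 40, 0) - hue shifted
    print("desaturated:", desaturate_into_range(hot))
```

In-gamut colors pass through the desaturation path unchanged (the blend factor stays at zero), which is why the hue-preserving approach generally looks better, even though it gives up some saturation.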

How DxO’s Reimagined Color Processing Fixes the Problem

For DxO PhotoLab 6, we’ve worked to ensure that all of the luminance details captured by the sensor are maintained throughout your workflow. For the best possible quality, our reengineered algorithm is designed to act in two stages: first when converting from sensor native color to working color, and then when converting from working color to output color.

As the image moves from sensor native color to working color, in order to avoid losing any of the details originally captured, the algorithm smartly analyzes the colors in each image and then desaturates – only if necessary – highly saturated colors by a small amount. This applies even to those inside the gamut, and is done in order to make headroom for those outside the gamut. Thanks to this algorithm, we can therefore produce images that contain all luminance details that were captured by the sensor — and although they appear less colorful than in the original scene, all of the tonality and detail is maintained.
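DxO has not published the algorithm itself, but the idea of trading a little in-gamut saturation for headroom can be sketched with a toy compression curve (all parameters below are invented for illustration):

```python
# Toy model of smart gamut compression (not DxO's published algorithm):
# saturations below a knee are untouched; saturations between the knee and
# the maximum value found in the image are smoothly remapped so everything
# fits into 1.0. Slightly desaturating a few in-gamut colors buys headroom
# for the out-of-gamut ones, instead of flattening them all at the limit.

def compress_saturation(s, s_max, knee=0.8):
    """Map saturation s in [0, s_max] into [0, 1], leaving s <= knee alone."""
    if s <= knee or s_max <= 1.0:
        return s
    # linear remap of the (knee, s_max] range onto (knee, 1.0]
    return knee + (s - knee) * (1.0 - knee) / (s_max - knee)

if __name__ == "__main__":
    s_max = 1.3   # most saturated color in this image, 30% out of gamut
    for s in (0.5, 0.8, 0.9, 1.0, 1.3):
        print(f"saturation {s:.2f} -> {compress_saturation(s, s_max):.3f}")
```

The key property is that distinct saturations stay distinct after compression: two out-of-gamut colors that differed before still differ afterwards, which is exactly how luminance and texture detail survives, unlike hard clamping, which maps them all to the same boundary value.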

The first stage (Protect saturated colors in the Color Rendering palette) has been reworked and improved compared to PhotoLab 5, while the second stage (Protect color details in the Soft Proofing palette) is entirely new.

Most of the time, photographers use wide-gamut monitors which, in combination with software such as DxO PhotoLab 6, allow accurate reproduction of most of the colors contained in images. But when it comes to sharing images, either online or as physical prints, these output media have different gamuts that are typically a lot smaller.

A smaller gamut means that colors can look different between what you see on your monitor and what you get in print, or after exporting to other devices. Those changes in color also mean that delicate textures can be lost. Wouldn’t it be better to take that output gamut into account during editing? This is where soft proofing comes into play.

Soft proofing allows photographers to get an on-screen simulation of what an image will look like when displayed or printed on a certain device. It gives an overview of the outcome by emulating the less saturated primaries of a standard screen, or the inks of the printer and the way they physically react with paper.

The conversion properties are embedded in specific color profiles created for each combination of printers/inks/papers and are usually provided by printing services, device manufacturers, or are created for personal printers.

Once downloaded and installed, users can select a specific profile to be used as a soft-proofing base, and after activating the option in their application, can adapt their color adjustments according to the displayed results in order to achieve the desired image. This can include adjusting color casts, or contrast and luminance issues in areas such as shadows or highlights.

Though it cannot completely replace a hardcopy proof, soft proofing is crucial for saving time and money that would otherwise be wasted in the trial and error of getting a print acceptably close to the original image.

However, soft proofing isn’t a free pass to perfect output. It’s important to remember that soft proofing mode, as with any settings dedicated to color accuracy, requires editing on a calibrated monitor and in a consistent viewing environment.

The ProPhoto RGB color space is far larger and easily contains all of the surface colors. However, while every RGB value in sRGB corresponds to some color, part of ProPhoto RGB lies outside of the spectral colors and corresponds to something that doesn’t exist. While fully saturated magenta, red, yellow, and cyan correspond to actual colors, fully saturated green and blue correspond to imaginary colors. This can make ProPhoto RGB counterintuitive when it comes to editing photographs.

For this reason we decided to design an RGB color space with the widest possible gamut that can be achieved utilizing spectral colors as primaries. The result is a color space that includes close to every color that can be reproduced on the best monitors and printers available today, and encompasses all of Pointer’s Gamut, the 4089 real-world surface color samples collected by Dr. Michael R. Pointer at Kodak Research in 1980 (see https://onlinelibrary.wiley.com/doi/abs/10.1002/col.5080050308).

The DxO PhotoLab 6 working color space uses spectral colors as its primaries. It is big enough to contain all real-world surface colors, and it achieves this without imaginary colors — i.e., every combination of R, G, and B in this color space represents an actual color.

DxO Wide Gamut: An intelligent compromise

We believe that this color space, which is quite similar to the television standard Rec. 2020, provides the best possible trade-off between preserving as much color as needed and allowing users to manipulate color in a way that feels natural and intuitive. Combined with our gamut-squeezing algorithm and soft proofing tools, it allows photographers to reproduce any color they may encounter, as closely as possible to the original, without ever losing details.

Adobe RGB (1998), as the name suggests, was created long ago. sRGB (the “s” stands for standard) is a standard RGB (red, green, blue) color space that HP and Microsoft created cooperatively in 1996 for use on monitors, printers, and the World Wide Web. It was subsequently standardized by the International Electrotechnical Commission (IEC) as IEC 61966-2-1:1999. sRGB is the current defined standard color space for the web, and it is usually the assumed color space for images that are neither tagged for a color space nor have an embedded color profile.

The problem is that some wide gamut monitors and some wide gamut printers can reproduce more saturated colors that fall outside the gamut of both sRGB and even Adobe RGB, while ProPhoto RGB is a loose cannon when it comes to color, since it’s way too wide. It was never meant to be a working color space per se, but rather an archival container for all the colors, even those that don’t exist outside of mathematical abstractions.

Adobe tried to solve this problem of working in a wide gamut while still delivering to smaller-gamut color spaces and working with monitors that may only have an sRGB gamut. They used a modified ProPhoto RGB for working with raw files (ProPhoto RGB primaries + linear gamma), and while that was tolerable as long as you worked in such a large color space, by the time you were forced to squeeze it into sRGB you had issues. You had to use perceptual or relative colorimetric rendering, or one of the few other methods that have been around since sRGB was introduced, which is so long ago that it’s a problem for modern hardware/software workflows.

Color Management “Lost Tapes” Part 5 – Introduction to color spaces

Color Management “Lost Tapes” Part 6 – RGB Working Spaces in ColorThink Pro

Color Management “Lost Tapes” Part 7 – RGB Working Spaces and monitor in ColorThink Pro

Color Management “Lost Tapes” Part 8 – RGB Working Spaces and Output Spaces in ColorThink Pro

ProPhoto RGB, and the Adobe variant of it used in ACR/Lr, was big enough to allow color operations, but you couldn’t really see what you were doing, since most monitors could not display many of the colors, and some are just mathematical abstractions. There are situations where this can be a limitation, as this tutorial I made long ago demonstrates.

Color Management “Lost Tapes” Part 13 – Color Management in Adobe Camera Raw

As you can see from the tutorial, I have demonstrated the problem. While you could preview sRGB, Adobe RGB, ColorMatch, and ProPhoto RGB, you couldn’t really account for other types of color profiles until you went to Photoshop, and there you had to use tools to squeeze the larger color space into a smaller one in a very imprecise way. There were no dedicated tools for that, and the only automated options for conversion between color spaces were the outdated relative colorimetric and perceptual intents.

And you would end up with the same problem: how do you squeeze it into a smaller color space? You could hard proof or soft proof it, but you had only two options in most programs: relative colorimetric and perceptual. And there was no easy way, especially in a RAW workflow, to squeeze out-of-gamut colors into a smaller color space. The closest thing to that was the vibrance slider, but it didn’t operate on out-of-gamut colors relative to the color space you were working in or preparing to output to. It was just designed to tame some of the most saturated colors, especially in skin tones, based on arbitrary criteria unrelated to color management.

When you work with, for example, Lightroom, you are working in Melissa RGB (ProPhoto RGB primaries + gamma 2.2), and you run into other problems that I’ve explained here.

Color Management “Lost Tapes” Part 14 – Color Management in Adobe Lightroom

Not only does the DxO Wide Gamut color space help to overcome the limitations of Adobe RGB and ProPhoto RGB, it also helps to squeeze colors with more pleasing results. As they say: “The first stage (Protect saturated colors in the Color Rendering palette) has been reworked and improved compared to PhotoLab 5, the second stage (Protect color details in the Soft Proofing palette) is entirely new.”

That is new. A problem that only DxO has managed to tackle in this way.

2 Likes

A good and logically structured post. I didn’t watch the videos yet but that will come.

George

1 Like

yes, that is what others did some time ago … the problem is nothing new; even with Adobe RGB you still face it when your display & output target is ~sRGB :slight_smile:

I read the marketing material when it was published, there is no need to bother with quoting it

I think you are assuming too much about something you don’t understand well enough to have a good argument. You are opinionated but not well informed. I have provided the arguments; you have provided an opinion that is factually inaccurate. You are entitled to your own opinions, of course. But not your own facts.

you are totally clueless about how ACR/LR works… what is called “Melissa RGB” is just one of many color spaces that Adobe uses for various purposes, and it is used to render a histogram with ProPhoto RGB primaries and the sRGB curve, not gamma 2.2 … please try to educate yourself in some basics first

Now you are getting nasty. Try facts instead of emotions. And keep your opinions to yourself, or provide evidence of something to support your opinions. Something factual.

you read too much marketing material without any clue how things work in other raw converters… gamma 2.2 in Melissa RGB is a prime example of your “knowledge” :slight_smile: … have a nice day

Just be more specific. sRGB has a gamma curve of 2.2.

George

Yes. sRGB has a gamma curve of roughly 2.2. Melissa RGB, an unofficial name for the color space Adobe used in Lightroom, has the same tone curve as sRGB (roughly gamma 2.2), but its color primaries are those of ProPhoto RGB, meaning a much wider gamut than sRGB. It’s a confusing color space, because ACR, for example, does not use Melissa RGB (ProPhoto RGB primaries + gamma 2.2) but some other space with ProPhoto RGB primaries and a linear gamma of 1.0. And yet the display quasi-soft-proof and histogram follow whatever is chosen in the preferences of ACR (ProPhoto, Adobe RGB, ColorMatch, or sRGB). I demonstrated this in a video that I posted earlier.
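The disagreement above is partly about terminology, because the two curves are numerically close. A small self-contained sketch (standard published formulas, my own code) comparing the official sRGB transfer function with a pure 2.2 power curve:

```python
# Compare the official sRGB transfer curve (IEC 61966-2-1) with a pure
# gamma 2.2 power curve. The two are close in the midtones, which is why
# "sRGB has gamma 2.2" is a common shorthand, but they are not identical:
# sRGB has a linear segment near black.

def srgb_encode(linear: float) -> float:
    """Linear light -> sRGB-encoded value, both in 0..1 (IEC 61966-2-1)."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def gamma22_encode(linear: float) -> float:
    """Linear light -> simple gamma 2.2 encoding."""
    return linear ** (1 / 2.2)

if __name__ == "__main__":
    for v in (0.001, 0.01, 0.18, 0.5, 1.0):
        s, g = srgb_encode(v), gamma22_encode(v)
        print(f"linear {v:5.3f}  sRGB {s:.4f}  gamma2.2 {g:.4f}  diff {s - g:+.4f}")
```

In the midtones the difference is a fraction of a percent, while near black the curves visibly diverge, so both sides of the argument have a point: gamma 2.2 is a good approximation of sRGB, but not its exact definition.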

Neither of these solutions, in ACR or Lr, is ideal, because the ProPhoto gamut is too large for any device, and Adobe RGB is becoming too small for some workflows, like printing on wide-gamut-capable printers or using wide gamut monitors that can display more saturated colors than Adobe RGB.

This leaves the problem of not only which color space is best suited to modern workflows with the fewest compromises, but also how to take very saturated colors in an image, work on them non-destructively throughout the processing of the image, and finally deliver them to the target color space with minimal loss of fine detail in the most saturated areas.

Until DxO tried to solve this, there was no really well-thought-out solution. Most of the solutions were outdated and meant for different eras of hardware and software workflows. The DxO team tried to solve the gamut problem with their Wide Gamut color space, which is large enough to accommodate the colors one would see in real life, but not as wide as ProPhoto, which is best used as an archival color space. And unlike Adobe RGB, the previous contender, the DxO color space is wider and better suited for wide gamut printers and monitors. I’ve demonstrated the problems of Adobe RGB in my videos as well.

The protect color details option in DxO, combined with their soft proofing, is another innovation that is small but significant, since it gives the user a tool specifically designed for that purpose: bringing the saturation down as much as necessary when moving from a wider to a narrower color space, while not desaturating anything other than out-of-gamut colors.

As far as I know, there is no other tool specialized for that purpose. We can do all kinds of things with color, but nothing that is specific to this purpose.

Here, let’s take this image and deliberately oversaturate it.

There are a few tools, like the out-of-gamut warning in Photoshop, and the color range filter in Photoshop allows for out-of-gamut color selection, but it’s quite cumbersome and not robust enough to use efficiently.


And even in soft proofing in Photoshop, you only have the standard old-school methods of clipping or compressing colors.

Nothing like the Protect color details option that DxO has introduced.

Here I saturated the colors too much in DxO and turned on the out-of-gamut warning.

Here you can see it as it appears on the monitor screen.

And then I activate Protect color details and soft proofing for sRGB, and you can see that, as you would expect, the colors become less saturated and change hue to fit into the smaller sRGB space, but details are not lost, and the change happens mainly in the areas where it has to, not across the entire image.

I would have a hard time doing this with other programs. Even Photoshop.

Also someone else confirms this on another thread.

DxO Wide Gamut color space test
Started 7 months ago | Discussions thread

NAwlins Contrarian • Veteran Member • Posts: 8,153
DxO Wide Gamut color space test

A quick test I ran supports the hypothesis that there are visible benefits from DxO’s new “Wide Gamut” raw processing color working space, as compared with DxO’s previous Adobe RGB raw processing color working space.*

The test uses some photos of tulips I took as gamut test images. Previously I found I could not get all the color in DxO PhotoLab 5 (which processes raw files in Adobe RGB working color space) that I got in Lightroom (which processes raw files in the “Melissa” variant of the ProPhoto RGB working color space).

Here is one of the files, as exported in sRGB (just to give you an idea of the image):

So it appears to me that with images of natural subjects, captured with my camera, the new DxO Wide Gamut raw processing color working space is providing a visible benefit.

*DxO confirms the previous use of Adobe RGB for processing raw files, but not TIFFs and JPEGs, at https://support.dxo.com/hc/en-us/articles/6754299074077-What-color-space-does-DxO-PhotoLab-use-.

Full thread here: DxO Wide Gamut color space test: Retouching Forum: Digital Photography Review

I’ve done similar tests with flowers the other day, and it definitely is a visual improvement over other software.




What I understand is that the in-memory image is in ProPhoto with a gamma of 1. Then this image is sent to the monitor. On the way, the image is converted to the monitor’s color space and corrected with a gamma of 2.2, or whatever value. The histogram is also based on these values.
Editing is done in the working color space with a gamma of 1; viewing is done in the monitor’s color space with a gamma of 2.2 or something like that.
This is how PS works. Is this also how PL works?

George

This is true for ACR and Lr, yes.

Yes and no. ACR will change the histogram and the appearance of the image to whatever is chosen in the color settings (that little blue underlined text). But the image will still be treated as ProPhoto RGB primaries with linear gamma. Once the image is opened from ACR into Photoshop, Photoshop will take over color management based on whatever its own settings for color management are.

This does not happen if the image is opened in ACR from Adobe Bridge; then, as far as I know, .XMP metadata is appended to the image, or it can be written into a DNG, and the preview will be based on whatever the settings were in ACR.

Lightroom is confusingly different, as I’ve shown in my video. Lightroom will use ProPhoto RGB primaries and linear gamma to process RAW files, just like ACR, but its histogram will show Melissa RGB, an unofficial name for the color space Adobe used in Lightroom, which has the same tone curve as sRGB (roughly gamma 2.2) but the color primaries of ProPhoto RGB. Also, at least it used to be the case that in the Library module in Lightroom one gets Adobe RGB previews that differ from the ones in the Develop module.

That is close to how it works, but as I’ve explained, if one wants to get nitpicky about it, there are a few minor differences between ACR and LR under the hood.

Well, PS, if you mean Photoshop, is different. It works primarily based on the color management settings chosen in the preferences. It has its own working color space, which can be one of various flavors, and it will also honor the color profile embedded in the image, though this is optional. Once you set up the color management settings in Photoshop (what the default working space will be, what the default soft proofing color space will be), you can also set up how Photoshop treats images that are opened in the program. You have three main options.

Don’t color manage, meaning it will essentially preview the image based on Photoshop’s working color profile, but it will not change the color profile embedded in the image itself.

Convert to working profile, meaning it will convert the image you open in PS to whatever the working profile of PS is, especially if there is a mismatch.

Honor the color profile that is embedded in the image and override the working profile of PS.

You can also deactivate this whole warning box, and PS will do the process automatically in the background based on the settings you chose in Color Settings.


Photoshop can also convert between various color spaces, and when that is used, behind the scenes Photoshop will use the Lab color space, a device-independent conversion space that fixes the numbers so we get a proper conversion. One can also work in Lab color space, and one can also soft proof in Photoshop. So it’s quite a bit more versatile than a typical raw converter.
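As a sketch of what “conversion through a device-independent space” means in practice, here is my own minimal implementation using the standard sRGB matrix and CIELAB formulas (real color management engines do this via ICC profiles, but the principle is the same):

```python
# Sketch: sRGB -> XYZ -> Lab, i.e. moving color into a device-independent
# connection space. Uses the standard sRGB-to-XYZ matrix (D65) and the
# CIE Lab formulas with the D65 reference white.

def srgb_to_linear(v):
    """Undo the sRGB transfer curve (value in 0..1)."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_to_xyz(rgb):
    r, g, b = (srgb_to_linear(v) for v in rgb)
    return (0.4124 * r + 0.3576 * g + 0.1805 * b,
            0.2126 * r + 0.7152 * g + 0.0722 * b,
            0.0193 * r + 0.1192 * g + 0.9505 * b)

def xyz_to_lab(xyz, white=(0.95047, 1.0, 1.08883)):  # D65 reference white
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(c / w) for c, w in zip(xyz, white))
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

if __name__ == "__main__":
    L, a, b = xyz_to_lab(srgb_to_xyz((1.0, 1.0, 1.0)))
    print(f"sRGB white -> Lab {L:.1f} {a:.1f} {b:.1f}")  # ~ (100, 0, 0)
```

Any two RGB spaces can be bridged this way: convert the source space to XYZ/Lab, then from XYZ/Lab into the destination space, which is why the numbers stay meaningful across devices.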

I don’t know as much about PhotoLab’s internal color management, but I think they now use two working color spaces. Working color space: lets you filter images by the legacy (Classic) or DxO Wide Gamut color space.

DXO in their manual for PhotoLab says this:

Working Color Space

DxO PhotoLab (from version 6) uses an extended working color space, DxO Wide Gamut, in addition to the Classic (Legacy) profile, which matches the Adobe RGB 1998 profile and is kept to prevent users from applying unwanted changes to images that they have already processed. The Colorimetric Space subpalette lets you manage images according to their color profile and convert them:

All images processed in versions prior to DxO PhotoLab 6 will use the Classic colorspace, but you can convert them to the DxO Wide Gamut space.

All new images opened in DxO PhotoLab 6 use the DxO Wide Gamut color space, for even richer colors.

Converting images processed in Adobe RGB to the DxO Wide Gamut profile may change some colors and so, depending on how the picture looks, you may need to redo some corrections.

Indeed, soft proofing is available for the DxO Wide Gamut space, as well as the Legacy color space.

Important

Since version 6 (October 2022), DxO PhotoLab is no longer constrained by the color space of the input image, as each one is converted to use the expansive DxO Wide Gamut color space. For most screens with restrained color spaces, out-of-range color warnings may appear in the Soft Proofing tool when correcting images. However, getting rid of these warnings should not be your aim as they do not concern the quality obtained in exported files or prints.

Since DxO PhotoLab 6.3 (February 2023), the DxO Wide Gamut color space applies to both RAW and RGB files (JPEG, TIFF, linear DNG).

So it seems that, unlike Adobe, DxO uses a much smaller working color space. It used to be Adobe RGB, but the problem with that color space is that there are printers and monitors that can reproduce a wider gamut of colors, and that only leaves something like what Adobe does: ProPhoto RGB or some flavor of it. The problem with that is that one is working with colors one cannot see, and while we can see what is happening on screen, the tools (sliders) for color correction still use a very wide, too wide, color space. This makes it easier to push colors out of gamut while working, and harder to tame them for print or sRGB conversion. Adobe RGB is usually used as a compromise, but as I’ve explained, it’s getting long in the tooth (it dates from 1998), and as hardware and software have improved, it is no longer ideal. Hence the best compromise between Adobe RGB and ProPhoto RGB (or a similar flavor of it) is the DxO Wide Gamut working color space. It is indeed the best of both worlds: as easy to work with as Adobe RGB, but wide enough to fit all the colors. An intelligent compromise.

Combined with other internal changes DxO made to how the sliders behave, and with the help of soft proofing and the “protect color details” feature, it makes even the more challenging, highly saturated colors easier to tame for the smaller color spaces used for output.

ProPhoto RGB was always designed to be more of an archival color space than a working color space, meaning the gamut it supports is so wide that some colors only exist as mathematical abstractions; no device can actually show them, and even if one could, the human visual system cannot perceive them, as they lie beyond human vision. As far as I understand it, ProPhoto RGB was used because TIFFs and JPEGs might contain scene colors with a gamut wider than the previously more reasonable choice, Adobe RGB. So ProPhoto RGB was more of a compromise than an ideal solution. I don’t know why a company such as Adobe hasn’t done the same as DxO, but for better or worse they decided to go with ProPhoto RGB. As I’ve said, it’s not ideal as a working color space, but rather as an archival warehouse for TIFFs and JPEGs. RAW files already contain all the data and need to be processed, so it was mostly for the TIFF and JPEG formats.

But Adobe RGB was always a more suitable working color space than ProPhoto RGB. Since Adobe RGB has become too narrow, and ProPhoto was always too wide to be used as a working color space, I welcome the new DxO Wide Gamut color space as an innovation.

Something similar exists in the video world. Blackmagic’s DaVinci Resolve did something similar a few years back with their DaVinci Resolve Wide Gamut. It’s a similarly intelligent compromise to DxO’s.

DaVinci Resolve Wide Gamut is a timeline color space designed to be used with wide gamut footage, but it can also be used with SDR. It provides a universal color space across Mac and PC and makes it easier to work with multiple log-based image formats, such as Arri LogC, all within a single color space. To achieve the correct gamut for both broadcast and web, one can use a broadcast monitor and the Rec. 709 gamma 2.4 color space, and when exporting for web deliverables, use the Rec. 709-A transform. Wider color spaces allow for reproducing more saturated colors and produce less saturation clipping during color correction. Raw data is automatically converted to the timeline color space in color-managed mode.
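The log-format point can be illustrated with a generic logarithmic encode/decode pair; the constants below are invented for illustration and are not Arri LogC’s actual parameters:

```python
# Generic log encode/decode pair (illustrative constants, NOT Arri LogC):
# log curves allocate more code values to shadows and midtones, so scene
# light with a wide dynamic range survives a limited bit depth, and a
# color-managed timeline can decode it back to linear light exactly.

import math

A, B = 99.0, 0.5  # made-up curve parameters for illustration

def log_encode(linear: float) -> float:
    """Linear scene light (0..1) -> log-encoded signal (0..1)."""
    return B * math.log10(1.0 + A * linear)

def log_decode(encoded: float) -> float:
    """Exact inverse of log_encode."""
    return (10 ** (encoded / B) - 1.0) / A

if __name__ == "__main__":
    for v in (0.01, 0.18, 1.0):
        e = log_encode(v)
        print(f"linear {v:.2f} -> log {e:.3f} -> back {log_decode(e):.2f}")
```

Note how 18% gray lands well above 0.18 in the encoded signal: that shadow/midtone lift is what makes ungraded log footage look flat until a color space transform decodes it for the display.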

There is also an attempt to create a standard color space, called ACES.

The Academy Color Encoding System (ACES) is a color image encoding system created under the auspices of the Academy of Motion Picture Arts and Sciences. ACES is characterised by a color accurate workflow, with “seamless interchange of high quality motion picture images regardless of source”.

In practice it’s not as ideal as one would like, but that is a different topic.

1 Like

Thank you for some more details – while we are now way way off … the initial topic. :slight_smile:

2 Likes

That’s true. Hehehe. On the bright side, threads that organically go every which way and attract people end up being more noticeable overall, and perhaps as a consequence the feature request will have a better chance of being spotted by the development team, with more people voting on it. Also, I’ve found that sometimes topics just organically evolve in various directions. Trying to micromanage discussions ends up suffocating many of them. So if a thread goes sideways, it’s not always worse; sometimes it’s just part of organic interaction, and in the end it might be noticed by more people. If we are lucky, more people will also vote for the original suggestion.

That being said, you are right, of course. It is way off the original topic. Better to keep it more focused. And if there is any useful info here on color management and the DxO Wide Gamut color space, maybe people will find it via the search option anyway, at some point in the future. And who knows, they may even vote on the original feature suggestion.

Cheers!

1 Like

At the risk of waking up trolls who have been vomiting into this thread: I don’t think this is a good idea, and the resources should instead be focused on further developing or improving existing functions.

I’d rather see focus stacking in DxO, but I wouldn’t request it, and I use other software for that, because I would rather have DxO focus on, for example, greater utilisation of graphics cards for processing.

6-7 seconds with Deep Prime and nearly 50 seconds with the other two options is not right.

Or how about better noise reduction with non-raw or incompatible files.

If you do a lot with multimedia, you will never have one bit of software that does everything - you might even have to use multiple operating systems.

Regarding phone support that was also mentioned: I am happy with PL and have found any answers I needed here in the forum or via e-mail from DXO. People seem to think phone support does not cost anything to set up and run. I do not need it and just like everyone else, I’m not willing to pay for things I don’t need (also because I am and have always been a broke ass hobbyist).

Yes. I don’t think these are mutually exclusive, but certainly stability, performance, and streamlining of the applications should always be a priority over shiny new features. That said, there needs to be something to justify a new major release, for example DxO PhotoLab 7, 8, 9, etc. Along with improving existing features and working on stability and performance, we can suggest new ideas, and I’m sure the development team at DxO will have to balance requests against their budget, internal plans, and other obligations we may not be aware of.

I would not mind focus stacking in DxO myself, although it seems to me that, much like the panorama stitching and HDR modes that were added to Capture One, it would see very little usage, because more mature, established, and specialized programs already existed, and most people specializing in that kind of work already used them.

If I'm not mistaken, Helicon Focus (Helicon Soft) is a quite popular, specialized tool for focus stacking.

I imagine that for the development team it's a classic dilemma. If they invest in focus stacking in DxO, it's something that probably won't be used by many casual users, while those who specialize in focus stacking will probably demand something as good as or better than specialized software like Helicon Focus. So it would only make sense if the casual user numbers were high enough, or if the feature were better than what is already out there and could attract the specialists; otherwise it's not wisely allocated development effort. I don't know if the [Content Aware Fill for black edges] that I suggested is in that category or not, but if it is, I understand why it would not be adopted. It would not make much sense.

What is your reference for what is right?

I think that is what Topaz does, but it's a very different approach from DxO's, because of the demosaicing aspect of the noise reduction process. In other words, DxO is really a specialized tool for RAW processing, not so much for raster images in JPEG or TIFF format. That seems to be the whole focus and selling point of DxO: the best RAW processing software. And that is reflected in their focus and features.

Topaz took a different approach. My point is that DxO is competing more with the likes of Capture One and Adobe Lightroom than with Topaz, and Topaz is trying to be good in its own niches. It's hard to expect one program to be best in all niches. Adobe Lightroom/ACR's new AI-powered noise reduction is similar in its approach to DxO's: it works only with original RAW files. Is that by accident or by design? Adobe added its AI noise reduction quite late in the game, but they chose to go with RAW noise reduction, and I would think that's because of the benefits of RAW over already-processed non-RAW files when trying to get the best results. Kudos to Topaz for accepting all sorts of formats and still producing very good results; maybe not the best at the moment, but it did pioneer AI noise reduction for commercial use and still tries to specialize in that area. It is also, generally speaking, faster and more optimized, so when you said that you expect 6-7 seconds for DeepPRIME, that is the only reference I can think of. Adobe's is even more processing-intensive than DxO's DeepPRIME XD.

That's true, yes. I also use a whole range of software, because there is no one program that does it all and does it best of all.

Yes. Phone support, in the way DxO approaches RAW processing, would mean covering all the RAW variations from all the smartphone makes and models that come out every few months. Considering the number of users who would actually use DxO to process their RAW smartphone photos, I think it would be a big and expensive challenge for the DxO team, and I am not convinced the user numbers are worth it. No one has shown me reliable data to suggest otherwise, so I'll assume that is the case.

Hi MSmithy. Sorry didn’t mean to be rude by not replying - just not on here that often.
I do use Helicon (after trying out anything else with a GUI that I could find mention of).

For a dev team on its own, I agree… however, a software company doesn't need to develop everything itself. Code can easily* be licensed or purchased (*more of a money issue than a programming dilemma).

Regarding alternative noise reduction, I was thinking more along the lines of Noise Ninja (which became part of Photo Ninja). You could draw rectangles on an image and those areas were used to create a noise profile - it was also possible to download “standard” noise profiles for different cameras and ISO values.
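That profile-from-flat-patches idea can be sketched in a few lines (my own illustration, not Noise Ninja's actual code, and `estimate_noise_sigma` is a hypothetical name): the user marks rectangles that should be featureless, and the noise level is estimated from the pixel spread inside them.

```python
import random
import statistics

def estimate_noise_sigma(image, patches):
    """Estimate the noise standard deviation from user-selected flat patches.

    image   -- 2D list of pixel values (a single channel)
    patches -- list of (x, y, w, h) rectangles assumed to contain no detail,
               so any variation inside them is treated as noise
    """
    sigmas = []
    for x, y, w, h in patches:
        pixels = [image[row][col]
                  for row in range(y, y + h)
                  for col in range(x, x + w)]
        sigmas.append(statistics.pstdev(pixels))
    # Average the per-patch estimates into a single profile value
    return sum(sigmas) / len(sigmas)

# Synthetic flat gray image with Gaussian noise of known sigma
random.seed(0)
sigma_true = 5.0
image = [[128 + random.gauss(0, sigma_true) for _ in range(64)]
         for _ in range(64)]

# Two "user-drawn" rectangles over featureless areas
sigma_est = estimate_noise_sigma(image, [(0, 0, 32, 32), (32, 32, 32, 32)])
print(round(sigma_est, 1))  # should land close to 5.0
```

A real profiler would estimate this per channel and per brightness level (and the downloadable per-camera/ISO profiles were essentially precomputed versions of the same measurement), but the core idea is just this: measure the spread where there should be none.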

Topaz was one of the ones (along with some other ones) I had tried before deciding on PL by the way.

Noise reduction within PL is also what I was referring to regarding processing time. With normal noise reduction, it takes about 50 seconds for an image to be exported; with DeepPRIME XD, it's only 7 seconds. I'm aware it's GPU vs CPU, but it doesn't seem right that normal NR takes more than 7× longer* than DeepPRIME XD (*depending on the CPU/GPU combination).

True, I agree - providing they don't pass the licensing on to users, as we have seen with the latest BorisFX Silhouette, pro software that is basically a Photoshop-style painting tool for video VFX. They offered an AI-powered in-painting/generative-fill feature, but it relies on an external, internet-dependent Stability AI service, requiring users to open an account with Stability AI and use the API they provide, with a limited number of daily credits. So that was a big feature they advertised, and when users discovered the limitations, they were disappointed. I hope that if DxO ever does something similar, they keep that in mind.

I have not used that software, but what you describe sounds very similar to Neat Image and Neat Video, very popular noise reduction tools. I think they offer good results, but nothing like DeepPRIME, although for video it's better to use the noise-profiling method, because AI noise reduction tends to be very taxing on the system.

DxO PhotoLab offers four noise reduction modes:

High Quality (RAW and RGB files): standard noise reduction, applied automatically in real time when the image is opened, for both RAW and RGB (JPEG, TIFF, etc.) files; available in the ESSENTIAL and ELITE editions of DxO PhotoLab.

PRIME (RAW files, ELITE edition): advanced noise reduction, ensuring maximum preservation of details and colors, only for RAW files in the Elite edition of DxO PhotoLab. Demanding in terms of power and computing time, the results of PRIME denoising are visible not in the Viewer, but in a preview window in the Noise Reduction palette.

DeepPRIME (RAW files, ELITE edition): noise reduction based on demosaicing technology and artificial intelligence (deep learning through neural networks), for RAW files only. Available in the Elite edition of DxO PhotoLab, DeepPRIME is demanding in terms of power and computing time.

DeepPRIME XD (RAW files, ELITE edition): evolution of DeepPRIME technology allowing further extraction of details.

I think PRIME is CPU-only, but it's very computationally intensive and was great before the AI era. I think it's just a legacy feature now, so most people use either HQ or DeepPRIME/DeepPRIME XD. There is other noise reduction software that uses AI for JPEG and TIFF as well, like Topaz, so that is the best third-party solution.

I meant the actual inclusion of third-party code within your own by purchasing or licensing it from the creator, not renting out a third-party service. That really would be - not very nice.

Yes, a basic approach, from a user perspective, like Neat Image; the main difference is probably that Noise Ninja allowed more control (tbh I can't recall if there was a limit, aside from becoming unreasonable, to how many different areas of the image you could use to create the profile).

I did mean the High Quality option when I said the normal one. If I have NR on when exporting an image, then it takes about 50 seconds with HQ and 7 with DPXD for a 32MP RAW exported as an 8-bit TIF (quite the opposite of real time, so we must be talking about different things). CPU i7-9700K, GPU RTX 3070, 32GB RAM, and plenty of SSD space.

And why does it always have to be AI? I also think the term is over- and misused.

As far as the best solution is concerned, whether it's 1st, 2nd, or 3rd party, it probably depends on personal taste with regard to the final result.

Yeah, I also think the first two NR options in PL are legacy for older files. However, I think the buttons could then be made smaller, or there could be an option to show/hide legacy features in the UI (I'm always happy not to see buttons I don't need taking up space that I could use for other things).

Strange. For me it's the opposite. Maybe it's a hardware thing. Either way, I use DxO DeepPRIME XD on everything, so I would not mind a shorter export time.

True, it's not really AI. Although I think it's now just used colloquially, the same way "money" is, even when dollars, for example, are not technically money. AI is likewise overused and misused, I agree.

Someday a computer will give a wrong answer to spare someone’s feelings, and man will have invented artificial intelligence. (Robert Brault)

John Searle, circa 1984, discussing artificial intelligence and his Chinese Room analogy.

https://twitter.com/justin_hart/status/1660156084659826690?s=20

Syntax (AI) vs semantics (human).

I asked ChatGPT to tell me the difference between Syntax and semantics

Syntax and semantics are two important concepts in linguistics and computer science.

Syntax refers to the rules that govern the structure of a language. It deals with the way words are arranged to form sentences and how those sentences are structured. Syntax is concerned with the grammatical correctness of a sentence, regardless of its meaning.

Semantics, on the other hand, is concerned with the meaning of words, phrases, and sentences. It deals with the interpretation of language and how words and sentences convey meaning. Semantics is concerned with the meaning of a sentence, regardless of its grammatical correctness.

In summary, syntax is concerned with the structure of language, while semantics is concerned with the meaning of language.
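The same distinction shows up in programming languages, and a toy Python example (my own illustration, riffing on Chomsky's "colorless green ideas sleep furiously") makes it concrete: a parser enforces only syntax, so a grammatically valid but meaningless statement is accepted, while an ungrammatical one is rejected before any question of meaning arises.

```python
import ast

# Syntactically valid Python whose meaning is nonsense:
# the parser accepts it because the grammar is satisfied.
well_formed = "colorless_green_ideas = sleep('furiously')"
ast.parse(well_formed)
print("well-formed: parses fine")

# The same words without the grammar are rejected outright;
# the parser never even gets to ask what they mean.
ill_formed = "colorless green ideas sleep furiously"
try:
    ast.parse(ill_formed)
except SyntaxError:
    print("ill-formed: SyntaxError")
```

Actually *running* the first statement would fail too (there is no `sleep` function in scope), which is the point: syntactic correctness guarantees nothing about sense, and that gap is roughly what Searle's Chinese Room argues a symbol-shuffling system can never cross.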

“ChatGPT is a statistical representation of things found on the web, which will increasingly include ITS OWN output (directly and second hand). You post something picked up from it & it will use it to reinforce its own knowledge. Progressively a self-licking lollipop. Enjoy #AI” - Nassim Nicholas Taleb, @nntaleb

“Computers are stupid. They can only give you answers.” Quote is often attributed to Picasso (I believe in 1968).

He was correct. Computers are near-omnipotent cauldrons of processing power, but they're also stupid. They are the undisputed chess champions of the world, but they can't understand a simple English conversation… unless they have been programmed to fake understanding based on a specific database and rule set.

A friend once showed a Picasso to Picasso, who said, no, it was a fake. The same friend brought him, from yet another source, another would-be Picasso, and Picasso said that, too, was a fake. Then yet another from another source. "Also fake," said Picasso. "But, Pablo," said his friend, "I watched you paint that with my own eyes." [Chuckles] Said Picasso, "I can paint fake Picassos as well as anybody."

ChatGPT is just a database + search engine. Through controlling the data & the algorithms they can make it act however they want.

A.I. my ***

True. I find HQ gives quick, less taxing results for working files. DeepPRIME seems to be calibrated like the lens corrections, so on auto it gives good results, but DeepPRIME XD gives better detail, although sometimes it can also show unwanted artifacts, so it requires more manual intervention.

You say HQ takes 50 seconds when DeepPRIME XD takes 7 seconds?
I don’t understand this …
What is your hardware ?
