GPU Not Maxed Out

Hello. I am new to DxO PureRAW 2. I am using it on a Windows machine with a Ryzen 3900 CPU and an RX 5600 XT GPU. I use Photoshop, with PureRAW 2 as a plug-in.

I noticed that when running PureRAW on 136 Fujifilm X-H2S pictures, neither my CPU nor my GPU is maxed out. I have GPU acceleration enabled, yet the GPU mostly runs at only 15-30%.

I have 2 questions:

  1. Why is the GPU not being maxed out? I would think if PureRAW 2 maxed out the GPU, then the process would be much faster.
  2. Would a newer GPU like the RTX 4080 greatly shorten the running time? My current RX 5600 XT has a PassMark G3D score of 13,822. The RTX 4080 that I’m thinking of getting has a G3D score of 35,207.

Thanks in advance for your replies.

Welcome to the forum, @user3560

PureRAW does what it does because it is a subset of PhotoLab: the developers left a few things out in order to get a no-frills, easy-to-use application.

Processing load depends on how you set up processing, including things like distortion correction and noise reduction. Each of the features creates load for the CPU, the GPU, or both, and that mix shows when you check the loads of the processing units. If the CPU is not maxed out, it might be waiting for results from the GPU… and there is not much we can do about that.

What if you export the same images with a PL6 trial? I suspect PR isn’t making the best use of your resources.

Different apps do things differently. If maxing out the CPU were the goal, you could use PRIME noise reduction. In DPL you can also set the number of images to be processed in parallel; DPR cannot do that and seems to process images one at a time.

I’m not suggesting you use PRIME just to max out CPU usage. Rather, export the same batch of images from PL6 with settings as close as possible and see how long it takes compared to PR2.

If it’s significantly slower with PR2, then it’s an issue on PR2’s side and buying a new GPU is likely not the best choice.

Keep in mind that processing an image is done by both the CPU and the GPU, and it also depends on the noise reduction you’re using. Maxing out a CPU and/or GPU doesn’t necessarily mean that things go faster, just that they can’t run any faster at that moment.

If I recall correctly, most of the processing in DxO products is done by the CPU. Only DeepPrime (and the newer DeepPrime XD in PL6) fully utilizes the GPU.

This graph is from processing 5 raw images with the DxO Standard profile and DeepPrime. The only difference is that I use PhotoLab 6, but the general processing should be the same.


As you can see, the GPU is used 100% but only for some of the time and the CPU is never stressed to its max.

Now, the same images but with PRIME, which utilizes the CPU:


This screenshot was taken while processing was still running. As you can see, there is no GPU usage (the low spikes are not from PL) but 100% CPU usage.

As for your second question: will the RTX 4080 greatly reduce the running time? That depends: if you use DeepPrime it will shorten the time as the RTX 4080 is a lot faster. This difference will be even bigger if you use the newer DeepPrime XD in PhotoLab 6.

If you don’t use DeepPrime, getting the RTX 4080 will only drain the money from your wallet.

There’s a benchmark sheet on this forum mentioned here

The RX 5600 XT is slightly slower than GTX 1070Ti. If you look at lines 170 and 171 in the Excel sheet, you’ll see the GTX 1070Ti and the RTX 4080 processing the same images. Please note that they processed these images in PhotoLab 6 using the new DeepPrime XD but it gives you an idea of the performance increase. It’s up to you if you find it worth the investment :slight_smile:

But don’t forget that there is life beyond DxO, and it depends on what other programs you work with.
All the AI tools coming in the next few years will stress your GPU, too.
So value for money over time matters :innocent:

@RvL I believe that this is the case, and the Tensor cores (on the Nvidia cards) have been mentioned from time to time.

So the processing is done by

  1. CPU to “develop” and apply edits to the image
  2. CPU to handle task scheduling and communication with the GPU essentially for the elements relating to DP and DP XD processing
  3. GPU elements to handle denoising as per the DP and DP XD AI model, under the control of the CPU
  4. CPU to output the final image to the designated device and move onto the next image etc…

Without an approved GPU, all noise processing (i.e. PRIME, DP and DP XD) will use the CPU and the timings are long, i.e. the process is slow. With an approved GPU, the times for DP and DP XD will be reduced in line with the “power” of the GPU.

But increasing the power of the GPU will “only” improve the performance of the noise reduction element of the total processing. With a monster card like the 4080 you might be able to reduce the noise reduction element to close to zero, but you will still be left with the rest of the work to be done by the CPU.

However, you are running a Ryzen 3900 (PassMark scores 30602 multi-thread, 2600 single-thread), so your time to process is pretty close to the best you can achieve without spending large amounts of money, which an RTX 4080 will certainly consume: your GPU time will be close to zero, possibly along with your bank balance.

If you are interested, I ran some benchmarks on my machine and on my son’s and grandson’s, and the figures are here: Which graphics card do you use with Photolab? - #27 by BHAYT

My son’s processor is a Ryzen 3950X, which is a bit faster than yours (39012, 2710), my grandson’s is a Ryzen 5 3600X at just over half the speed (17795, 2657), and I have an even older i7 4790K which is a fraction of the speed, with PassMark scores of (8058, 2463).

We ran the Egypt and Nikon tests from the spreadsheet alongside 10 images of mine while I was trying to convince myself to buy a new graphics card. My son’s GPU is an RTX 2080 (Ti?), my grandson’s is an RTX 2060, and mine is now an RTX 3060.

So please run a test sequence: the Egypt test, the Nikon test, and 5 or 10 images of your own with “typical” edits:

  1. With DP enabled
  2. With DP XD enabled
  3. With NR completely disabled

The time differences (1 - 3) and (2 - 3) are the portion of time attributable to the noise reduction element of the processing and will contain a lot of GPU activity and some CPU activity.
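As a sketch of that arithmetic in Python (the timings below are made-up placeholders, not measurements from any real run):

```python
# Hypothetical batch timings in seconds -- replace with your own measurements.
t_dp = 300      # run 1: DP enabled
t_dpxd = 420    # run 2: DP XD enabled
t_no_nr = 120   # run 3: noise reduction disabled

# The differences isolate the noise-reduction portion of the processing,
# which is the only part a faster GPU could shrink.
nr_time_dp = t_dp - t_no_nr       # 180 s attributable to DP
nr_time_dpxd = t_dpxd - t_no_nr   # 300 s attributable to DP XD
print(nr_time_dp, nr_time_dpxd)   # prints: 180 300
```

With placeholder numbers like these, even a GPU that drove the noise-reduction time to zero could not get the batch below the 120 s baseline the CPU needs for everything else.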

That is the (only) element that you could improve with a faster GPU or so I believe.

Why you are seeing so little activity I cannot explain, but I am trying to get statistics from my own system to help my understanding.

Unfortunately there is no-one from DxO on the forum any more so we have no-one to ask except each other.

I will try the PL6 trial. Thanks for the suggestion.

While maxing the GPU and CPU might not be possible for single-image processing…
Allowing DxO to process more than one image at a time should bring a huge speed increase when working on image batches.

On my system, I’m looking at 15% CPU, 20% memory, and about 3% GPU…
Batching multiple images could definitely be parallelized for around a 5X speed improvement.

If DxO has a command line, you could use Python to test the theory by bypassing the UI and just spawning a few instances of DxO as subprocesses to process images in parallel.
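A minimal sketch of that idea, assuming some CLI entry point exists — DxO does not document one, so `PureRAW.exe` and its `--process` flag below are purely hypothetical placeholders:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_parallel(commands, max_workers=4):
    """Run each command as its own subprocess, up to max_workers at a time,
    and return the exit codes in the original order."""
    def run_one(cmd):
        return subprocess.run(cmd, capture_output=True).returncode
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_one, commands))

if __name__ == "__main__":
    raw_files = ["IMG_0001.RAF", "IMG_0002.RAF", "IMG_0003.RAF"]
    # Hypothetical invocation -- PureRAW.exe and --process are placeholders,
    # not a real documented PureRAW command line.
    commands = [["PureRAW.exe", "--process", f] for f in raw_files]
    # exit_codes = run_parallel(commands)  # uncomment if such a CLI exists
```

Threads are enough here because each worker just blocks on its subprocess; the actual parallelism comes from the separate OS processes, not from Python.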

Here’s my actual experience using PureRAW 3, processing raw files with Deep Prime and Deep Prime XD.

The older GPU on a 12th Gen i7 was a GTX 1650; I upgraded to an RTX 4060. I processed 20 raw files: a mix of 24 MP from a Canon R6 Mark II, 32 MP from a Canon R7, and 61 MP from a Sony A7CR. Old GPU: 8:17 to complete (DP) and 18:26 (DPXD). New GPU: 4:30 (DP) and 5:54 (DPXD).

Specs that may matter: 32 GB of RAM, OS is Windows 11 Pro.

What surprised me the most is that using the RTX4060, DPXD was blazing fast, almost as if I was just processing raw files on DP.
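For what it’s worth, converting those times to seconds shows the size of the jump (a quick Python check using the numbers from the post above):

```python
def to_seconds(mmss):
    """Convert an 'M:SS' time string to a number of seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

# GTX 1650 vs RTX 4060 timings for the same 20-file batch
dp_speedup = to_seconds("8:17") / to_seconds("4:30")      # 497 / 270
dpxd_speedup = to_seconds("18:26") / to_seconds("5:54")   # 1106 / 354
print(round(dp_speedup, 2), round(dpxd_speedup, 2))       # prints: 1.84 3.12
```

So the RTX 4060 roughly halved the DP time but cut the DP XD time to about a third, which is consistent with DP XD leaning much harder on the GPU.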