PL4 GPU benchmarking?

I downloaded the 4 test images located here:

Those four images total 183MB.

I loaded the 4 into PL4 with DXO Standard settings (default) with DeepPrime noise reduction turned on.

I exported them to disk, reading the images from SSD and writing back to SSD. I have a Ryzen 2700X CPU and an Nvidia GTX 1060/6GB GPU.

The total export took 62 seconds, including a startup delay of 5 seconds or so while the pipeline was being loaded. So I am averaging about 3 MPx per second, which seems similar to what others have posted here. I can see with the GPU-Z application that my GPU is definitely getting used, occasionally maxing out at 100% utilization.
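As a sanity check on that figure, the average throughput can be computed directly (a sketch; the per-image megapixel count is my assumption, since the post only gives the total time and size):

```python
def throughput_mpx_per_sec(total_megapixels, elapsed_s, startup_s=0.0):
    """Average DeepPRIME export throughput in megapixels per second,
    optionally excluding the one-time pipeline startup delay."""
    return total_megapixels / (elapsed_s - startup_s)

# Assuming four ~45 MP images exported in 62 s with a ~5 s startup delay:
print(round(throughput_mpx_per_sec(4 * 45, 62, 5), 1))  # ~3.2 MPx/sec
```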

As a second test, I downloaded this image:
It is 111MB in size. DXO Standard + DeepPRIME took 35 seconds to process that one image, also yielding about 3 MPx per second.

Will a faster GPU make a difference or am I limited elsewhere in my setup?

A faster GPU will definitely speed up the rendering process!


Nvidia GeForce RTX 3080 or Nvidia Quadro RTX 4000?
Which card will be better now and for the next few years, according to the developers?
Which video card resource is most important for PhotoLab: CUDA cores, memory, or something else?

Hi there,
I’ll transfer your question to our dev teams and I’ll let you know.



I also use a 2700X but with a Sapphire Vega64 Nitro+.

With the same 4 RAWs and preset (DXO Standard + DeepPRIME) I get around 33–35 sec (from one M.2 SSD to another).

The 111MB GFX 50 RAW itself takes around 16–18 sec.

Since the Vega is a very different µarch the comparison is a bit off, but it seems nonetheless that the raw FP32 power of the cards translates quite well into an almost proportional performance gain.

It would be interesting, though, whether using FP16 could speed this up on certain GPUs, and whether Tensor Cores get utilized.

OK, here you go. Have a look at this chart to get a direct comparison:

A GPU with higher computational power will translate into better DeepPRIME performance.
You can find more info in our FAQ here:


Interesting GPU performance blog here:

Note the FP32 chart about halfway into the blog. It lists my lowly Nvidia GTX 1060 and many other GPUs against their FP32 performance. If DeepPRIME really does scale linearly with FP32 performance, then this chart can help users determine the relative performance of their cards versus possible hardware upgrades.
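If the linear-in-FP32 assumption holds, a known baseline lets you estimate export times on other cards (a sketch; the TFLOPS figures are approximate published specs, and real-world scaling will be less than perfectly linear):

```python
def estimate_export_s(baseline_s, baseline_tflops, candidate_tflops):
    """Predicted export time on a candidate GPU, assuming perfect
    linear scaling with FP32 throughput (an optimistic bound)."""
    return baseline_s * baseline_tflops / candidate_tflops

# Baseline: GTX 1060 (~4.4 FP32 TFLOPS) takes 35 s on the 111MB image.
# Rough prediction for an RTX 3080 (~29.8 FP32 TFLOPS):
print(round(estimate_export_s(35, 4.4, 29.8), 1))  # ~5.2 s
```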

It would be helpful to have additional verification from DxO on how closely DeepPRIME truly tracks FP32 performance.

Another interesting site for GPU benchmark comparisons is here:


Here is a comparison of GPUs for use with Topaz Sharpen AI in stabilise mode on a Nikon D850 image (DPR Forum). The timings are rough because not everybody used the same image file, but I would expect something similar with DxO. A similar exercise could be performed on this forum?

Topaz Sharpen GPU figs

Can confirm this, although without a stopwatch :wink: and only by feel. Export is definitely faster than with PL 3.3.

Though everyone else here is talking about Nvidia, recent versions of Apple macOS only support AMD GPUs. A Radeon VII rocks on PhotoLab 4, at about 15 seconds per image with many corrections and DeepPRIME (D850 files, whether cropped to about 30 MP or at almost the full 45 MP). Finally some hardware acceleration.

Not sure how Radeon cards do on Windows.


Of course it’s not surprising that Nvidia dominates the conversation since I believe their market share is somewhere around 80% and most people here are probably using one. But it would be great to get more feedback on the performance of various Radeon models on both Macs and PCs.


Some kind of community-generated benchmark for various GPUs would be useful, but it should be based on a standard set of images available to everyone. I’m also not sure there’s appropriate software to capture this information (including CPU, memory, etc.). I’ve tried hand-timing results from PL4 but my precision isn’t very good.


Simply agree which files to download from a web site like

A large file like a Nikon D850 RAW will extend the processing time and improve timing accuracy.

If you use, say, 5 files and simply batch process them with only the default DXO correction and DeepPRIME enabled, then DXO gives a readout of how long the batch took.

People can then report the time and CPU/GPU used.
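To keep the reported results comparable, a fixed report line would help (a sketch; the field layout is my own suggestion, not anything DxO outputs):

```python
def report_line(cpu, gpu, n_files, batch_s):
    """One standardized line per benchmark run: hardware, batch size,
    total batch time, and the derived per-file time."""
    return f"{cpu} | {gpu} | {n_files} files | {batch_s:.0f} s | {batch_s / n_files:.1f} s/file"

print(report_line("Ryzen 2700X", "GTX 1060 6GB", 4, 62))
# Ryzen 2700X | GTX 1060 6GB | 4 files | 62 s | 15.5 s/file
```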


DXO will report the time like this:
DXO V4 DeepPrime times


Well, I’ve already given some numbers for a Radeon RX Vega 64 under Windows 10 a little earlier.
24 files of 32 MPx each with my standard preset + DeepPRIME finish in around 180 seconds.
So it runs somewhere between 3–5 MPx/sec depending on the files, batch length, and corrections applied.

The CPU does still seem to play a role though, since I’ve seen numbers from a 32-core Zen 2 Threadripper with a GTX 1660 crunching a single 50 MPx file in 8–10 seconds.

I happen to have an RX 580/8GB card here, left over from my Bitcoin mining days. I repeated the test on the Egypte 111MB image (—copyright-Corinne_Vachon.raf).
DXO Standard+DeepPrime

Windows 10
AMD Ryzen 2700X CPU, RX 580/8GB: 29 secs to output.
AMD Ryzen 2700X CPU, Nvidia GTX 1060/6GB: 35 secs to output.
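A quick check of those two runs, using only the numbers above:

```python
def speedup(baseline_s, new_s):
    """How many times faster the new configuration completed the same export."""
    return baseline_s / new_s

# GTX 1060: 35 s vs RX 580: 29 s on the same 111MB image:
print(round(speedup(35, 29), 2))  # ~1.21x, i.e. about 21% faster
```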


That sounds reasonable. How about people download these selected images:

  • Images #4 through #8 (wedding couple, 5 RAW NEF images, NOT JPG): nikon_d850_04.nef through nikon_d850_08.nef. These images have ISOs between 1800 and 12800, which should be a good test of DeepPRIME noise processing. NOTE: DxO PL reported ambiguity in which lens was used; I guessed it was an AF-S NIKKOR 105mm f/1.4E ED.

  • The —copyright-Corinne_Vachon.raf image: this is a 111MB image.

Report the total DxO DeepPRIME processing times (IN SECONDS) on this spreadsheet:

(I added times reported by dma.)


Excellent spreadsheet. I went in and updated my numbers and added some of the wedding photos. I also reran my Egypte photo a couple of times and found that each run came out the same at 29 secs, so I updated the spreadsheet to reflect that.


jch2103’s Google Sheets spreadsheet is starting to get populated with data. It really looks like DxO has tuned PL4 for an Nvidia 1060-class GPU. I tried an RTX 2070 I had here and it was only 10% faster. That is a big jump in cost for only a 10% performance improvement.
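The marginal benefit of that 10% can be put in seconds per export (a sketch; the 35 s baseline reuses the single-image GTX 1060 figure from earlier in the thread):

```python
def seconds_saved(baseline_s, pct_faster):
    """Seconds saved per export if a new card is pct_faster percent faster,
    i.e. it finishes the same work in baseline_s / (1 + pct_faster / 100)."""
    return baseline_s - baseline_s / (1 + pct_faster / 100)

# A card 10% faster than one taking 35 s per image saves only about:
print(round(seconds_saved(35, 10), 1))  # ~3.2 s per image
```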


What output file is being produced: JPG, DNG, or TIFF? Does this influence the result?