DxO PL 5 support of Nvidia GTX 950M

I understood from the DxO PL 5 system requirements that a minimum of an NVIDIA GTX 1060 is required.

On my laptop I have an NVIDIA GTX 950M.
The PL5 “Edit → Preferences → Performance” tab shows the 950M as partially supported.
Can we get more details on what is meant by “partially supported”?

Are there any known issues or errors when running PL5 on a 950M?
Is anyone else running PL5 or PL4 on a GTX 950M?

Thanks for the feedback.

The GPU is only used for DeepPRIME noise reduction processing. You can test yourself whether that graphics card speeds up the processing of DeepPRIME.

Go to the Performance tab in Preferences and set DeepPRIME Acceleration to “Use CPU only”. Process a raw image using DeepPRIME and keep track of how long it takes.

Then go back into Preferences, select your card under DeepPRIME Acceleration, and process the same file again with DeepPRIME. If PhotoLab is able to use your graphics card’s GPU, it should process that image more quickly than in the previous test. Processing DeepPRIME with my GTX 1050 Ti, which is also below the minimum recommended card, runs around 3 times faster than using the CPU only.

Mark

I have a Windows 10 with an NVIDIA GeForce GTX 750 Ti graphics card. This is also a card that is flagged as partially supported.

When DeepPRIME was being introduced, there were suggestions that with partially supported cards you might experience problems, such as PhotoLab crashes or issues with the exported image. I’ve used DeepPRIME quite often since it was introduced and I’ve not experienced any problems.

I’ve just done a quick comparison on a single image with CPU only selected versus the GTX 750 Ti selected. The export time for a highest-resolution JPEG was 35 s with the graphics card and 75 s with CPU only.

In addition to reducing the processing time, selection of the graphics card also makes my PC more usable while the export processing is going on. With CPU only selected it is very sluggish when trying to do anything else in parallel.


I have a 2012 MBP with an NVIDIA GeForce GT 650M; I am astonished:

  • that PL5 recognizes it, while PL4 ignored it
  • that it is probably below the minimum you mention.

I ran a test:

  • processor: 22 s
  • GeForce GT 650M: 21 s

Well, not much use… I am waiting for my new ARM MBP!

I assume you were testing DeepPRIME and not PRIME, correct? I’m actually surprised it recognized that card.

Mark

My bad, that was a simple test; with DeepPRIME it takes much longer and the results are unexpected!

[screenshot: export times]

From top to bottom: processor only, Intel HD Graphics 4000, NVIDIA GeForce GT 650M.
Well, I see I hadn’t restarted DxO.
After restarting: 16 min 41 s with the NVIDIA GeForce GT 650M :face_with_symbols_over_mouth:
I can’t wait for my future ARM MBP…

PS: the DeepPRIME acceleration setting says the automatic choice is processor only; that’s clear.


Thanks for your feedback and for sharing your testing results.
Here are my results with my five-year-old ASUS laptop with an i7, 12 GB memory, and an NVIDIA GeForce GTX 950M.

Test done by exporting a Nikon D7500 RAW file to a full-resolution 16-bit TIFF image:

1: With “DeepPRIME acceleration” set to “Use CPU only”:
DxO Denoising = Off : 5 sec
DxO Denoising = HQ : 6 sec
DxO Denoising = PRIME : 48 sec
DxO Denoising = DeepPRIME: 2 min 5 sec

2: With “DeepPRIME acceleration” set to “NVIDIA GeForce GTX 950M”:
DxO Denoising = Off : 5 sec
DxO Denoising = HQ : 6 sec
DxO Denoising = PRIME : 47 sec
DxO Denoising = DeepPRIME: 32 sec
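
The speedup claimed below can be checked from the times above with a trivial calculation (the script is only illustrative; the numbers are the ones reported in this post):

```python
# Back-of-the-envelope speedup check using the DeepPRIME export
# times reported above (CPU only vs. GTX 950M), plus PRIME for comparison.
cpu_only_s = 2 * 60 + 5   # DeepPRIME, CPU only: 2 min 5 s = 125 s
gpu_950m_s = 32           # DeepPRIME, GTX 950M: 32 s
prime_s = 47              # PRIME (CPU-bound): 47 s

speedup = cpu_only_s / gpu_950m_s
print(f"DeepPRIME speedup with GTX 950M: {speedup:.1f}x")  # ~3.9x, i.e. about 4x
print(f"DeepPRIME on GPU faster than PRIME: {gpu_950m_s < prime_s}")  # True
```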

My results indeed indicate that the GPU only has an effect on DeepPRIME, and that DeepPRIME runs about 4x faster with the 950M on my laptop.

I also see that, using the 950M, DeepPRIME runs even faster than PRIME!

So, according to my test results: when denoising is needed, I should run DeepPRIME with “DeepPRIME acceleration” set to “NVIDIA GeForce GTX 950M”.

I assume DxO states the NVIDIA GTX 1060 as the minimum because they did not test GPUs below the 1060. Running a lower GPU model may work, but at our own risk.

A popular NVIDIA card for running DeepPRIME is the GTX 1050 Ti, which is also below the performance level of the GTX 1060 “minimum”. The 1050 Ti was inexpensive, usable with a smaller power supply, and easily available before the chip shortage and supply-chain issues. On my older midrange i7 machine, DeepPRIME processing went from 68 seconds using the CPU to 20 seconds using the card’s GPU.

Mark