Extremely slow UI responses when using DeepPRIME

Hello,
when PL (4 or 5) is processing images with DeepPRIME, the UI sometimes becomes EXTREMELY slow to respond. For example, after having started “export to disk”, if the next image is selected in the filmstrip, it often takes some 20 to 40 seconds (!) until the selected image is even displayed. The blue selection highlight in the filmstrip switches instantly, however.

When I then try to apply settings, it often also takes MANY seconds before the changes become visible in the image.

Note that the times vary a lot - sometimes it takes “only” about 15 seconds, and changes are processed rather quickly (though not as fast as normal).

AFAICT, this happens only when using DeepPRIME. Since I don’t have a powerful GPU, I am using the CPU for DeepPRIME (the setting is “use CPU only”, accordingly). I don’t mind if the export then takes minutes; however, it is not acceptable that further use is impossible during exports. The PC uses an i5-10400 (6 cores) and has 32 GB of RAM - even if DeepPRIME fully occupied five of the cores, the UI should still be responsive. (The OS is W10 21H2.)

Is it not rather a problem of resource management by the operating system?
Pascal

If so: where should I look, and what could I change?

Note that both tasks are part of PL, so changing the resources available to this application (PL as a whole) would not change anything about the relation between these tasks…

Addendum: The Resource Monitor shows that during a DeepPRIME export, all six cores (12 threads) are fully occupied (100%). Unfortunately, there apparently is no setting in PL to limit this.

So, to me this appears as a problem with the multitasking within PL.

Well, can’t you leave the DeepPRIME export for when you have finished some files and are taking a break…?


No, that’s not a good solution. Then I would need to mark all the individual files that need exporting, and later collect them again to finally export them. In a folder with hundreds of files, that’s not convenient.

A background queue that continuously processes the pending exports is perfectly fine as a concept. However, it should work correctly.

And I really can’t imagine that the current behaviour is on purpose.

Would the ‘Pick’ (green light) in Customize make it easier to do that? I’d think that would work at least for one folder, but I can see that it might not solve an issue with files in multiple folders. This is a situation where a ‘real’ DAM might provide a solution, albeit with cost/learning-curve complications…

Check your preferences for the number of CPU cores to use. I would recommend you set it to 2 to start with. If you have 8 cores or more then try using 4. I have found that more than this does not make much difference at all.

There is no such preference.

I used the wrong terminology! You need to find the preference that specifies the number of simultaneous images to process. The more CPU cores you have, the more simultaneous images you can process. My personal rule of thumb with PL is that the number of simultaneous images to process should be half the number of CPU cores you have.
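To illustrate that rule of thumb, here is a minimal Python sketch (not PhotoLab code; `simultaneous_images` and `export_image` are hypothetical names) showing how a batch exporter might cap its worker count at half the available cores so the UI keeps some headroom:

```python
import os
from concurrent.futures import ProcessPoolExecutor

def simultaneous_images(cpu_count=None):
    """Rule of thumb from this thread: process half as many
    images at once as there are CPU cores (minimum 1)."""
    cores = cpu_count if cpu_count is not None else (os.cpu_count() or 2)
    return max(1, cores // 2)

def export_image(name):
    # placeholder for a CPU-heavy denoise/export step
    return f"{name}: exported"

if __name__ == "__main__":
    # On a 6-core machine this yields 3 simultaneous exports,
    # leaving the other cores free for interactive work.
    workers = simultaneous_images()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(export_image, ["IMG_0001.RAF", "IMG_0002.RAF"]))
    print(results)
```

The point of the cap is simply that a fully saturated CPU leaves no cycles for the main (UI) process; halving the worker count trades export speed for responsiveness.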


Any way of marking the images and then selecting them later for export as a group can only be a workaround. While I appreciate that people are trying to help (with workarounds I am already aware of), I’d really prefer a solution to the problem itself.

As mentioned before, queueing the images to be processed while already working on the next is a standard method which should “simply work”. Currently, it does not.

This one is set to 2 (apparently the default). That seems to make sense for CPUs with 4-6 cores.

However, please note that the problem also occurs when only a single image is being processed, so it appears that the problem is not related to this preference.

I still assume that something is going wrong with internal multitasking as soon as DeepPRIME is used without a GPU. Maybe something the developers didn’t notice since they all have GPU support, or did not test thoroughly enough: working on the next images while the queue is being processed…

JFI: Reducing this to 1 does not change anything.
Edit: But I think the problem then happens much less often.

@Tilmann

Maybe you’d like to have a look → Which Video Card? - #30 by Wolfgang and see that DeepPRIME makes use of the GPU. – Setting PL to “CPU only” simply doubled my export times
( CPU comparison → UserBenchmark: Intel Core i5-10400 vs i7-8700 ).

If your GPU is supported by PL, it’s better to choose “Auto”.

My PC is almost 4 years old now. For a really low-noise machine, I chose components with low power consumption (CPU 65 W / GPU 125 W max) in relation to output & price, plus a low-noise case & fans.
PL4 computer specs compared to PL3? - #38 by Wolfgang
Getting the PCspec right - #3 by Wolfgang

Specs are always a compromise, but I’m neither a professional nor ‘on the run’. :slight_smile:

With care (and a backup) you could try editing DxO.PhotoLab.exe.config in the install directory.

Reducing the MaxExportProcessingThreadCount setting to less than the number of cores you have might leave more CPU power for the main program. You should also reduce the simultaneously processed image count to 1.

It would likely be best to reduce the priority of the export process, which you can do in Task Manager, but only while it is running. Another setting, DopCorShutdownDelay, might let you keep the export process loaded for much longer, meaning you would only need to reduce its priority once per session.

That said, DirectML, which DeepPRIME uses, may take no notice of priority or thread counts.
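For reference, DxO.PhotoLab.exe.config is a standard .NET application configuration file. A change like the one described could look roughly like this - a sketch only, assuming the settings live in an ordinary `appSettings` section; the actual section names, key names, and value ranges in your copy of the file may differ, so back it up before touching anything:

```xml
<configuration>
  <appSettings>
    <!-- hypothetical value: fewer export threads than cores,
         leaving some CPU headroom for the UI -->
    <add key="MaxExportProcessingThreadCount" value="4" />
    <!-- hypothetical value: keep the export process loaded longer
         so its priority only needs lowering once per session -->
    <add key="DopCorShutdownDelay" value="600" />
  </appSettings>
</configuration>
```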

I notice some lag in responsiveness if a few local corrections have been used. Feels like PhotoLab is getting tired :thinking:

This might interest you:

It was more obvious in previous versions of Windows; now it’s buried deep down in there, but still accessible.

I know that DeepPRIME makes use of (and gets faster with) a GPU, but that’s not the issue here.

Well, that’s interesting. Thank you!

Yes, one can allocate CPU cores to particular programs - with several drawbacks:

  • it’s inconvenient (as you mentioned: buried deep down in the Task Manager);
  • it can be done only when the program is already running;
  • it’s not persistent, after the next reboot the settings are gone;
  • it slows down the whole program without changing anything about the internal multitasking.

So it takes some effort, without providing any improvement.