Performance is a fairly subjective matter unless we introduce some kind of reference for how much time a given task is allowed to take. Measurements need a baseline that provides repeatability and objectivity (the possibility of getting comparable results when someone else runs the test). As each system (hardware, file/source, customizing and process steps, to mention a few) varies, each system will provide a different baseline. How much time counts as long or short is another thing that is hard to standardize, though.
- Import: the image update depends on the preset applied on import. As a reference, I'd use "NoCorrection" here as well.
- Export: depends on what has been customized in an image. I propose to use "NoCorrection" as the baseline for all time measurements (a minimal timing sketch follows this list).
- Customizing: difficult to measure, too many options.
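To make the baseline idea a bit more concrete, here is a minimal stopwatch-and-log sketch. It assumes the export itself is started and finished by hand in DPL; the script only times the run and stores the result so different machines or presets can be compared against a "NoCorrection" baseline. The file name, preset labels and log layout are my own choices, not anything DxO provides.

```python
import json
import time
from pathlib import Path
from statistics import mean

LOG_FILE = Path("export_times.json")  # hypothetical local log file


def time_run(preset: str) -> float:
    """Stopwatch for one export run; press Enter when the export starts/finishes."""
    input(f"[{preset}] press Enter when the export starts... ")
    start = time.perf_counter()
    input(f"[{preset}] press Enter when the export finishes... ")
    return time.perf_counter() - start


def log_run(preset: str, seconds: float) -> None:
    """Append one measurement to the local JSON log, keyed by preset name."""
    data = json.loads(LOG_FILE.read_text()) if LOG_FILE.exists() else {}
    data.setdefault(preset, []).append(seconds)
    LOG_FILE.write_text(json.dumps(data, indent=2))


def report(baseline: str = "NoCorrection") -> None:
    """Print each preset's mean export time relative to the baseline preset."""
    data = json.loads(LOG_FILE.read_text())
    if baseline not in data:
        print(f"No runs logged yet for baseline preset '{baseline}'.")
        return
    base = mean(data[baseline])
    for preset, times in sorted(data.items()):
        avg = mean(times)
        print(f"{preset:20s} {avg:7.1f}s  ({avg / base:4.2f}x baseline)")


if __name__ == "__main__":
    preset_name = input("Preset used for this run: ").strip() or "NoCorrection"
    log_run(preset_name, time_run(preset_name))
    report()
```

Run the "NoCorrection" case a few times first so the baseline is stable before comparing other presets against it; the same image set should be used for every run.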
I’ve been running performance tests with the same set of images for many years, mostly measuring export times. In the end the results did not really matter, because I can do something else in parallel while an export runs, so I dropped these tests completely.
I also noticed that DPL is much less responsive than DxO OpticsPro (v11) was. I suppose this is due to changes in how the rendering engine was programmed (scheduling policy and the actual calculations). I also found that DPL runs about equally well from the built-in HDD as when booted from an external SSD, which seems to compensate for the slower interface (USB or Thunderbolt in my case) to a certain degree.
A few months ago, I got a 2019 iMac that now replaces my 2012 iMac. More cores and an SSD instead of an HDD made a nice difference, and DPL got “back to normal”. As an additional benefit, I no longer compare DPL to OpticsPro, because I did not install older versions of the software on the new machine.
Nevertheless, DPL is still slower than OpticsPro used to be, and I’d really love it if DxO could bring DPL back up to that speed, but alas, there is not much we can do but hope for the best.