Performance Problem and Fix

FYI, the USB-C and USB 3 tests are the same because they are the same type of connection: USB-C is a connector type, while USB 3 is a transmission standard. There are various versions of USB 3, but on the same computer the difference between the USB-A (“classic” USB) ports and the USB-C ports is likely to be physical dimensions only, notwithstanding that the USB-C connectors will also support Thunderbolt (aka USB 4, from what I hear) with capable devices attached.

What would be an interesting comparison is how Lightroom stacks up against those times when opening those folders for the first time.

I get that PL is applying presets etc., but for me the time that matters is from the first open of the folder until I can see every image and start triaging. While I can do that in PL as it creates the previews, my computer is not an 8-core 40 GB behemoth, so it’s not a great experience, with the fans running wild. Usually when I open a folder in PL, I wait for it to render all the previews, which often isn’t long because I use monthly folders and they therefore don’t contain many photos.

I’m looking forward to upgrading to an Apple Silicon iMac in the future, in the hope that the new architecture will smash out workloads like PL in very short order. Speaking of which, I REALLY HOPE the fine people at DxO are looking closely at the new technology (hardware and software) coming out of Apple and adapting PL (and friends) to work well with it.

This is a good way to work with a “less-than-behemoth” computer. I worked like this until last autumn, when I replaced my 2012 iMac with a current iMac.

The times I published in my post above show how long DPL takes to do whatever it does when a folder is opened. When I test with my set of test images, I get all previews after about 33 seconds, while CPU time is 54 seconds. This means that the load averages out at about 164% in this case.
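As a quick check of that figure (plain arithmetic, nothing DPL-specific):

```python
cpu_time = 54   # total CPU seconds reported for the run
wall_time = 33  # wall-clock seconds until all previews were shown

print(f"{cpu_time / wall_time:.0%}")  # -> 164%, the average CPU load
```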

I also find that an 8-core CPU looks like overkill for customising images in DPL. For export, I’ve limited parallel processing to 4 because processing time is not reduced when I set a higher limit.

I have used PhotoLab since it was called Optics Pro, and the performance problem has got worse with each version. I have a big library, sorted into many folders, but many folders still hold more than 1000 pictures. The incoming folder has around 20k pictures. I decided to get the fastest MacBook Pro, maxed out on everything except the internal SSD (2 TB). I got it last week, installed PhotoLab 3 on the new computer and didn’t change anything else.
The library, which has been indexed using the index folder button (I don’t know where the corresponding DB is placed), is on an external spinning-rust box (bought this year, USB 3). After the index was done, overnight, I started to work in the photo library tab. View is set at 6%. Scrolling up or down with the trackpad pushes CPU use to 130% (!!! on an i9 with 64 GB) and the reaction comes about half a minute later. Unbelievable. I thought I could finally be free of LR, but this is still an issue on this super-duper computer. I reported this issue years ago, hoping that when I made the jump from LR I’d be safe with PL. Come on guys, fix it. I’ll download the PL4 trial to see if that makes a difference.

Hi Ioan,

I suspect it’s not that PL performance is getting “worse with each new version”, but that you are capturing more and more images and expecting PL to wade through them all just to get to a starting point.

The obstacle you are facing is that PL needs to apply all corrections to each image before that image can be rendered and displayed in the Image Browser … and that takes processing effort, i.e. time.

I would never expect PL to work on 100s of images in one folder, and most definitely not 1000s. A better approach is to use a Work-in-Progress folder for processing a smallish batch of images at a time, moving the results to your image storage area … and then repeating for the next batch.

HTH - John M

I would totally agree with this. You should never store that many images in one folder and expect any software to process them all quickly. Computers may do things quicker than humans, but it still takes time to do stuff :wink:

Agreed, as far as PL’s working method goes.
It always re-renders from the .dop file data when you select the folder.
But I disagree that it can’t be better.
What if “indexing” used the thumbnail method, i.e. stored the rendered thumbnail preview in the DB, and updated that thumbnail (only) after leaving an image for the next one while editing, the same as the .dop file update?

Then wading through loads of images would be much faster, and only the highlighted, big-screen one would be affected by the image-processing delay.

Then only one image needs to be calculated, not the whole folder you opened.

Until this is implemented, no more than about 500 images per folder is, I think, the rule.
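To make the idea concrete, here is a rough sketch (the table name, fingerprint scheme and .dop handling are invented for illustration; DPL’s real database internals are not public): keep each rendered thumbnail in the DB keyed by the image path plus a fingerprint of its sidecar, so opening a folder is a cheap lookup and only an edited image triggers a re-render.

```python
import hashlib
import sqlite3
from pathlib import Path

# Illustrative only: schema and names are made up, not DPL's internals.
db = sqlite3.connect("thumbs.db")
db.execute("""CREATE TABLE IF NOT EXISTS thumbs
              (path TEXT PRIMARY KEY, dop_hash TEXT, jpeg BLOB)""")

def dop_fingerprint(image: Path) -> str:
    """Hash the sidecar so an edit invalidates the stored thumbnail."""
    dop = Path(str(image) + ".dop")
    data = dop.read_bytes() if dop.exists() else b""
    return hashlib.sha1(data).hexdigest()

def thumbnail(image: Path, render) -> bytes:
    """Return the stored thumbnail, re-rendering only if the edits changed."""
    fp = dop_fingerprint(image)
    row = db.execute("SELECT dop_hash, jpeg FROM thumbs WHERE path = ?",
                     (str(image),)).fetchone()
    if row and row[0] == fp:
        return row[1]                 # hit: no processing at all
    jpeg = render(image)              # miss: one full render, this image only
    db.execute("REPLACE INTO thumbs VALUES (?, ?, ?)", (str(image), fp, jpeg))
    db.commit()
    return jpeg
```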

While I agree with the first part, I still expect the software to be smart enough to process images as needed, and in a way that does not inhibit fluid operation. I notice that DPL processes all images when I open a folder, and while there are arguments for this behaviour, there are also arguments for smarter ways.

I expect DPL not only to be good, but also to be smart(er): is it really necessary to process all images in a folder at once? Especially during culling? Open a folder of 3000 images by mistake and get trapped while DPL processes previews without end?

I think you know my answer :sunglasses:

We’re aware of this speed issue. It requires some significant rework in PhotoLab, so I don’t expect this to be fixed in 4.x. But in the end we should be able to load folders dramatically faster than currently, even with several thousand images per folder.

This is what happens at the moment… in some scenarios, but it’s actually a mistake and not an actual “need”. RAW images have embedded thumbnails, and they’re very fast to load, so they could be displayed very quickly. Processing the images can come after, once the thumbnails are all displayed.
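For anyone curious how cheap those embedded thumbnails are to extract, here is a minimal sketch using the third-party rawpy library (just an illustration with a made-up filename; it is not what PhotoLab uses internally):

```python
import rawpy  # third-party: pip install rawpy

# Pull the preview the camera embeds in every RAW file. No demosaicing
# happens, so this takes milliseconds instead of a full render.
with rawpy.imread("IMG_0001.CR2") as raw:
    thumb = raw.extract_thumb()  # raises LibRawNoThumbnailError if absent

if thumb.format == rawpy.ThumbFormat.JPEG:
    with open("IMG_0001_thumb.jpg", "wb") as f:
        f.write(thumb.data)  # ready-to-display JPEG bytes
```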

Sounds fair to me.

Only update thumbnails if requested (configurable) and if visible

  • maybe rendering a few images in advance - creating a virtually visible film strip of
    V+2L images, V = number of visible images, L = number of look-ahead/look-back images
    (see the sketch after this list)
    • Make L configurable, maybe in s/m/l steps
    • Interpret sensibly for film strips that are 1D or 2D - no 3D film strips yet, sorry :wink:
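A minimal sketch of that look-ahead window (the function name and the numbers are illustrative):

```python
def indices_to_render(first_visible: int, V: int, L: int, total: int) -> range:
    """Queue only the visible strip plus L look-back and L look-ahead
    images: V + 2*L renders instead of the whole folder."""
    start = max(0, first_visible - L)
    stop = min(total, first_visible + V + L)
    return range(start, stop)

# 3000-image folder, 8 thumbnails on screen, a "medium" look-ahead of 16:
r = indices_to_render(first_visible=100, V=8, L=16, total=3000)
print(f"render {len(r)} images ({r.start}..{r.stop - 1}), skip the other {3000 - len(r)}")
```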

DPL4 has a way of doing things that I discovered lately. Let’s go through the following scenario, which I run without auto load/save of sidecars for better control. Stay in Library mode and do not change the preview size during the sequence if you want to reproduce this.

  1. open folder “A” of nn images
  2. apply a preset to all images
  3. save sidecars (use the menu)
  4. point DPL to an empty folder
  5. delete the cache (use DPL’s settings for it)
  6. point DPL to folder “A”

DPL will now redraw the previews; wait until it is done.
Open settings, note the cache size and close settings.

  7. click on the first image and wait a few seconds
  8. click on the remaining images one by one, as in step 7

Open settings and note the cache size (it has grown).

During steps 7 and 8, DPL recalculates previews and stores them in the cache, even though nothing has changed. Simply clicking on an image launches a recalculation. It’s probably meant to prepare for different zoom sizes, but it just wastes loads of CPU time. Wouldn’t it be smarter to recalculate things only when we change to edit mode?
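To watch the cache grow without opening the settings dialog each time, something like this works (the cache path below is only a placeholder; use whatever location DPL’s preferences show):

```python
from pathlib import Path

# Placeholder path: check the cache location shown in DPL's preferences.
CACHE = Path.home() / "Library/Caches/DxO PhotoLab 4"

def cache_size_mb(root: Path = CACHE) -> float:
    """Total size of all files under the cache directory, in MB."""
    return sum(f.stat().st_size for f in root.rglob("*") if f.is_file()) / 2**20

print(f"{cache_size_mb():.1f} MB")  # run before and after steps 7 and 8
```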

Would it be possible to render the thumbnail in B&W when its RAW file is processed that way?
So, load in the embedded thumbnail and update it? (It’s a small JPEG, so it’s easily converted.)
You don’t see any detail in a filmstrip image, so nothing else is needed. (Showing the name of the preset used would be great :wink: )
(When I use Nik Silver Efex I rename the returned TIFF with the name of the filter I used, to keep track of which one it was.)

The image browser I wrote to complement PL reads the thumbnails out of RAW files and, yes, it is reasonably fast, but there are still occasions when I can scroll too fast and the whole app goes into suspended animation for several seconds.

As you indicate @Lucas, this is not a simple or quick fix.
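One common mitigation is to keep the slow reads off the UI thread; a simplified sketch of the pattern (load_thumbnail is just a stand-in here, not anyone’s actual code):

```python
from concurrent.futures import ThreadPoolExecutor

def load_thumbnail(path: str) -> bytes:
    """Stand-in for the slow part: pulling the preview out of the RAW file."""
    with open(path, "rb") as f:
        return f.read(65536)  # placeholder; real code would parse the RAW

# A small worker pool keeps decoding off the UI thread, so fast
# scrolling never blocks the interface.
pool = ThreadPoolExecutor(max_workers=2)

def request_thumbnail(path: str, on_ready) -> None:
    """Queue a decode; the UI shows a placeholder until on_ready fires."""
    future = pool.submit(load_thumbnail, path)
    future.add_done_callback(lambda f: on_ready(path, f.result()))
```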

Thanks for the answer. The same library in LR has zero performance issues, so I don’t agree with you.

That’s a really good suggestion, I reckon. It would make it very clear which images were yet to be worked on/corrected (which is also another request I’ve seen here recently).

John M

Is that because LR is simply showing the JPG embedded within the RAW file? … Whereas PL is rendering an updated image, complete with all the corrections made to it. That would explain the difference.

John M

Lightroom starts by displaying the built-in thumbnails and then redraws the images, just as PhotoLab does. The notable difference is that Lightroom does this without disturbing other actions, while PhotoLab does not.

The way LR does it (as does some other software) creates endless questions in forums, like “Why is my picture changing after a few seconds?” or “I didn’t do any edits, but the image is changing already”.

This is often followed by the demand for third-party software to do the initial rendering exactly like the camera did, which is basically impossible for anything except the camera vendor’s own software.

I’d rather have it as PL works today.

In FastRawViewer I know exactly what to expect: it either renders the RAW as fast as possible (but ugly), or I can make it show the embedded JPEG.

The LR way of working behind the scenes and changing the preview dramatically after a few seconds confuses users. (Some users, though, discover that the OOC JPEG image is actually better than what they can achieve with the software. And then they demand to improve the software. :roll_eyes:)

LR has a thumbnail database which, I think, is built from the embedded thumbnails by default. Given that you have to import images, users may be more willing to accept the (reasonably fast) reading of thumbnails to build LR’s own thumbnail database. Subsequent opens of a folder read straight from LR’s own, optimised thumbnail database. From memory, if you significantly increase your thumbnail size you will notice the low resolution, and for this reason there is a “rebuild thumbnails” option available.

This is all from memory, but I’m sure I encountered all of this behaviour.

Note that PL also has a thumbnail cache, but as far as I am aware this is not a complete database, rather a temporary cache of “recent” thumbnails kept for performance reasons (since re-rendering every time you scroll would be prohibitive).
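To make the contrast concrete, here is a sketch of the invalidation rule a persistent store could use (purely illustrative; neither LR’s catalog nor PL’s cache actually looks like this):

```python
import os

# Illustrative in-memory store; a real one would be on disk.
_store: dict[tuple, bytes] = {}

def cached_thumb(path: str, size: int, render) -> bytes:
    """Key entries by file identity *and* thumbnail size: reopening a
    folder is pure lookups, while asking for a clearly larger size
    misses and triggers the 'rebuild thumbnails' path."""
    key = (path, os.path.getmtime(path), size)
    if key not in _store:
        _store[key] = render(path, size)
    return _store[key]
```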

As a consumer who has paid for several upgrades, effectively supporting further development, I don’t care about the reason, I just want it to work. I was hoping this money would go toward behaviour similar to LR’s for managing photos (DB, cached thumbnails, etc.). Having a total of 150k pictures, I hope PL does not re-render pictures every time I go to a different folder.

We all have wants. I gave Adobe a lot of money and never got all I wanted, which is why I use PhotoLab, which also doesn’t give me everything I want.

I also want the perfect library module, but I (have taken the time to) understand its current limitations and work within them. Then I pay for upgrades and, via these forums, try to influence future direction.

That sounds like you’re using PL to view your final images (?)

PL is image-processing software (with especially good results from processing RAW files), but it’s not particularly well suited to general image browsing … for reasons such as those you’ve encountered.

John M