Which graphics card do you use with Photolab?

Mine’s a laptop card in an SFF case. An AMD Radeon WX3200, so probably a pretty low-performance model.

But I’m not really into swapping components in my PC; I just want it to work, which it does (with DeepPRIME). Until I disabled the GPU, about 50% of exports were failing.

I agree with you there; I just want it to work too. But sometimes it just doesn’t, so I start investigating why and try to fix it.

Just for the sake of it, you could try the settings I suggest here: Which graphics card do you use with Photolab? - #28 by RvL. I’m describing it from the consumer Adrenalin Edition software, but the Radeon Pro software has the setting too.

The quickest fix, which already resulted in much better stability, is disabling OpenCL and/or reducing the number of simultaneous threads to 1. My current PL settings are OpenCL disabled and 2 threads. Together with changing the GPU Workload setting, this makes for a quite stable system for PL (only the occasional failure) that still performs quite well for an old system.


Thanks for the suggestion but I’m cautious about going into settings in that much depth. I’m liable to make things worse rather than improve them!

I currently have an NVIDIA GeForce GTX 1050 Ti graphics card on a Windows 11 desktop:
3.20 GHz AMD Ryzen 7 2700 eight-core processor
32 GB memory
4 x SSDs plus HDDs

Tested processing and exporting 10 CRAW images (from a 32-megapixel R7) with DeepPRIME. It took a total of 3 minutes to export all 10 as 85%-quality JPEGs.

While exporting from Photolab and from apps like Topaz does take longer than I would perhaps like, I wonder how much I would need to spend to get a worthwhile improvement. An Nvidia 3060 Ti, for example, costs around GBP 450.

Not interested in gaming so photography is the only aspect where I need my PC to work hard.

Unless there is a more cost effective option which will give me a significant improvement in export times I’ll probably live with what I have for the time being.

By the way I don’t seem to have any issues at all with exports from Photolab apart from the performance issue. I have Deep Prime turned on for all images and only use Deep Prime XD for extremely noisy images.

I also have an Nvidia GTX 1050 Ti. From what I’ve been reading, I suspect you would experience a significant decrease in processing time. Based on the numbers you posted, a single image is currently taking approximately 18 seconds to export on your machine. You could reasonably expect that to drop to between six and nine seconds, somewhere between 2 and 3 times faster.
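As a sanity check on the arithmetic above, here is a quick back-of-the-envelope calculation. The 2x-3x speedup is only the estimate quoted in this thread, not a measured benchmark:

```python
# Back-of-the-envelope projection of per-image export time.
# Figures from the thread: 10 CRAW images exported in 3 minutes.

def per_image_seconds(total_seconds: float, n_images: int) -> float:
    """Average wall-clock seconds per exported image."""
    return total_seconds / n_images

current = per_image_seconds(3 * 60, 10)
print(current)  # 18.0 seconds per image

# Projected per-image times if a newer card gives a 2x to 3x speedup (estimate).
for speedup in (2, 3):
    print(current / speedup)  # 9.0, then 6.0 seconds
```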

Mark


@Stefanjan I bought a basic 3060 a few days ago from Scan for just under £330 and it eats my 1050Ti alive for PhotoLab’s DP and DP XD. I believe that @BoxBrownie has just bought a 3060 and is also a Topaz user, so there might be some PhotoLab and Topaz data available.

Please see the tables in one of my posts above. The 3060 is not cheap, but a 1050Ti still costs about £160, which is what I paid for mine when supplies all but dried up. If anyone is in the market for a new graphics card, go for a 2060 for around £260 or a 3060 for around £330.

Better cards are available but start to cost way more per increment of the power that helps with PhotoLab. The figures in the tables I posted are for the following pairings:

The RTX 2060 is paired with the Ryzen 3600X and the RTX 2080 is paired with the Ryzen 3950X, and the figures for certain workloads are included in my tables in an earlier post. Between the 2060 and the 2080 sit the 2070 and now the 3070, with a number of variants at U.K. prices around £600.

With my 1050, 1050Ti and now the 3060 I have never had a single export failure.


I finally got around to finishing the steps you gave me by locating and installing 22.10.3.

I tested a set of 3 images I had used previously. Earlier, I had had to export them several times before I got a complete set without any processing errors. This time, I exported the three images four times (for a total of 12 exported images) with no processing errors. I also checked the images for the artifacts that I illustrated in my OP. These never appeared.

Then I worked on some new images I took today. I exported 9 images processed with the maximum simultaneous images set to 1. I then tried the same 9 images with the maximum simultaneous images set to 2, and then set to 4.

When exported 1 at a time, the export took 2:04. Exported 2 at a time, the export took 1:34. Exported 4 at a time, the export took 1:32. I had no processing errors in any run. I inspected all images after each run for artifacts; there were none.
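For what it’s worth, the speedups in those runs can be worked out like this (the times are copied from the runs above):

```python
# Speedup of the 2-at-a-time and 4-at-a-time runs over the 1-at-a-time run.

def to_seconds(mmss: str) -> int:
    """Convert an 'm:ss' string to seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

runs = {1: "2:04", 2: "1:34", 4: "1:32"}   # simultaneous images -> elapsed time
baseline = to_seconds(runs[1])             # 124 seconds

for n, elapsed in runs.items():
    secs = to_seconds(elapsed)
    print(f"{n} at a time: {secs}s, {baseline / secs:.2f}x vs 1 at a time")
```

Going from 1 to 2 simultaneous images gave most of the gain (about 1.32x); going to 4 added almost nothing.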

While I appreciate all responses to my OP, yours has been the most useful by far. I can postpone the purchase of a new graphics card! Since I don’t game, a new card would mostly have benefited Photolab.


Great! Good to hear it worked for you :+1:


Thanks @BHAYT great information. My Ryzen 7 2700 is a lower spec than yours so I guess my result would not be as good as yours.

I also thought that I wasn’t able to process an image at the same time I was exporting. But since changing my video driver I now seem able to process and export at the same time. I previously had an Nvidia gaming driver installed; the improvement seems to date from switching to the Studio driver.

I tend to cull images in a DAM (imatch) and send a batch of images from imatch to Photolab. If I can export one image while processing the next, it will probably take me longer to process the next image than the export takes, so I won’t be waiting for the export.

The bottleneck will come if I want to export to Affinity for further processing.

I’ll also need to do some research to see what impact if any a faster graphics card would have in other apps especially Topaz AI

I’m also trying to figure out what setting to use for Maximum number of simultaneously processed images. Mine was set to 12 but it says 2 is recommended. Should I reduce it to 2, or does it matter?

Thanks, are you planning to upgrade or staying with the 1050 TI? I’m currently undecided.

My desktop is 6 years old. In order to upgrade to a higher-level graphics card I would also need to purchase a new power supply. My current computer is not upgradeable to Windows 11, and my current graphics card, the GTX 1050Ti, was purchased in early 2021. The original card that came with the machine was not usable with DeepPRIME. I have no plans to invest any more money in this particular machine. While, of course, I would prefer that my exports run faster, the processing times I’m getting with this older card are still acceptable.

Mark


@Stefanjan I’m glad if any information I posted is useful. I have something of a hiatus at the moment: after fitting the new 3060 and running a number of tests, I installed the latest Game drivers and my performance seemed to plunge!

However, when I fitted the card the BIOS was reset for some reason and I have been slowly trying to get the settings back! I think that when I was getting the poorer results with the new drivers I had also “lost” my overclocking setting for the i7-4790K, i.e. it had dropped from 4,400MHz to 4,000MHz, a 10% fall in processor speed. I promptly reverted the drivers, and only then discovered the processor downgrade!

So I now need to do some baseline tests, re-instate the latest Gaming drivers and test again and then consider installing the Studio drivers, hmm.

I own but don’t actively use IMatch at the moment.

The issue with performance is that I cannot measure what part of which element (CPU, GPU) is actually being used at any given time. What I do know is that it is possible to “strangle” the CPU, and that it may contribute more to the general processing than the GPU, which “only” helps with de-noising; take away the de-noising and you take away the need for, and the benefits of, the GPU.

How do you export to Affinity? As far as I know, DxPL always creates a file that it stores and then passes to the application. As a result there is not much additional overhead in letting DxPL go as fast as possible, storing to disk and then submitting those images to Affinity. The advantage is that it puts you in charge of the workflow, and the disadvantage is that it puts you in charge of the workflow.

Consider putting a fast drive in the path between the exported data and any additional application, even if you move images etc. from there to a slower drive later on, but that might have a problematic impact on the DAM (IMatch).

I believe that when I had 4 set, the times were longer! If you have a “monster” of a processor and a good-to-excellent graphics card then more is likely to work well, but run a test, e.g. the same x images through 2, 3, 4, … simultaneous image settings, and determine the best setting for your current system.

I would suggest that 12 is way too many, but it might not make that much difference because DxPL can only start a new thread when it has an opportunity to do so. So reduce it to 2 and see how things progress, then start to increase it and work out whether the per-image average time is falling or actually rising.

One issue I have noticed with a workload of 10 images is that with 4 selected it starts by processing 1, then 2, and maybe gets to 3 before the first two have finished; by the time it is processing 4 images at once, it is already time to wind down!

This effect diminishes the more images there are in the export cycle at any time. If you are starting to export while still editing then stick with 2 and see how you get on.
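The test described above boils down to comparing per-image averages across settings. A minimal sketch; the timings in the dictionary are placeholder values, not real measurements, and you would replace them with your own wall-clock times for the same batch:

```python
# Pick the "simultaneous images" setting with the lowest per-image average.
# Placeholder measurements: setting -> total seconds for the same 9-image batch.
N_IMAGES = 9
measurements = {2: 124, 3: 110, 4: 118}

def per_image_avg(total_seconds: float) -> float:
    """Average seconds per image for one run of the batch."""
    return total_seconds / N_IMAGES

best = min(measurements, key=lambda setting: per_image_avg(measurements[setting]))
print(best)  # with these placeholder numbers: 3
```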

It depends on what part of the GPU Topaz AI uses. I still wonder if buying a 2060 would have been better than the 3060? The 3060 has more Cuda cores and another 4GB memory but less Tensor cores and it is these that I believe DxPL exploits. The 3060 Tensor cores are supposed to be faster but does that actually make up for the reduced number when processing?

Your Ryzen is about twice as fast as my i7 4790K for multi-threading but a bit slower for single-threading: passmark 8,058 multi / 2,463 single for the base 4,000MHz model (so perhaps add 10% or a bit less for the overclock) versus 15,746 / 2,188 for your Ryzen. The figures for the Ryzens that you may have seen in my posts are for my Grandson’s machine, and the “monster” belongs to my Son, where it is used for architectural modelling.
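Reading those passmark pairs as (multi-thread, single-thread) scores, the relative numbers work out roughly as follows. The 10% overclock uplift is the rough estimate from the post, not a measured score:

```python
# Passmark scores quoted above: (multi-thread, single-thread).
i7_4790k = (8058, 2463)      # base 4.0GHz model
ryzen_2700 = (15746, 2188)

OVERCLOCK_UPLIFT = 1.10      # rough +10% for the 4.4GHz overclock (estimate)
i7_multi_oc = i7_4790k[0] * OVERCLOCK_UPLIFT

print(round(ryzen_2700[0] / i7_4790k[0], 2))   # ~1.95x multi-thread for the Ryzen
print(round(ryzen_2700[0] / i7_multi_oc, 2))   # ~1.78x against the overclocked i7
print(round(i7_4790k[1] / ryzen_2700[1], 2))   # ~1.13x single-thread for the i7
```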

So I now outgun you for graphics power after I replaced my 1050 2GB card with a 3060 12GB card but I am underpowered with respect to CPU.

The reason for running the Egypt and Nikon benchmarks is to provide common ground for comparison but just using the figures as a measure of the “power” of the graphics card is only correct if the CPU remains the same.

I wrote this in another post

I have actually purchased two different used video cards in the last 6 months, a GTX 1660 and an RTX 2060, and both work exactly as they should. Just like most of my photo equipment… Saves me a ton of money.

@Sandbo
Corresponds to what I have experienced with the same type of Sony ARW and RTX 3060 Ti.

Not just a new GPU but a new Acer Predator 3000 with Intel i7 12th gen. 1TB Internal SSD 16 GB RAM

The file size doesn’t seem to make all that much of a difference

@BHAYT Thanks again for such a detailed response.

I am tempted to buy a Nvidia Gigabyte RTX 3060 but hesitating in view of the issues you are having with it, which might not affect me as I don’t overclock. And your suggestion that the 2060 might be better.

Separately I have learnt that Topaz will perform much better with a later GPU, so another reason to go for it.

I am currently exporting jpegs from Photolab to two mirrored SSD (Samsung SSD 850 EVO 500GB) using Microsoft Storage Spaces.

I also have two Samsung SSD 970 EVO 500GB M.2 Drives, one with Windows and Apps and the other for cache. I tried exporting from Photolab to my second M.2 drive but that only knocked about 8% of the time.

imatch will easily handle moving masters, versions and buddies between drives but in view of the small speed gain I’ll probably leave on the SSDs rather than move to the M.2.

Where further processing is required, I tend to Export to Application (Affinity) which of course passes a TIFF (which I usually subsequently delete). I then process in Affinity or create a duplicate layer and send to Topaz or NIK before exporting JPEG back to imatch. Depending on whether I expect to do further work I save an Affinity file back to imatch.

I have reduced to 2 simultaneous images and not noticed much difference.

One problem when you start monitoring a system is that it can become excessively compulsive behaviour, particularly when you have invested money to improve things, add new functionality etc. and want to believe that you are achieving the “best bang for your buck” (vindication of your choice), and therein lies a “curse”.

So I installed the latest game drivers and the performance of the new card (RTX 3060) seemed to decline (and appears to have not “recovered” fully since).

Installing the card also “upset” my BIOS settings, and while I am sure that the tests I did and reported were with the machine running overclocked at 4.4GHz, it had fallen back to 4GHz when I revisited the BIOS to “fix” something else, and it was running at the slower speed when I installed the new graphics drivers.

I have reverted to the old drivers and resolved the overclocking issue, so the machine is running just short of 4.4GHz (effectively a 10% overclock), but I cannot exactly repeat my original test figures, although they are very close.

But here are some figures to get your teeth into. They follow the idea that looking only at the overall timing figures can distort the picture somewhat, so I have run tests without any noise reduction to try to separate what might be expected from just the CPU (the edits and rendering the image, i.e. the part of the processing that is CPU only) from the part that is largely GPU with CPU management.

These have all been done with 2 simultaneous exports and it is possible with the more powerful processors that more simultaneous exports may well improve things and lower the overall run times.

Processors fighting it out alone:-

image

I got access to my Grandson’s and Son’s machines yesterday during a visit to Bromley and ran some repeat tests, including the “NO NR” (No Noise Reduction) tests, which gave the table above. The 17,795 etc. row is the passmark score for the processors, with an estimate for my two i7 4790Ks running at 4.4GHz (uplifted from the passmark score for a standard i7-4790K).

The main issue is that my two columns should be identical, but DxPL runs at below-normal priority, which means it is designed to ensure that it does not “crush” other programs; other programs can then push DxPL aside, and one machine in particular is hardly ever idle, i.e. there is lots of background CPU activity.

Given that the power of the machines is roughly 2, 4.4, 1, 1, that is not quite borne out by the resulting elapsed times recorded for the machines, i.e. other factors might help explain what is going on, which I did not have the time or software to investigate, such as:

  1. How busy were the processors (it was using over 50% of the I7’s)?
  2. Would increasing the number of simultaneous exports from 2 actually have worked for the faster machines?

I believe I have addressed an error in the first chart (also reflected in another snapshot later) and changed the i7 I used as a baseline (now consistent with all the other performance-ratio tables).

These show as 1.86, 2.167, 0.77 and 1, using the Main machine (RTX3060) as the baseline (1), i.e. we have a ratio of CPU “power” for this specific task as follows (I have definitely “gone off” Excel!):

image

So the editing on the 3600X was 1.43, 1.37, 1.71 and 1.81 times faster than on the Main i7 for the Egypt, Nikon, BHT(10) and BHT(109) images, respectively.

The whole picture:-

These figures were derived from this sheet

Which leads to the following summaries and “performance ratios”

I believe that the RTX3060 outperforms the RTX2060 in DP XD but not in DP (?), and the RTX2080 outperforms both, but not by a huge margin (the “law of diminishing returns”, maybe), though it may have been able to take on more than 2 exports at the same time.

However, while a 1 second difference per image “only” amounts to 100 seconds for 100 images it is 1,000 seconds for 1,000 images i.e. nearly 17 minutes.
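That scaling is easy to check:

```python
# Scaling a 1-second-per-image saving to different batch sizes.
saving_per_image = 1.0  # seconds saved per exported image

for n_images in (100, 1000):
    total = n_images * saving_per_image
    print(f"{n_images} images: {total:.0f}s (~{total / 60:.1f} minutes)")
```

For 1,000 images that is 1,000 seconds, about 16.7 minutes, as the post says.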

Plus I believe my Grandson’s RTX2060 may have problems, because the fan noise while doing DxPL DP XD was very loud, while on my 3060 and the 2080 (which I believe is a Ti, but I neglected to check) the noise only increased slightly (both machines run with the side off).

The GTX1050Ti is out of its depth!

Standard caveats apply, and more so because of the various versions I created while trying to present the data, and the fact that a rogue formula may have crept into the calculations, or more likely I copied the wrong data to the wrong place.

The spreadsheet is available if anyone wants it.

PS:- it would also be interesting to compare the figures shown in green, i.e. the “total time” figures and derive ratios to show how well they reflect the performance differences (and show that the above was actually a waste of time and you can rely on the overall comparisons after all).

The following table provides labels for the lines and includes a comparison of the per image elapsed times and derived ratios with no split and using the DP test figures which the Google spreadsheet was originally set up to test, plus the newer DP XD figures.

and the two figures I am having real difficulty with are the “Golf course” DP XD figures for the RTX2060 but particularly for the RTX2080?


@Stefanjan from the above it may (or may not) be clear that, apart from some minor niggles for me, the RTX 3060 looks like a good buy unless you can afford an even better card; in my case the cheapest RTX 3070 is currently about £250 dearer than the RTX 3060 I purchased.

Some articles suggest that the RTX 3070 is a close match for an RTX 2080Ti for gaming, but whether that is true for DP XD I cannot say!

So I believe that an RTX 3070 may perform as shown in the tables under the heading of RTX2080, but please take note of the processor (Ryzen 3950X) with which that GPU was coupled. The CPU speed will have some (slight) impact on the DP and DP XD figures because the CPU still needs to keep the GPU “fed” etc. when de-noising.

It will certainly save a bit more time but if you look at the figures it is clear that if I want better performance I need a better processor.

A Ryzen 5700G has a passmark of 24,613 multi / 3,281 single, or a 5700X offers 26,761 / 3,375; either would be a marked improvement on my i7 and a reasonably affordable rig (the £250 difference would buy the processor and a cheap motherboard, to which I then need to add memory), and the processor-only figures would be somewhere between the 3600X and the 3950X (closer to the 3600X, but with better single-threading performance than any of the machines on test).

But that leaves the issue of licences failing when they are protected by a hardware footprint, slightly eased for products that come with 3 licences (PhotoLab Elite) or where I have taken out a “family” licence (5 systems)!

Thanks again for all the helpful advice.

It sounds like the best compromise to get a significant performance increase for a reasonable price for my usage and existing setup would be a card like the Gigabyte NVIDIA GeForce RTX 3060 WINDFORCE OC 12GB Ampere Graphics Card for around £330

@Stefanjan yes, your processor is similar to (slightly slower than) my son’s 3600X, which is coupled with the RTX 2060.

Your 3060 figures should be close to the 3060 figures I have shown, maybe fractionally faster, because mine were taken with the 3060 coupled with my slower (than yours) i7 4790K CPU.

The RTX 3060 is the later GPU and appears to be a bit faster running DP XD, in my tests.

So the options are
RTX 2060 £259.98, should give similar figures to my Grandsons machine (the Ryzen 3600X/2060)
RTX 3060 £328.99
RTX 3070 £629.99

If your motherboard is up to taking a faster processor then you should be able to upgrade that if you are not satisfied, or just want a bit more screen-render and export speed.

All the figures I have presented were run on images on an old SATA SSD connected to one USB3 port or another (on one machine or another) via a SABRENT adapter, and all the outputs were going back to the same drive.

My own machine is
i7 4790K overclocked to 4.4GHz
RTX 3060 (was originally a GTX1050 2GB)
24GB of RAM
2 SSDs (C:\ and E:),
4 HDDs (8TB, 6TB, 4TB and 2TB)
1 NVMe (1TB) used for data, not booting

with additional backup 8TB USB3 drives attached.

The machine is essentially copied by my Test machine which has the same configuration but with a GTX 1050Ti 4GB graphics card and that contains a copy of all files from the Main machine.

Hope that helps and sorry if I have caused any confusion along the way.

Thanks, I’ve ordered the RTX 3060 from Scan.