Help choosing an eGPU for PhotoLab 4

It’s my understanding that PhotoLab 4’s DeepPRIME processing benefits greatly from having a good GPU. This is in contrast with PhotoLab 3’s PRIME, which benefitted primarily from having as many CPU cores as possible. I’d appreciate some guidance, preferably based on real-world speed testing, on which eGPU will most cost-effectively accelerate DeepPRIME performance. In particular, I’m considering OWC’s AKiTiO Node Titan with the RX 580 or RX 5700 XT cards. I’d like to know how these would affect performance on my Mac Pro, a 16" MacBook Pro, and a Mac mini.
Two years ago, I bought an 8-core 2013 Mac Pro for its PRIME processing speed. Now I’m hearing initial reports from others of much better performance than I’m getting, thanks to better GPUs. My export times are virtually the same whether I select Auto, CPU, or GPU for processing. My Mac Pro has dual FirePro D500 GPUs. Would I be better off with, say, a 16" MacBook Pro or a 6-core Mac mini?

With DeepPRIME, GPU power is indeed a lot more important than CPU power. PhotoLab does support eGPUs for DeepPRIME (and only for DeepPRIME at the moment) on macOS. Regarding speed comparisons, we don’t have enough eGPUs on hand, but @wolf should be able to give part of the answer.

Regarding the 16" MacBook Pro versus the Mac mini, the main difference is the AMD GPU already built into the MBP, which gives quite good results on its own. Both support eGPUs if you want more power, so just don’t go for a Mac mini without one.

Hi Jacques,

We have no fair comparison figures for the three GPUs (FirePro D500 vs RX 580 vs RX 5700 XT). These comparisons are very hard to make because they also depend on the CPU, which handles everything except DeepPRIME, so we would have to connect them as eGPUs to a 2013 Mac Pro… which is not officially supported and therefore seems inappropriate for benchmarking.

All I can provide are absolute figures for my personal configuration: a 2018 Mac mini with a 6-core Intel Core i7 CPU and an RX 5700 XT connected as an eGPU (in a Sonnet enclosure). When exporting a 20 Mpx raw image with the preset “Optical corrections only” + Denoising, I get the following times:
HQ: 4 sec
PRIME: 25 sec
DeepPRIME: 7 sec

Note that I usually apply more adjustments to my images (Smart Lighting, ClearView, local adjustments, etc.), which add to the export time independently of the denoising method, so the real-world difference between the three methods is smaller than this test may suggest.

Maybe @Lucas could try with the RX 580 in his iMac for comparison.

Best,
Wolf

Here are my times with a 2019 iMac, Core i9-9900K, embedded Radeon Pro 580.
Same settings and image as Wolf.

HQ: 3 sec
PRIME: 19 sec
DeepPRIME: 8 sec

My CPU is faster, as you can see from the HQ and PRIME times, which could explain the small DeepPRIME difference.

Thanks to Wolf and Lucas. This is helpful. I have found in the past that additional adjustments add little or nothing to processing time (perhaps thanks to my 8-core CPU). I find it interesting that the DeepPRIME difference between Wolf’s RX 5700 XT and Lucas’ RX 580 is only 14%, despite the roughly 140% graphics power advantage of the former. I’m left to wonder whether Lucas’ better CPU narrows the gap, but I kind of doubt it, given that my Mac Pro’s relatively high-end 8-core CPU performs no better than my ancient FirePro D500 GPU. Perhaps the RX 580 represents the price/performance sweet spot for DeepPRIME at this time. Given that PRIME is pretty much the only CPU-heavy operation I do, I might be better served by, say, a 6-core mini or a 4-core 13" MBP with an RX 580, and I could pay for this by selling off my current gear. I wouldn’t normally consider such an exchange, but getting DeepPRIME at 1/2 to 1/4 of my current wait time for PRIME would make it worthwhile, as this is the biggest bottleneck in my high-volume event photography work.
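
For anyone who wants to check my arithmetic, here is a quick back-of-the-envelope sketch using the times quoted above (the script and its helper name are purely illustrative, nothing from PhotoLab itself):

```python
# Rough comparison of the export times reported by Wolf and Lucas above
# (single 20 Mpx raw image, "Optical corrections only" + Denoising).

def pct_slower(slower: float, faster: float) -> float:
    """Return how much slower `slower` is than `faster`, in percent."""
    return (slower - faster) / faster * 100

wolf = {"HQ": 4, "PRIME": 25, "DeepPRIME": 7}   # RX 5700 XT eGPU + 6-core i7 Mac mini
lucas = {"HQ": 3, "PRIME": 19, "DeepPRIME": 8}  # Radeon Pro 580 + i9-9900K iMac

# RX 5700 XT vs Radeon Pro 580 on DeepPRIME: 7 s vs 8 s -> about 14%
print(f"DeepPRIME gap: {pct_slower(lucas['DeepPRIME'], wolf['DeepPRIME']):.0f}%")

# Implied speedup of DeepPRIME over PRIME on each machine
print(f"Wolf:  PRIME / DeepPRIME = {wolf['PRIME'] / wolf['DeepPRIME']:.1f}x")
print(f"Lucas: PRIME / DeepPRIME = {lucas['PRIME'] / lucas['DeepPRIME']:.1f}x")
```

Of course, the real baseline for my 1/2-to-1/4 estimate would be my own Mac Pro’s PRIME times rather than Wolf’s or Lucas’, so these ratios are only indicative.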