Is the DxO dev team working with Apple Silicon yet?

Rosetta 2 will let us use the software we have today on the brand-new M1 chips.
And that is more than enough until something 100% “made for Apple Silicon” can be done.

It cannot be DxO’s problem if people feel the need to rush out and buy the latest (unsupported) technology.

Personally, I will not ask DxO to rush this, but rather to take its time and do it properly, as usual, with peace of mind, in order to get the most out of the new hardware that is coming.

:bulb: To calm things down a bit, maybe we should open another vote to allocate resources for “No rush for Apple Silicon support - please do other things on the backlog first” :question:

2 Likes

I have a first benchmark from my just-arrived entry-level M1 MacBook Air. Processing DeepPRIME in PL4, running under emulation (obviously), it is almost exactly as fast as my 8-core 3.3 GHz 2013 Mac Pro with FirePro D500 GPUs when using the CPU only, and twice as fast when using the M1’s GPU. Can’t wait to see how much faster it gets with a native binary.
FWIW, this was a very quick test, so I have no idea whether there are any issues with PL4 running under emulation.

7 Likes
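Side note for anyone who wants to verify the “running under emulation” part: Activity Monitor’s “Kind” column shows “Intel” for translated apps and “Apple” for native ones. Below is a minimal Swift sketch that does the same check programmatically via NSRunningApplication.executableArchitecture; it’s only an illustration (nothing DxO ships), and it needs the macOS 11 SDK for the arm64 constant.

```swift
import AppKit

// Minimal sketch: list running GUI apps and report whether each one is
// executing as arm64 (native) or x86_64 (which, on an Apple Silicon Mac,
// means it is being translated by Rosetta 2).
for app in NSWorkspace.shared.runningApplications {
    guard let name = app.localizedName else { continue }
    switch app.executableArchitecture {
    case NSBundleExecutableArchitectureARM64:
        print("\(name): arm64 (native)")
    case NSBundleExecutableArchitectureX86_64:
        print("\(name): x86_64 (translated by Rosetta 2 on Apple Silicon)")
    default:
        print("\(name): other/unknown architecture")
    }
}
```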

Are you sure you want to edit files on an iPad? That sounds like hell, a thunderstorm, and a nightmare all at the same time.

There are Photoshop, Lightroom, Illustrator, Pixelmator, and tons of other professional photo editing apps on iPad. So there must be people who actually use them; otherwise, why would anyone spend money developing an app nobody uses? I don’t use the iPad for editing either, but only because I don’t have PL on iPad, so I have to carry my MacBook Pro everywhere on a multi-day trip, which is a pain in the neck (and back, literally). Actually, editing is easier with the Apple Pencil and multitouch: you can zoom in, use the tip of the pencil to retouch here and there, zoom out, rotate, etc., far faster than with a mouse or trackpad. So just because you don’t work that way doesn’t mean other people shouldn’t.

3 Likes

I have been working with my new MacBook Air (8-GPU-core version), and Deep Prime processes sooo much faster than Prime with GPU enabled that I have made Deep Prime my default NR. FWIW, PL4 performance overall is 3-4 times faster on my M1 MacBook Air than on my late-2013 27" iMac. And since Geekbench puts the new iMacs at about twice the performance of my iMac, this little M1 machine is quite the beast, equivalent to or perhaps a bit better than them (Geekbench scores put it above everything but the i9 iMacs, the iMac Pro, and the 2019 Mac Pros).

“Deep Prime processes sooo much faster than Prime with GPU enabled”
I think you meant that DeepPRIME with GPU enabled runs much faster than PRIME (which defaults to CPU). That does not match my experience. On my M1 mini and MBA (7-core GPU), times are almost the same for PRIME (with CPU) and DeepPRIME with GPU. There may be something wrong with your testing.

“That does not match my experience. On my M1 mini and MBA (7-core GPU)”
My 8 GPU cores vs. your 7. I compared a D7500 RAW image, switching from Prime to Deep Prime with all settings on the image unchanged.

Just did a test: export to Application (Bridge) of a D7500 RAW (14-bit lossless) as a 16-bit uncompressed TIFF:
Prime (CPU only) -> 27 seconds
Deep Prime (GPU processing enabled) -> 16 seconds!

The only difference is the switch from Prime to Deep Prime.

Given that we’re using the same M1 chip, I can only attribute the difference to the files used. I base my benchmarks on the set of five cropped D850 images that folks are using as a reference to report results in the “Benchmarking PL4” forum discussion. Also, per that standard, I’m applying the DxO Standard preset and exporting to JPEG on the desktop.

Benchmarks are benchmarks; all that matters to me is “real world” performance, and I see dramatically reduced processing time.

Of course, but this exchange shows others that their mileage may vary.
Also, my “benchmark” IS real-world performance with actual RAW files. They’re just not MY RAW files. And I’ve run this test on both my M1 MBA (7-core GPU) and my M1 mini (8-core GPU).

I recently tested a particular photo with DeepPRIME on my Intel Mac mini. Using the CPU, it took 2:01 to output the photo. Switching to the GPU (lowly Intel 630 integrated graphics), it took 1:34; however, since that GPU is only partially supported, some photos came out with black bands on them.

I tried the same photo with the same export settings on my new M1 MacBook Pro. 19 seconds. :upside_down_face:

2 Likes
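For anyone skimming, here is a rough back-of-the-envelope restatement of the times quoted above; nothing here is a new measurement, it just converts the posted times (27 s vs. 16 s on the M1 Air, and 2:01 / 1:34 on the Intel mini vs. 19 s on the M1 MacBook Pro) into approximate speedup factors.

```swift
import Foundation

// Rough speedups computed from times already posted in this thread.
// The pairs are (slower, faster) in seconds; 2:01 = 121 s, 1:34 = 94 s.
let comparisons: [(label: String, slower: Double, faster: Double)] = [
    ("M1 Air: Prime (CPU) -> Deep Prime (GPU)", 27, 16),
    ("Intel mini (CPU) -> M1 MacBook Pro", 121, 19),
    ("Intel mini (iGPU) -> M1 MacBook Pro", 94, 19),
]
for c in comparisons {
    print("\(c.label): about \(String(format: "%.1f", c.slower / c.faster))x faster")
}
```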

If one were really keen to buy a new Apple ARM Mac, one possibility would be to run PLx in a VM until the proper binaries can be compiled and tested. ARM is a fantastic technology and gives Apple a roadmap that will last them 20 years. The VM vendors (VMware, Parallels, VirtualBox, etc.) will have ARM-ready binaries soon, if not already.

No need for a VM. The Intel version of PL runs fine through Apple’s Rosetta 2 translation.

VM vendors will never have ARM-ready binaries that can run Intel-based Windows unless they become “emulated-machine” vendors. A virtual machine only “passes through” the CPU to the virtualized operating system, and an Intel OS will never run directly on ARM. They would have to build an Intel emulation tier (like Rosetta) and run the Intel-based OS on top of that tier, which is an entirely different level of effort compared to “simple” virtualization.

Moreover, PL4 already runs really well on the M1… but that doesn’t mean we aren’t waiting for a native M1 version of PL to get the most out of its computing power.

I am following this closely myself, and I can say that VirtualBox has clearly said not to expect anything, ever; Parallels has released betas; and VMware says it is working on something.

Basically, the best bet seems to be to just use the macOS version of DPL directly; there is nothing better to expect from the VM vendors than what Rosetta is already doing.
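For developers following along: if you ever need to know at run time whether your own code is being translated, Apple documents a sysctl key for exactly that. A minimal Swift sketch, assuming nothing beyond that key (the function name is mine):

```swift
import Foundation

// Minimal sketch: ask the kernel whether the *current* process is running
// under Rosetta 2 translation, via Apple's documented "sysctl.proc_translated"
// key. On Intel Macs the key does not exist, so the call fails and we simply
// report "not translated".
func isTranslatedByRosetta() -> Bool {
    var translated: Int32 = 0
    var size = MemoryLayout<Int32>.size
    guard sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0) == 0 else {
        return false
    }
    return translated == 1
}

print(isTranslatedByRosetta() ? "Running under Rosetta 2" : "Running natively (or on an Intel Mac)")
```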

Hi DXO,

Any news on Apple M1 native support, at least some estimate or a beta testing program? Or will it be in PL5, like other requested features…?

Apple M1 and Apple ProRAW support, please!

1 Like

Hi DXO, @CaptainPO and @StevenL,

Please tell the crowd at least something relevant regarding Apple M1 native support. We can bear the truth here, whatever it is :wink: Better to know even the bad news than to know nothing…

4 Likes

Hi DXO,

Anybody there? We are paying customers, so we would like, and indeed expect, some kind of customer care. In this case that means a response, a comment… anything.

Thanks.

Hi there,
The next major version of PL (v5) should be M1 native, probably via a minor update once 5.0 hits the market.

Steven.

3 Likes

Thank you very much.

After all the exchanges and lengthy arguments, this is all I wanted to hear in the first place.

Actually, none of the explanations and arguing would have been necessary at all if staff had replied like this earlier. The thread would have been a lot shorter.

Looking forward to buying PL5 soon.

2 Likes