I went to university to do a degree in computer science in the early '90s (I was in my late 30s at the time). One subject we studied was IKBS (Intelligent Knowledge Based Systems) - a forerunner of AI.
We learnt that part of the process of creating knowledge systems was called knowledge elicitation - basically gathering knowledge from experts to feed into the system, to enable the system to make decisions based on that knowledge. At the time, one of the perceived problems was that experts would not be willing to contribute their expertise, only to be made redundant by the machine that would replace them.
What several of us had problems with, and often debated heatedly, was the difference between a knowledge base that could be consulted during decision making and the algorithm that performs the actual decision-making process.
All current computers rely on binary logic (either yes or no); something that can prove rather limiting in the real world in which we live. There are often decisions that have to be made to which the answer isn’t a simple “yes” or “no”. Sometimes the answer can be “maybe” or “it depends” and, although we can create “degrees” of yes-ness or no-ness in the same way that we can have multiple shades of a colour in a digital image, the result is always either yes or no at a certain level; it is impossible to represent just a little bit more or less than a precise value.
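The point about discrete "degrees" can be illustrated with a minimal Python sketch (purely illustrative, nothing to do with DxO's actual code): once a value is snapped to one of a fixed number of levels, such as the 256 shades in an 8-bit image channel, anything "just a little bit more" than a level simply collapses onto the nearest one.

```python
# Illustrative sketch: an 8-bit channel offers only 256 discrete levels,
# so any "degree" of yes-ness must snap to the nearest available level.

def quantise(value: float, bits: int = 8) -> int:
    """Map a value in [0.0, 1.0] to the nearest of 2**bits discrete levels."""
    levels = (1 << bits) - 1          # 255 steps for an 8-bit channel
    return round(value * levels)

# Two inputs that differ by less than half a step land on the same level:
a = quantise(0.500)
b = quantise(0.501)   # "just a little bit more" than 0.5
print(a, b)           # both snap to level 128 - the difference is lost
```

However fine you make the steps (10-bit, 16-bit, floating point), there is always a finite step size below which two different real-world values become indistinguishable to the machine.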
Also, a neural network needs “training” in how to make the right decisions. But what happens if the decision can be equally valid in either of two possible directions? How do you decide which of the decisions is right? The example of a self-driving car comes to mind once again; should the car avoid a collision with the vehicle in front, which has suddenly braked, at the expense of hitting a pedestrian, or should it “take the hit”, risking injuring its own passengers and those in the car in front, in order to save the life of the pedestrian?
Any “feature” in software that claims to be “intelligent” has to have a knowledge base on which to make its decisions and the algorithm that makes those decisions has to be written by someone - whose idea of yes or no may not coincide with ours.
One of the assignments on my IKBS module involved writing 3000 words on "can a machine possess intelligence?". I discussed the nature of God and the nature of man, quoted from The Hitchhiker's Guide to the Galaxy, and postulated the need for ternary or even quaternary logic. In the end, my conclusion had to be no, a machine cannot possess intelligence - at least not as intelligence is commonly understood at a human level.
Do we really need more and more AI in DxO? Maybe, or maybe not; it all depends on the purpose.
I would argue that DxO does a pretty amazing job with features like Smart Lighting and ClearView Plus. Are they truly AI, or more the result of an awful lot of real intelligence from programmers who know their domain very well?
I class myself as a photographer. Part of my 50+ years of experience meant learning how to expose as perfect a negative as possible in the camera, knowing the end result I wanted, and knowing how I was going to develop the negative to achieve the best possible density and contrast. From there it meant drawing up a printing plan for dodging and burning under the enlarger, developing the paper, washing it to remove any residual chemistry, and carefully drying the print in a dust-free environment - all to achieve the best possible print.
All that takes real intelligence - in other words, practice and making lots of mistakes on the way - also known as learning.
Nowadays it seems that photographers no longer want to learn their craft. Instead they want to be able to take thousands of pictures, let a computer decide which is the best, let a computer examine the picture and decide how to make it better and, one could argue, abdicate responsibility for the finished result to a piece of software.
Personally, I don’t feel that AI is anything other than a marketing buzzword, designed to give people the impression that they don’t have to do any work - it can all be done “by magic”.
No, PhotoLab should not offer things like sky replacement - that has nothing to do with photography and everything to do with a poor photographer who is not willing to admit that not every picture is possible and may need a return visit. Here’s an example of a large format image I made several years ago; it took five return trips of 150 miles before I was able to get the sky, the lighting and everything else that makes the picture. Graduated filters were used on the lens at the time, and no digital post-processing was involved, apart from removing the dust spots from the scanned transparency.
DxO should continue to do what it does extremely well - providing the best RAW processing software out there. Sure, if using "AI" in the blurb makes more sales, go ahead. Should DxO spend inordinate amounts of time and effort dreaming up ideas to justify that term? No. There will always be those who want a Swiss Army knife of a tool to do all the work for them. Just continue to be the best single-purpose tool a photographer can find.