When will PL have "super resolution"?

Hi Joanna. Wow :flushed: over an hour? What’s your download speed? I get 12-15 MB/s here and it downloads in 3-5 minutes. Just wait, Sharpen is 3.7 GB! :thinking: :grin:

I hope that, after that wait, it lives up to what I said. I did my comparison some years ago, probably also with PhotoRAW 2018. It was on a small JPG, about 1280 px on the long side, and I could most definitely see a marked difference in favor of Gigapixel.

As far as Sharpen goes, don’t expect miracles, but it can rescue small amounts of motion blur and missed focus, and it does a very good job on a soft file. It does not do as well as DXO Lens Sharpness if you have the RAW file. I use it mostly for treating my negative scans (JPG) after I’ve used Denoise and Gigapixel on them. It brings back the original crispness of the film IMO.

Be careful with the AI algorithms (it has a ton of them). It tends to pick the stronger ones if you let it choose automatically. In decreasing order of strength they are: Out of focus, Motion blur, and Too soft. Subcategories include Very Blurry and Very Noisy. Watch for over-sharpening!

I have had the best luck recently with their new Standard mode, which is more adjustable but also includes Motion blur and Lens blur as subcategories. Unless you really need something very strong, I suggest you use Standard mode, Lens blur, with sharpness dialed back to about 10. You can set NR and artifact reduction as you see fit.

Please let me know how you get on. :smiley:

A quite decent piece of software for super-resolution upscaling is Pixelmator Pro. Unfortunately, it is Mac only, and it cannot be fine-tuned with different parameters for different photographic scenes. I have not had the chance to compare its results to the solutions from ON1 or Topaz. It would be great if super resolution worked directly on RAW files; I would expect improved quality. Maybe Deep Prime could be extended in such a way that demosaicing interpolates onto a grid with double the base resolution. It would not require a complete rework of the algorithm, but it would definitely require new training of the machine learning model.
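Just to make that idea concrete, here is a very rough sketch (my own toy example, nothing to do with how DeepPRIME actually works internally) of what “demosaicing onto a grid with double the base resolution” could look like, using plain bilinear interpolation on an RGGB mosaic as a stand-in for the learned part:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def demosaic_2x(bayer):
    """Toy demosaic of an (H, W) RGGB mosaic straight onto a (2H, 2W, 3) grid.

    Plain bilinear interpolation stands in for the learned model; a network
    like DeepPRIME would replace this interpolation step entirely.
    """
    h, w = bayer.shape
    out = np.zeros((2 * h, 2 * w, 3), dtype=np.float32)
    # Output pixel (i, j) sits at mosaic coordinate (i / 2, j / 2)
    yy, xx = np.meshgrid(np.arange(2 * h) / 2.0,
                         np.arange(2 * w) / 2.0, indexing="ij")
    target = np.stack([yy, xx], axis=-1)

    def interp(rows, cols, samples):
        f = RegularGridInterpolator((rows, cols), samples,
                                    bounds_error=False, fill_value=None)
        return f(target)

    r, c = np.arange(h), np.arange(w)
    out[..., 0] = interp(r[0::2], c[0::2], bayer[0::2, 0::2])   # red sites
    out[..., 2] = interp(r[1::2], c[1::2], bayer[1::2, 1::2])   # blue sites
    # Green lies on a quincunx; crudely average its two offset sub-grids
    out[..., 1] = 0.5 * (interp(r[0::2], c[1::2], bayer[0::2, 1::2]) +
                         interp(r[1::2], c[0::2], bayer[1::2, 0::2]))
    return out
```

The point is only that the target grid can be finer than the sensor grid without restructuring the pipeline; as said above, the model itself would still need to be retrained for that output.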

From what I’ve read elsewhere, Adobe Super Resolution can also improve Fuji conversions (it acts as a “worm buster”).

For a while I played with the idea of asking, “why do you want to fake a resolution the image doesn’t have, or that was photographed too poorly to get the lost details back?” But then I thought, “leave the believers in artificial intelligence alone; if they need DeepFake to be happier with their images, then why not? Of course we need this function before the not-yet-working functions work like they should…”

Hi Mark

Well, I found that the modem for my 4G internet connection (living in the countryside, it’s all I have) needed rebooting and that I was also trying to download the “online” installer, not the full installer. In the end, it took about 30 mins.

I was a bit disturbed to see a prompt to log in. Is this because it is the trial version? Nevertheless, I didn’t, and proceeded to load a 6 Mpx image from my old Nikon D100. The default was a 4x magnification, which seemed to take ages to update or to download a “model”.

I won’t be trying to use 4x for 6Mpx again - the results weren’t particularly good and, working with a RAW file, even at only 400 ISO, the noise reduction was less than impressive.

I have just “tweaked” the RAW file so that PhotoLab can read it and opened it to apply adjustments and DeepPRIME, then exported the file to TIFF. Then I opened the TIFF and set the magnification to 2x. This time, I was more impressed. It has to be said, hoping to turn a 6Mpx image into a 96Mpx image wasn’t ever going to end up a happy experience but, at 24Mpx, it’s definitely doable.

I think I’m going to have to wait for the ON1 equivalent before making a decision to buy it or not but, so far, it’s looking promising.


Marginally in my experience and in those situations C1 still did/does a better job.

In those images with bad/noticeable ‘worming’ I personally think it’s…

C1>LR (enhanced)>PL

However, before resorting to either C1 or LR Enhanced, a play with sharpening in PL (still a little too heavy imo) can make things better. Tbh, the only picture I have where it’s ‘obvious’ is a poorly exposed shot with some moss on a branch, and even then we are talking pixel-peeping at 200% zoom.

In pure editing terms I go PL>LR>C1
For no other reason than I like PL and it gets the results I want more often than not. I use LR to back up and give on-the-go access/editing, or for masking (if I’m not happy with PL) and merges.
C1 is just a backup and I’ve never committed the time to learn it.

I think we all have different expectations and needs with regard to future features of PL, depending on our individual workflows and projects. Still, I think that one of the main selling points of PL is the excellent algorithms for RAW development, leading to images with a high degree of detail and low noise levels.
If you take it really seriously, you could further refine your question and ask “why do we want to de-mosaic our RAW files and generate images with a fake color resolution (in RGB space) the camera never captured?” That has nothing to do with weak photography. I myself believe that many people here in the forum enjoy the opportunities that machine-learning-based algorithms such as Deep Prime provide. When applied and tuned appropriately, they can generate very convincing results with very low levels of artifacts. Instead of considering these algorithms as DeepFake, I would rather see them as advanced interpolation methods.

Example: in my next photo project I would like to photograph the moon. As a hobby photographer I cannot afford a 50 MP FF camera nor a 600-800mm tele lens, which would be the ideal tools for that purpose. So with my limited gear (max 300mm-equivalent lens) the moon will only cover a small portion of the frame and I have to crop heavily. As a consequence, I am very happy to have software that can squeeze the best possible detail out of the RAW image. Expanding the possibilities of my limited gear is my personal motivation for super-resolution tools, while I am fully aware that I can never reach a quality level comparable to the professional FF equipment mentioned above.


Hi @Joanna ,

I forgot to mention that there are limitations to the size that GPAI can produce. In my initial (and only) comparison with PR 2018 I was only enlarging from 1280 px to 3000 px, so it was less than 3X. I have since been able to achieve excellent results anywhere up to about 4.5X for small JPGs using my Topaz workflow, which I will describe below. With my negative scans, my sizes are anywhere between 1.3X and 2.3X depending on how I choose to crop them. I very much like the results that I’m getting.

My workflow includes three of the Topaz AI programs: Denoise, Gigapixel and Sharpen. I bought these over time, not all at once. Yes they are expensive but if you can wait until Black Friday they do offer substantial discounts on bundles.

My workflow is to start with DNAI using the most aggressive NR model that leaves detail. You can compare four of the models at once and choose the best one. Once the model is chosen, allow the program to set the defaults (you can modify these to your taste) and let the AI do its thing.

Next is GPAI. I usually don’t use the multiplier but generally have a size in mind, chosen by pixels on the long side. Generally 6000 px on the long side (~24 MP) is enough for printing, but occasionally I do 9000 px when I can get away with it (depends on the file and cropping). I usually use the Standard or Low Resolution models, though with small JPGs the Very Compressed model can help. Be sure to turn on (anti) Color Bleed and Facial Refinement for portraits. Again, you can compare four of the models and choose the best, then let the AI do its thing again.

Then it’s on to SAI and, as I’ve previously written, I use the Standard model with sharpness dialed back to 10. This workflow seems to produce excellent results IMO.

I should mention that for this type of workflow Topaz recommends turning off redundant corrections, i.e. turn off sharpening in DNAI, turn off both sharpening and NR in GPAI, and turn off NR in SAI. While on the surface this advice sounds logical and like a “no-brainer”, I have tried it and prefer leaving these redundant corrections on. My guess is that they kind of cancel each other out and produce a better result IMO with them left on.

Lastly, as you have already observed, turning a 6 MP JPG into a 96 MP TIFF isn’t going to work very well, no matter what program you use. None of these Topaz products can perform miracles, but they do pretty well if you don’t try to push them too far. The only post-processing tool that even comes close to what I would call a miracle is DXO’s one and only Deep Prime.


@chris43 you’re right - where does “improving technical shortcomings” like sensor noise stop, and where does “manipulating an image which never had triple or quadruple the amount of MP” begin?

Personally, I draw a sharp line between “what has the sensor delivered?” and “what is added artificially?”, and that “super resolution” crap pretends to deliver details which were not there and had to be invented by AI - so why use a camera at all? Instead of spending all that money on lenses, bodies and accessories, why not go the virtual way and render the landscapes, add the clouds, bring in a second sun? Sorry, but “super resolution” is typical Adobe BS to justify their subscription model with features. It will not “squeeze out” more details than your picture already shows, and btw. you would not see the footsteps of astronauts with a FF camera, as there’s so much dust, mist, dirt and light pollution in our atmosphere that you gain more detail just by carefully scouting your location and waiting for a cold, clear sky.

I tried the Pixelmator Pro Super Resolution feature with two pictures; apart from the fact that it went from 300 to 900 dpi, I could not see any benefit at 100% between “before” and “after”. I will try it with some lower-MP files and maybe see a difference, but to me it’s completely overrated.

My situation is that I have to produce files that will print at a minimum of 1.4 m x 1 m and be viewable from about 1 metre. Some club members have been known to submit beautiful images that required cropping and thus can end up too small to print at their original resolution. Hence my need for such software.

Normally, they would print fine without needing resizing at A3+. But, for external exhibitions around the town, we don’t have a choice.
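For anyone curious about the arithmetic, a rough back-of-the-envelope sketch (the ~100 ppi at a 1 m viewing distance is just my assumption, not a club rule):

```python
# Pixel budget for a 1.4 m x 1.0 m print viewed from roughly 1 m,
# assuming ~100 ppi is enough at that distance (an assumption, not a spec).
def pixels_needed(width_m, height_m, ppi=100):
    inches_per_metre = 39.37
    return (round(width_m * inches_per_metre * ppi),
            round(height_m * inches_per_metre * ppi))

w, h = pixels_needed(1.4, 1.0)
print(w, h, round(w * h / 1e6, 1))   # ~5512 x 3937 px, about 21.7 MP
```

So a heavily cropped 6 MP submission genuinely needs a good upscaler to get there.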

I agree that there are algorithms generating really artificial scenes that do not have much in common with the original scene the photographer intended to capture, e.g. when you completely replace the sky. In my opinion, these “super resolution” algorithms are not that bad, but their name is misleading and strongly driven by marketing. It would be much better to call them “advanced upscaling”, as basically they increase the number of pixels without too much loss in perceived sharpness, in contrast to classical upscaling algorithms such as bicubic. They do not increase the detail in an image, i.e. if a letter is not resolved in the original image, it cannot be resolved in the upscaled image either. My experience with Pixelmator Pro is that, via the menu item “Image => Image Size”, the super resolution feature yields reasonable results with an upscaling factor of approx. 140%, corresponding to an increase in the number of pixels by a factor of about two. For an increase of 300% per dimension (the default for the tool), the image looks waxy and artificial. This shows that these methods have quite some limitations, but for some use cases (Joanna’s application) they may be quite useful.
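As a quick sanity check of those scale factors, and to produce a classical bicubic version to compare against (the filenames below are just placeholders; needs a reasonably recent Pillow):

```python
from PIL import Image

print(1.4 ** 2, 3.0 ** 2)   # ~1.96x and 9x the pixel count, respectively

# Classical bicubic upscale of the same image as a baseline for comparison
img = Image.open("original.jpg")                       # placeholder filename
w, h = img.size
img.resize((round(w * 1.4), round(h * 1.4)),
           Image.Resampling.BICUBIC).save("original_bicubic_140.jpg")
```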

I tried Pixelmator Pro again with a JPG exported for Mail - there I could see a real benefit: the JPG artefacts went away. Two decades ago I had a kind of JPG enlarger based on vector curves, splines. Can’t recall its name; it was Windows only and did a good job for stuff too small to print. These days I tend to overreact to AI-based stuff; I don’t think AI is the solution for everything. At least not as long as devs sometimes can’t understand what their algorithms are doing.

I’m curious: what required cropping? Were the images beautiful only with cropping or did the paper size not fit the image proportions?

Btw. I just remembered the name of the JPG enlarger from 20 years ago: S-Pline, and I think it is still in business as “PhotoZoom” (PhotoZoom Pro alternatives and similar software - ProgSoft.net). It was really based on splines :hushed: and I needed it to enlarge some brand logos we only had as tiny GIFs or JPGs, to use them in 3D visuals.

One of our members uses a 500mm lens for bird photography but she has 1000mm eyes :nerd_face:

She definitely needs the new (and probably not widely available until 2025) Nikon S 800/6.3 PF E :eyes:

Yes, I’ve seen that. Too rich for me. And for our bird photographer, she made the choice to buy Canon :roll_eyes::stuck_out_tongue_winking_eye:

nobody’s perfect… :woman_shrugging:


I’d also appreciate seeing such a feature in DxO PL in the near future. Don’t want to change over to Topaz.

I believe if DXO could leverage their DeepPRIME training database and/or their IP/algorithms to produce a super-resolution, Gigapixel AI type of capability, I’m pretty sure they would destroy the competition in this space.


if DXO could leverage their DeepPRIME training database and/or their IP/algorithms to produce a super-resolution, gigapixel AI type of capability I’m pretty sure they would destroy the competition

This also seems to me to be an area of core competency for the DxO programming team. Easy to add in; it doesn’t even require a separate module - just enable it on export. It could be sold as a separate module, but then let it integrate well with PhotoLab (like ViewPoint).
