Correcting a lens's optical distortion and/or perspective while developing raw vs. on a 16-bit TIFF file

I would like to know what advantages, in terms of final image quality, one gains by correcting a lens's optical distortion (just the optical distortion, not sharpness or anything else) during raw processing, compared with doing it after exporting to a 16-bit TIFF file. What would the shortcomings be if the correction is applied to the TIFF file once the raw file has been processed?

There are some camera + lens combinations that might not be covered by DxO. I want to know how much I lose in resulting image quality if the optical correction is done after processing the raw file, for example with a program such as PTLens (which can work with 16-bit TIFF files) that might have the profile for that lens.

I would also like to know (just for theoretical reasons, because I can do this within PL) whether the resulting quality of an image will be better if perspective is corrected while processing the raw file than if it is corrected after exporting to a 16-bit TIFF file using DxO ViewPoint (for example as a standalone), which I do have.

If DPL supports your camera/lens combination, the best bet would be to do it all in DPL.
In order to check support, go to this page: https://www.dxo.com/dxo-photolab/supported-cameras/

If DPL does not support your camera/lens combination, you will have to make do with what you have.

Theoretical evaluation
When an image is calculated from raw sensor data, output pixels are generated from what the file provides. A 24 Mpixel Bayer sensor has 12 Mpixels of green and 6 Mpixels each of blue and red, hence interpolation is needed whenever the output image must have 24 Mpixels. No interpolation is necessary if every output pixel is averaged from 4 raw pixels, which gives you a 6 Mpixel image. One way to do this from the command line is dcraw with the -h (half-size) option.
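To illustrate that no-interpolation route, here is a minimal sketch that averages each 2×2 RGGB cell of a Bayer mosaic into one RGB pixel, roughly what dcraw's -h mode does. The function name and the assumption of an RGGB layout with even dimensions are mine, purely for illustration:

```python
import numpy as np

def half_size_rgb(bayer):
    """Average each 2x2 RGGB cell into one RGB pixel, no interpolation.

    bayer: 2D mosaic array (RGGB layout and even dimensions assumed),
    e.g. as decoded from a raw file. Returns an (H/2, W/2, 3) image,
    similar in spirit to dcraw's -h (half-size) output.
    """
    bayer = bayer.astype(np.float64)
    r = bayer[0::2, 0::2]                               # red photosites
    g = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2.0   # two greens, averaged
    b = bayer[1::2, 1::2]                               # blue photosites
    return np.dstack([r, g, b])
```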

Correcting perspective and lens distortion introduces two more interpolation steps. Pixels have to move to a different place, but pixel locations are fixed, which means that new pixels must be “invented” (calculated from the surrounding source pixels) and pixels around the edges will be thrown away, moving the resulting image further away from what was really there.
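To make that concrete, here is a hedged sketch of how such a correction typically resamples an image: for each output pixel, it computes where that pixel came from in the source and interpolates between the neighbouring source pixels. The one-term radial model and the coefficient k are illustrative only, not any particular vendor's model:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def correct_radial_distortion(img, k=0.08):
    """Resample a grayscale image through a toy radial distortion model.

    Each output pixel is looked up at a generally non-integer source
    location, so its value must be interpolated from its neighbours;
    this is the extra interpolation step described above.
    """
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r2 = ((yy - cy) / cy) ** 2 + ((xx - cx) / cx) ** 2  # normalised radius^2
    scale = 1.0 + k * r2                                # toy one-term model
    src_y = cy + (yy - cy) * scale
    src_x = cx + (xx - cx) * scale
    # bilinear lookup; source positions falling outside the frame are lost,
    # which is the edge cropping mentioned above
    return map_coordinates(img, [src_y, src_x], order=1, mode='constant')
```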

When all interpolations are done in one app, the app designers have the opportunity to combine them into a single algorithm that might be better than a sequence of separate interpolations. Whether such an algorithm is actually better remains to be proven, though.
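One way such a combined algorithm can work is to compose the coordinate mappings first and resample the original image only once, instead of resampling after each correction. A sketch under the same illustrative assumptions as above, where the arrays dist_y/dist_x and persp_y/persp_x give, for each output pixel, its source position under the respective correction:

```python
from scipy.ndimage import map_coordinates

def sequential(img, dist_y, dist_x, persp_y, persp_x):
    """Two passes: the image data is interpolated twice."""
    step1 = map_coordinates(img, [dist_y, dist_x], order=1)
    return map_coordinates(step1, [persp_y, persp_x], order=1)

def combined(img, dist_y, dist_x, persp_y, persp_x):
    """Compose the two mappings, then interpolate the image once.

    The coordinate fields are smooth, so interpolating them loses
    essentially nothing, while the image data is resampled only once.
    """
    comp_y = map_coordinates(dist_y, [persp_y, persp_x], order=1)
    comp_x = map_coordinates(dist_x, [persp_y, persp_x], order=1)
    return map_coordinates(img, [comp_y, comp_x], order=1)
```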

Bottom Line
Check your combination of apps and see what you get. Be warned that a comparative test necessitates some nerd-level thinking and pixel peeping. Will it be worthwhile? Be your own judge.

Thank you, Platyus. Very didactic, and what you said in the paragraph I quoted makes a lot of sense.