PhotoLab 3: wish-list

You still don't understand what a flashgun is for, sir - flash is for filling in shadows, not a replacement for bad light, as most people think.

Most cameras have a built-in flash, or you can just use a small one that weighs 100 g. It's a standard accessory that improves your picture quality. If you carry a 1.5 kg camera body and a 2.5 kg telephoto lens… flash is the least of your problems :smiley:

I don't have time to prepare scenes as most do - I have to act quickly when the lighting conditions last for just two seconds :smiley:

What I also have no time for is pimping up bad pictures in post-processing. What is not there cannot be changed.

I would love to see some proper AI (ML etc) when it comes to e.g. WB.

The software should be able to determine when the photo was taken and what the lighting conditions were. Based on that and the camera/lens profile, it should apply a spot-on WB.

I am surprised that this hasn't been introduced yet by any tool on the market.

This would allow us to get rid of grey cards and VERY much reduce the time needed for basic corrections to 100 or 1,000+ RAW files.
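
For what it's worth, the oldest fully automatic approach doesn't even need AI: the "gray-world" assumption, which scales the channels so the average of the whole image comes out neutral. A minimal sketch in Python (my own illustration of the idea, not anything DxO ships):

```python
import numpy as np

def gray_world_wb(img):
    """Gray-world auto white balance (illustrative sketch).

    img: float array of shape (H, W, 3), linear RGB in [0, 1].
    Assumes the scene's average colour should be neutral grey and
    scales each channel so that all three channel means become equal.
    """
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel means
    gains = means.mean() / means              # pull each channel towards grey
    return np.clip(img * gains, 0.0, 1.0)
```

It fails on exactly the scenes discussed below (sunsets, scenes dominated by one colour), which is why something smarter would be needed.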

Like an Auto Natural WB, to neutralise a light colour cast?
It's out there, just not yet in DxO.
(See my earlier post on WB correction.)

White balance depends on a lot more than the time of day. It depends on:

  • the direction you are facing in relation to the sun
  • whether the sky was cloudy or clear
  • whether the atmosphere gave colour to the sky at sunrise/set
  • whether you were standing close to a sunlit coloured wall
  • the altitude you were at, which affects the level of UV
  • etc, etc, etc…

In addition, I would ask why anyone needs to process thousands of RAW files in one go. Try:

  • taking fewer pictures (they can't all be keepers)
  • transferring them from your camera as soon as you have completed a shooting session
  • sorting them for content and deleting the non-keepers before processing them
1 Like

The colour picker is useless if there isn't a white/greyish spot to use for WB correction.
I used SilkyPix's AAWB and ANWB a lot for correction purposes, and if I have an image which is off I still use them to get the numbers for DxO.
I find it a weak spot in the WB tool that there isn't an automated neutraliser to get a balanced WB when it's off.

Spray and pray isn't my thing, but I seldom use a manual WB setup; there's no need for it with RAW.

A wish list for PL3 is a bit late since, in all probability, they are mostly fine-tuning the beta at this point and will certainly not start adding or updating any major features just before its release. With regard to updating and adding significant enhancements in the periodic point releases over the next year, that is always a possibility. However, although that was the plan for PL2, it never materialized. Perhaps the wish list should more properly be for PL4.

Mark

Why was this never realized? I remember when I bought PL2 I thought the upgrades were light, but there were promises about .x updates that would address the UX and keep upgrading things along the way. Now I will have to buy PL3 to get those updates, since the .x updates for PL2 were mostly camera/lens updates and bug fixes. This makes me more cautious about upgrading the software immediately.

I hope, because of this, that PL3 will be an amazing, big update that addresses many things out of the gate. If again there are only one or two new/upgraded features and lots of "future" promises, I won't bite this time.

6 Likes

How do you create a tool which detects the WB in an image that doesn’t contain any information on the “true” colour temperature of the light at the time of shooting?

Which raises the question, what is a balanced WB?

When I shoot 5" x 4" film, I use a colour meter to tell me which correction filter(s) to use to achieve a "neutral" colour balance; i.e. one that matches the colour temperature of the film emulsion (usually 5600 K). This necessitates a range of around 25 filters, of varying intensities, to match the lighting temperature to that of the film and to give me a "true" colour rendition. This is very much a manual process but it does guarantee me the correct WB, as shot.
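
For anyone wondering how the right filter is chosen: correction filters are rated in mireds (1,000,000 divided by the Kelvin temperature), because a fixed mired shift produces the same perceived colour change at any starting temperature. A quick sketch of the arithmetic, with assumed example values:

```python
# Mired arithmetic behind colour-correction filters (illustrative numbers).
def mired(kelvin):
    return 1_000_000 / kelvin

light = 3200   # e.g. tungsten lighting, in K
film = 5600    # daylight-balanced film, in K

# Negative shift = bluish filter, positive shift = amber filter.
shift = mired(film) - mired(light)   # about -134 mired
print(f"required filter shift: {shift:.0f} mired")
# A Wratten 80A (about -131 mired) is the classic filter for this conversion.
```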

With digital cameras, shooting with manual WB records the temperature that you set in the EXIF of the RAW file, but that temperature is only "advisory" and can be changed in DxO later.

I always shoot at 5600 K, regardless of the lighting conditions. This gives me the equivalent of a correctly exposed sheet of daylight transparency film. Then I can alter the temperature in DxO if I want to either warm up or cool down the appearance.

This is very important when I shoot around sunset or during the “blue hour” (after sunset). If I use the pipette to read the temperature of a cloud in an image, taken during the blue hour, that I know should be neutral grey/white, I get a rendition as if I had taken the shot in full daylight, losing all sense of how the light actually felt when I was there.

On one shot, taken during the blue hour, using the pipette gave me a temperature of around 14,000 K and gave the impression that the shot had been made during the day - not what I wanted at all.

So, my question to you is: how do you expect DxO to decide what is the “correct” WB for the prevailing light when the shot was made?

Normally you would hold a neutral grey card right in front of where your subject is/will be, at the correct angle with regard to how the light falls on the subject. You would then measure this with a light meter.

So, to do this automatically, the software could use models to calculate the angle of the light and the position of the subject to get an estimate (but there are a lot of ifs involved). I don't believe this is easy, because of all the things you mentioned earlier that can influence it. Also, many photos have mixed light sources, which more or less requires a manual adjustment to land somewhere in the middle or emphasise one source over the other.

I do believe that deep learning/AI will crack 80% of this problem over time, given a good dataset of professional photography with correct white balance (which is always subjective to an extent).
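
To make that concrete, here is a very rough sketch of what such a learned estimator could look like. Everything in it (the tiny network, the histogram features, the two-gain output) is my own assumption for illustration; it is not how any shipping product works:

```python
import torch
import torch.nn as nn

def colour_histogram(img, bins=64):
    """Per-channel histogram of a linear RGB image with values in [0, 1]."""
    hists = [torch.histc(img[..., c].reshape(-1), bins=bins, min=0.0, max=1.0)
             for c in range(3)]
    h = torch.cat(hists)
    return h / h.sum()  # normalise so image size doesn't matter

class WBNet(nn.Module):
    """Maps a colour histogram to two WB gains (red and blue, relative to green)."""
    def __init__(self, bins=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 * bins, 128),
            nn.ReLU(),
            nn.Linear(128, 2),  # predicted (r_gain, b_gain)
        )

    def forward(self, hist):
        return self.net(hist)

# Training would regress against reference gains taken from grey-card shots,
# i.e. the "good dataset" of correctly white-balanced photography mentioned above.
```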

Please don’t take the following as in any way insulting :blush:

I find it fascinating that “young” photographers now look to AI and other “automations” to “correct” their images, rather than previsualising and calculating how to obtain the best result in the camera.

I have been involved in photography for over 50 years and teach at our local photo club. The first thing we try to cover is how to relinquish automatic mode :stuck_out_tongue_winking_eye:

We even have some people who believe that the camera knows best, only ever shoot jpegs, and feel that what the camera gives you is the best you can get :crazy_face:

I also get criticism for referring to how I used to do things with film, as if it were totally irrelevant to digital, missing the point that the laws of physics for light have never changed and that a sensor is only a reusable sheet of colour transparency film.

They don't seem to realise that, just like film, sensors have their limitations in terms of dynamic range, noise/grain and colour rendering. These are all things that we "oldies" learnt about at great expense (because film costs money and has to be developed before you can see the result).

Last year, I made an image which, I believe, no amount of automation would be able to correct:

To start with, I used an independent spot meter to measure the brightest point in the sky and the darkest point on the land, where I wanted to see detail. I know that my camera (Nikon D810) can theoretically cope with 14 stops of dynamic range but, after testing it in real life, I prefer to limit it to 13 stops.

Since the readings indicated a dynamic range of around 15 stops, I realised that I would have to use graduated filters to reduce the brightest areas; so I placed a 2-stop graduated filter over the top right of the image, from the horizon at the right edge to about ⅓ of the way in from the left along the top. I then placed another 1-stop filter over the sky in the top left of the image to further equalise the contrast.

I have also determined that, in order to maximise the dynamic range that the sensor can cope with, I need to take a spot reading from the brightest area and open up by two stops (ETTR).

Only then can I be confident that the image in the resulting file can be manipulated without resulting in either blown highlights or blocked shadows. We used to do the same kind of thing with B&W film using something known as the Zone System, over-exposing and under-developing to compress a wide dynamic range into the more limited range of silver bromide printing paper.
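
To make the arithmetic explicit, here is the calculation with illustrative readings that match the ranges above (not the exact values from that day):

```python
# Illustrative sketch of the spot-metering arithmetic described above.
ev_highlight = 17                           # spot reading of the brightest sky
ev_shadow = 2                               # spot reading of the darkest detail
scene_range = ev_highlight - ev_shadow      # 15 stops of subject brightness range
sensor_range = 13                           # usable DR after real-world testing

grad_strength = scene_range - sensor_range  # 2 stops of graduated ND needed

# A spot meter renders whatever it reads as mid-grey (Zone V). Opening up
# two stops from the highlight reading places the brightest area at Zone VII,
# just inside the sensor's highlight limit (ETTR), which keeps the shadows
# as far above the noise floor as possible.
exposure_offset = +2                        # stops opened up from the reading
```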

Of course, the jpeg preview image on the back of the camera looks horrendous:

… but, just as with film, I can be confident that, after working on it in DxO, I am assured of a quality image.

Now, the problem remains, how do we transfer all that knowledge and experience into a “one-click” automatic tool? :nerd_face:

5 Likes

This is exactly why I mention AI and ML (Machine Learning). :slight_smile:
In theory, it should certainly be possible with the technology we have now.

We have seen deep-learning tech in Nvidia's image reconstruction (allowing you to fully reconstruct a face with only 10-20% of the data), turning sketches into photo-realistic images, recovering an image with 90% noise to a noiseless one (DxO's PRIME noise reduction is nothing compared to Nvidia's deep-learning examples: https://news.developer.nvidia.com/ai-can-now-fix-your-grainy-photos-by-only-looking-at-grainy-photos/), etc.

Same with some tech from Adobe.

Working with AI, IoT, ML etc., there are almost endless possibilities. Editing images will be one of the easiest among them. :wink:

What is a balanced WB?

  • no unwanted colour cast from reflections
  • no unwanted colour shift from the luminance of non-white light
  • more a DR thing, but still WB: true whites and true blacks and greys in one image

When the camera's AWB didn't deliver, I used to use SilkyPix's Auto Absolute WB or Auto Natural WB to get a starting point. In the example linked above, we tried to get the look "we" humans see and/or remember. Sunset and the after-sunset blue hour are filtered in our brain, which shows us what we like, and we still see the ship/boat as "white" even when we really see a blue glare on the hull.

The AAWB and ANWB calculate a prediction of the colours as if the scene were lit by "white" sunlight.
Probably they look for a global "mask" of a colour temperature, which they take to be the light colour, and neutralise that to "grey".

And somewhere between those two colour-temperature numbers (neutralised and original camera WB) would be your desired WB look: a touch of blue light and an orange sky.
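
My guess at what that looks like in code is a simple interpolation of the channel gains between "as shot" and "fully neutralised". This is purely my own sketch; I have no idea how SilkyPix actually implements it:

```python
import numpy as np

def blended_wb(img, neutral_gains, blend=0.5):
    """Blend between as-shot WB and a fully neutralised WB (sketch).

    img:           linear RGB with the camera's WB applied, shape (H, W, 3).
    neutral_gains: per-channel gains that would fully remove the colour cast
                   (e.g. from a gray-world estimate, as sketched earlier).
    blend:         0.0 keeps the as-shot look, 1.0 neutralises completely.
    """
    gains = (1.0 - blend) + blend * np.asarray(neutral_gains)
    return np.clip(img * gains, 0.0, 1.0)
```

With blend around 0.5 you land "somewhere in the middle": the cast is tamed, but the touch of blue light and orange sky survives.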

That's what I am missing in DxO's automated WB tools; as I said, the colour picker isn't accurate enough when there isn't a proper spot to click on.

Your example is great: you didn't just point and click, but used old-school light metering and pre-set the camera to get what you wanted.

Shooting RAW means: frame, get the exposure right while choosing the desired aperture and shutter time, and done. The rest can be done in post. :wink:

I disagree with you, Peter. Junk in = junk out. The better the RAW file that you put into DxO or any other software, the better the output. If you have the technical skills to get an optimum RAW file, it is that much easier for the software you use to get the best out of it. Every piece of software can only go so far and has its limitations.

Sigi

2 Likes

@Sigi, I agree that junk in is junk out; I overstated a bit.
But getting the sensor exposure just right, with no blown highlights or blocked shadows, is far more important than WB or colour.
DoF can be narrowed in post but not made deeper, so the right aperture is needed.
The same goes for motion blur, camera shake or subject movement: you can add blur in post, but you can't make a blurred image sharp.

My main argument is: if you have the time to find the right manual WB setting, you probably also have the time to set the right aperture, shutter speed and ISO for a proper exposure, so the outcome is good.
So I let AWB do its thing, knowing I can correct it if I want; all colour temperatures are available in post.
As for contrast: a RAW file doesn't have a low-contrast setting, only the OOC JPEG does. And high-key/low-key settings can alter shutter times, so they affect the exposure time of the sensor.
So it can be interesting to use things like auto DR (iDynamic on Panasonic) when you shoot RAW plus JPEG, to get an automatic form of exposure correction in high-DR scenes and avoid blown highlights.

AUTO MODES are useful as long as you understand the limitations and keep control.

When can we expect PhotoLab 3, approximately?

I'm sorry, but this simply isn't possible. You might be able to blur parts of the image, but that will not give the same effect as truly limited DOF.

You can increase DOF in post-production by taking multiple exposures at different focus points, followed by the use of stacking software to combine them into one image.
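
For the curious, the core idea behind stacking software is simple, even though real tools add frame alignment and smooth blending. A naive per-pixel sketch, assuming the frames are already aligned (which real stacking software handles for you):

```python
import numpy as np
import cv2  # opencv-python

def focus_stack(frames):
    """Naive focus stack: for each pixel, keep the value from the frame
    where the local sharpness (Laplacian response) is highest.

    frames: list of aligned uint8 BGR images, all of identical shape.
    """
    sharpness = []
    for f in frames:
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        lap = cv2.Laplacian(gray, cv2.CV_64F)           # edge response
        sharpness.append(cv2.GaussianBlur(np.abs(lap), (5, 5), 0))

    best = np.argmax(np.stack(sharpness), axis=0)       # (H, W) frame index
    stacked = np.stack(frames)                          # (N, H, W, 3)
    ys, xs = np.indices(best.shape)
    return stacked[best, ys, xs]                        # sharpest pixel wins
```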

Why would you ever shoot RAW + JPEG? The RAW file already contains a JPEG preview image, and a full JPEG can easily be constructed from the RAW by exporting from DxO - what do you do that you feel needs both?

It should come sometime in the fall.

I didn't say you can create in post the same silky-smooth bokeh as a pro lens; I wrote that you can create blur in post but you can't sharpen the image outside the DoF range :wink: That is technically impossible. Out of focus is out of focus.

Yes, that's called focus stacking: a bracketing function (RAW files) or 4K video stacking, which can be combined in camera or in post. But never from a single image; that's a one-way street. Adding blur to parts with a blur slider (a local adjustment, the "blur" tool of FilmPack, or the "miniature effect" tool) is creative post-processing, so to speak.

If you shoot RAW + JPEG you have access to all the features in the camera, even the JPEG-only ones. And some features, like iDynamic in auto mode, also affect the RAW file by lowering the exposure by 1/3, 2/3 or 1 stop when they detect highlights and a large DR (as you know, lifting shadows is better than recovering highlights from blown areas). You can also use "zebras" to detect overexposure before you release the shutter, but this auto iDynamic feature is pretty handy as a helper. Panorama modes and the creative menus are also not available in RAW. And another reason for me: I view my images over Wi-Fi on my TV before I develop the RAW files, and this needs OOC JPEGs (it gives me an idea of the keeper rate); the RAW thumbnail is very low resolution.
So I throw away all the JPEGs I don't need if I have the RAW file to develop, and keep the solo ones.

Many. For example:

  • you have a quick result to share
  • some cameras can only display the photo at 100% if you take a JPEG. Important for checking focus.
  • in-camera filters and effects are only applied to JPEGs. It can be fun to use them occasionally. Still, it is good practice to have an unprocessed negative, i.e. the RAW.

That said, I shoot exclusively RAW + JPEG on my M43 systems.

2 Likes

I shoot RAW to my main card and JPEG to my backup card.

1 Like