Improved "AutoExposure" compensation estimation

That gives a soft light, but it doesn’t enlarge the distance the light has to travel to reach the subjects.
Bouncing can give you more depth if that’s what you want.
There’s a relation between the power of the received light and distance, as you wrote. But there’s also a relation involving the distance between the subjects. Assume two people standing 1 meter behind each other and a flash 1 meter from the first: by the inverse-square law, the first receives a relative power of 1/1² and the second 1/2², so subject 2 gets a quarter of what subject 1 gets (two stops less), whatever absolute value you use. But now enlarge the distance between the flash and subject 1 to 10 meters. The ratio becomes 1/10² versus 1/11², hardly any difference. And that is what you get when you bounce.
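A quick sketch of that falloff arithmetic in Python; the distances are the ones from the example above, and the inverse-square law is the only physics assumed:

```python
import math

def relative_power(distance_m: float) -> float:
    """Relative illumination from a point light source (inverse-square law)."""
    return 1.0 / distance_m ** 2

def stops_difference(d_near: float, d_far: float) -> float:
    """Exposure difference between two subjects, in stops (EV)."""
    return math.log2(relative_power(d_near) / relative_power(d_far))

# Flash 1 m from the first subject, subjects 1 m apart:
print(stops_difference(1, 2))    # → 2.0 stops, a very visible falloff

# Move the light source back to 10 m (or lengthen its path by bouncing):
print(stops_difference(10, 11))  # ≈ 0.28 stop, barely noticeable
```

Pushing the light source back, or lengthening its path by bouncing, flattens the falloff between subjects, which is exactly the effect described above.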
Did you look at the link to Neil van Niekerk?

George

Unfortunately this particular venue wasn’t ideal. The ceiling was only about 9 feet high, so there really wasn’t much additional distance to gain by bouncing. The key was just to get a nice diffuse light.

With a high ceiling, you’re right: you can get more consistent results with bouncing, as the subject distance (bounced off the ceiling) varies less.

Use the walls, even the one behind you. Make use of corners. Play with it. But don’t expect a program to automatically correct what you don’t like.

George

This isn’t a hard concept. I’m looking for a mode that adjusts the exposure of a captured image the same way AutoISO would adjust based on the scene metering.

Matrix metering is complex, I think.
It would probably involve specific AI design and training.

Matrix metering has been around for a long, long time. My camera can meter a scene 60 times per second… no doubt there is some intelligence, but I wouldn’t call it artificial intelligence. No neural network involved.

Although there are neural-network-type tools out there that automatically edit photos based on your style. And they work. I’m not asking for that, but I would LOVE that. A friend uses one for his wedding stuff. He literally fed it 5000 edited photos from 15 weddings he did and “taught” the service what he likes. Now he feeds wedding shoots in and gets automatically culled and edited files back. He shoots 3-5 weddings a week, and this is a HUGE time saver for him.

But that’s another feature request.

Call me old fashioned, but I personally would rather make my own adjustments than trust an AI program to do it for me.


So don’t vote for it.

Y’all are missing the point. When editing a gallery of 400 images I want the tool to get me to a closer starting point so the adjustments I have to make are fewer.

Perhaps you guys are all artists that process a handful of images at a time.

Some of us do volumes of images at a time. Time is money and I want to work faster.


I didn’t…


Can you describe the capabilities you want from the new tool? Your expectations? The word “smart” is often used for things that I find not smart at all. AI algorithms need to be fed, but so does RI…

Example expectation(s)

  • Raise exposure in a way that makes a night shot look like a night shot and not like HDR
  • Locally increase fine contrast in low contrast regions
  • Decrease fine contrast on faces

@MikeR have you thought of using high ISO and no flash?


Yeah. This venue was really dark and I would have been near ISO 20,000 with a 1/200 s shutter speed and f/2.8 glass.

It seems it took camera brands some time to make it efficient (Nikon was the first to implement it, in the ’80s; I think it was the FA).
But I’m not sure you would like the FA’s 5-zone matrix metering system :wink:

They can even demosaic, denoise, apply corrections, scale images down and compress to JPEG at up to 30 frames/s with 50-megapixel images now (Z9). No software demosaicer can do that.
They have very dedicated processors with decades of development.

But who knows, maybe I’m wrong and this is easy to achieve now…

And indeed, it could help professionals who process big volumes of images to be more efficient.
But I suspect this would be a feature needing continuous improvement as technology evolves.

I’m pretty sure you wouldn’t like it without face detection.

Face detection would be nice.
DxO already has the algorithms to detect faces.

Between knowing there is a face in a photo and recognizing a particular face among others, there may be a difference.

One of the challenges of today’s AF systems (jumping, not jumping, which subject to stick to).
This really involves AI (like modern matrix metering).

The point is that auto exposure correction needs some extra settings in the tool.
Black point, white point: I drop the eyedropper on a spot and auto exposure reacts accordingly.
And how difficult would it be to have the possibility to:
1. Multi-select images.
2. Set auto exposure correction for all of them.
3. After a quick scan, decide whether most would benefit from, say, +2/3 EV on top of the initial auto correction (like EVC on your camera: AE sets 1/125 s, and by dialing in +1 EV you get 1/250 s, while auto exposure still meters as it did before you pushed +1 EV).

Why?
Depending on your metering point, most cameras aim at about 80% of saturation, so highlights still have some wiggle room in JPEG. So a raw file is underexposed most of the time.
When you have time, you EVC this on the spot. And sometimes you forget that you set EVC to, say, -2/3 EV for a shot of highlights, and the next 100 shots are slightly underexposed. (Yes, you should see that and correct it, but say you didn’t.)

Then I could use DxO PL’s Auto Exposure Correction with an option to set an offset of +2/3 or +1 EV.
Ideally the present Auto Exposure Correction would correct my mistake, but most of the time I need to correct the auto correction. By getting a way to steer or influence the algorithm that outputs the auto-correction value, I could correct multiple images in one go more easily.
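As a sketch of the requested behavior: `corrected_ev` and the per-image values below are hypothetical stand-ins (nothing here reflects how PhotoLab actually computes its Auto Exposure Correction); the point is just that a single user-chosen offset rides on top of the automatic value, like dialing EVC on the camera body.

```python
def corrected_ev(auto_ev: float, offset_ev: float = 0.0) -> float:
    """Final exposure compensation: the tool's automatic value plus a
    user-chosen global offset (like dialing EVC on the camera body)."""
    return auto_ev + offset_ev

# Hypothetical per-image auto corrections for a multi-selected batch:
auto_values = {"img_001": +0.3, "img_002": -0.1, "img_003": +0.7}

# Apply a +2/3 EV offset to the whole selection in one go:
batch = {name: corrected_ev(ev, 2 / 3) for name, ev in auto_values.items()}
print(batch)
```

The automatic metering itself is untouched; the offset simply shifts every result, so one quick scan and one dial correct the whole selection.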

I think that’s what the OP @MikeR would like to have.

As I said, I often have enough time and far fewer images to process, so I go image by image when I need to correct the auto exposure correction.

Thanks. To be frank there is so much room for improvement here.

Typically I crop and straighten first. A center-weighted average doesn’t even look at the center once you’ve cropped.

So much could be done without AI (and various levels of processing.)

Faces can be detected. Sharpness can be measured. The “in focus” region of the image can be found. This “focus center” could then be used as the center for the weighted average.
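A minimal sketch of that “focus center” idea; nothing here is from PhotoLab, it is a toy pure-Python illustration that uses gradient energy as the sharpness measure and takes the sharpness-weighted centroid of tile centers as the new metering center:

```python
def gradient_energy(img, x0, y0, w, h):
    """Sum of squared horizontal/vertical pixel differences inside a tile."""
    e = 0.0
    for y in range(y0, y0 + h - 1):
        for x in range(x0, x0 + w - 1):
            e += (img[y][x + 1] - img[y][x]) ** 2
            e += (img[y + 1][x] - img[y][x]) ** 2
    return e

def focus_center(img, tile=4):
    """Sharpness-weighted centroid of tile centers (the 'focus center')."""
    rows, cols = len(img), len(img[0])
    cx = cy = total = 0.0
    for y0 in range(0, rows - tile + 1, tile):
        for x0 in range(0, cols - tile + 1, tile):
            e = gradient_energy(img, x0, y0, tile, tile)
            cx += e * (x0 + tile / 2)
            cy += e * (y0 + tile / 2)
            total += e
    return (cx / total, cy / total) if total else (cols / 2, rows / 2)

# Flat gray frame with one sharp, high-contrast patch in the top-left:
img = [[0.5] * 16 for _ in range(16)]
for y in range(4):
    for x in range(4):
        img[y][x] = float((x + y) % 2)

print(focus_center(img))  # the centroid lands on the sharp patch
```

A real tool would use a proper sharpness metric and sub-tile precision, but the principle, weighting the metering center toward the in-focus region, is this simple.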

The histogram could be analyzed. Local histograms could be generated and those could be analyzed.
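One toy example of how histogram analysis could drive an exposure suggestion; the percentile and target values are arbitrary assumptions for illustration, not anything DxO does:

```python
import math

def suggest_ev_shift(pixels, percentile=0.99, target=0.8):
    """Suggest a global EV shift that moves a chosen highlight percentile
    of a linear-light histogram to a target level (toy illustration)."""
    values = sorted(pixels)
    idx = min(len(values) - 1, int(percentile * len(values)))
    current = values[idx]
    if current <= 0:
        return 0.0  # degenerate histogram, suggest no change
    return math.log2(target / current)

# Underexposed frame: the chosen highlight percentile sits at 0.2,
# so lifting it to the 0.8 target means a +2 EV suggestion.
pixels = [0.05] * 99 + [0.2]
print(suggest_ev_shift(pixels))
```

Local histograms could feed the same logic per region, e.g. weighting the suggestion toward a detected face or the focus center.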

And at a perhaps most advanced level, a neural network could be trained to adjust exposure the way I have adjusted exposure on previous images.

The fact is, the autoexposure tools in PhotoLab are the most primitive of all the major editing tools. And no, Smart Lighting isn’t an autoexposure tool.

I’m not knocking Photolab. I love it. But there is so much opportunity for improvement here. (Whether one thinks they would use it or not.)

I second this.
As an event shooter, like the OP, in places with a lot of colored and changing lights, no camera brand can, as of now, make an accurate metering calculation in these situations (at least not the recent models from Sony, Nikon or Canon).
The AutoExposure features in LR or C1 work great for 90% of pictures in 2 clicks.
PhotoLab has no similar feature, so the job has to be done manually. Multiply that by 300+ pictures and it means 1 additional hour of work per contract. And as these contracts are mostly “cheap” ones, that is usually a significant amount of time against average customer expectations.