PhotoLab 5, sharpness and focus

I think that is misleading. The dSLR will set the focus and exposure before the image is captured, so there is no “blinding”.

I accept that older cameras may have all sorts of issues, including the ones you mention, but I think we’re talking here about cameras with the quality of the D850, or even my D750.

None of these cameras are mind-readers. It’s up to the photographer to decide which part of the image should be in focus. On the other hand, I love the ability of the Z9 to follow-focus on an eye. If the Z9 wasn’t so big and heavy, I’d be more interested in it. Electronic shutters also seem like a useful addition. The Z9, and the new Leica M11 have that ability. Faster, and quieter. I think.

Please explain - I don’t know enough about this to understand this.

@mikemyers
This is what Google says about Nikon’s DSLR 3D “tracking”:

“Some Nikon DSLRs such as the Nikon D750 are equipped with a 3D-tracking mode, which uses a predictive system that utilises special algorithms to calculate the position of the subject. In this mode, the camera automatically moves the focus point to track the moving subject, ensuring that it is always in focus.”

Since the mirror in a DSLR is flapping up and down sometimes 15 times per second, and the aperture opens fully for metering and closes down again the same number of times, there is plenty of time during each image cycle when a DSLR “is blind”. I guess that’s why they came up with a technique to predict, by analysing camera and/or subject movement.

A top-notch Sony mirrorless like the A1 today takes images at double the speed of a top-notch DSLR, is always open at the aperture currently selected by the user or the system, and continuously measures both AF and exposure 120 times per second. That’s why these cameras have no “blackouts” or “flickering” in their viewfinders. That’s why these systems don’t have to use “predictions”: they know, from the constant measurements.

Not even Canon’s mirrorless R cameras are “mind readers”, but they are governed by some AI that, if I have understood it right, reads the photographer’s eye movements, and even Sony’s Real-time Tracking will automatically lock on whatever eye closest to the camera appears in the scene. My A7IV doesn’t read my mind - it really has a mind of its own, governed by its AI. All modern high-end mirrorless bodies use a closed-loop hybrid AF that locks only when focus is confirmed on the main sensor, unlike DSLRs, which use a separate focus sensor with phase-detection AF and no closed-loop solution. That’s why a mirrorless camera always has the potential to focus more accurately than a DSLR.

The photographer can always override the system, though, by targeting another person, bird or animal present in the view, or by temporarily disabling Eye Focus to track something else instead. Eye Focus is the default, but sometimes you just don’t want it active, for different reasons. Some disable Eye Focus when they want to replace the old-school use of static preset focus points with smarter, more flexible dynamic ones.

All these new, pretty fantastic AF features are now making a lot of photographers change their photo behaviour. Together with another Sony feature called Auto ISO Minimum Shutter Speed, photographers can “always” rely on the camera securing short enough shutter speeds, regardless of the active focal length, to avoid blur from camera shake.

It’s like having a very smart and adaptive modern version of Kodak Click. Very boring if you are fully into manual photography, but it lets you focus entirely on taking your pictures instead of on your camera knobs and settings, and it will very rarely fail to capture technically very good pictures. Even this automatic feature can easily be overridden if you know how to configure the camera properly.
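To make the idea concrete, here is a rough Python sketch of what such a feature does, assuming the classic 1/focal-length hand-holding rule of thumb; Sony’s actual logic is not documented here and certainly differs in detail:

```python
# Hypothetical sketch of an "Auto ISO Minimum Shutter Speed" behaviour:
# if the metered shutter speed is too slow to hand-hold at the current
# focal length, hold the shutter at the hand-holding limit and raise ISO
# to keep the overall exposure the same.

def auto_iso_min_shutter(focal_mm: float, metered_shutter_s: float, base_iso: int = 100):
    min_shutter_s = 1.0 / focal_mm              # 1/focal-length rule of thumb
    if metered_shutter_s <= min_shutter_s:
        return metered_shutter_s, base_iso      # already fast enough to hand-hold
    iso = base_iso * metered_shutter_s / min_shutter_s   # make up the light with ISO
    return min_shutter_s, iso

shutter, iso = auto_iso_min_shutter(focal_mm=200, metered_shutter_s=1 / 25)
print(f"1/{round(1 / shutter)} s at ISO {round(iso)}")   # 1/200 s at ISO 800
```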

Several videos on this - most I didn’t watch to the end, but this one implied that this might be a useful tool:
https://www.flickr.com/groups/2682619@N24/discuss/72157665437128413/

I don’t think I’ve ever used 3D Tracking. Thanks for the information. It would be nice if it worked for “BIF” which I now know means Birds In Flight. :slight_smile:

Not at all, @Stenis is right and you should pay more attention to the word “burst”. During a fast series of shots, with the mirror flapping between the single frames, the DSLR will occasionally lose focus. Imagine yourself standing behind a window with blinds opening and closing in quick succession. If you watch a moving subject with so many interruptions you will sometimes lose it.


Thanks for your remark @JoJu. Sorry for my English, I’m Swedish. Writing a little in the DxO forums will hopefully make my English a little better over time. :slight_smile:

A DSLR is blind for all of that part of the image cycle between the moment one image is taken and the moment the aperture opens fully again, metering takes place and the next image is taken - times 14 in a burst. That’s a fact.

A DSLR blinks, but a Sony or Fuji mirrorless never does. I don’t know exactly how the Canon R and Nikon Z mirrorless bodies work, but I would be surprised if they do it very differently from Sony at an image sequence rate of 30 a second.

The way Sony does this had severe drawbacks earlier, since focusing was affected when stopping down. Earlier, I think the fast A9 only went down to f/8, and Sony’s low-light champion A7III was a little better, with f/11.

One pretty fantastic thing is that the low-light AF performance in Sony’s latest A7IV is now very much improved, even compared to the A7III. So these earlier AF-locking problems when stopping down are not much of an issue anymore. AF now works down to -4 EV, and maybe the Canon R is even better. Now we can stop down even to f/22 and still get, for example, Eye Focus to lock in very low light, at least when taking stills. It’s a very big improvement that few seem to have noticed yet.

This vastly improved AF performance has given my old A-mount Sigma 150-500mm 5-6.3 a new life. In the same conditions, my A7III could not manage to set Eye Focus even at 400mm, despite a bigger aperture and the lens fully open. The A7IV managed the same task at 500mm with f/22 selected, to my big surprise. The newer cameras like the A7S III, A1 and A7IV seem to have much more sensitive AF than earlier generations. Maybe I will still get the new Sony 200-600mm zoom despite these improvements, because that old lens is far too slow to chase birds in flight with.

Of course! Soon the birds will be back again after the winter. It’s just a matter of getting out and “shooting” them.

I’d like to ask this question. When we capture an otherwise very nice image, but it’s not as sharp as we wish it was (or, as sharp as I wish it was), before sending the image off to Topaz SharpenAI, as I currently do, what are the better tools in PhotoLab 5 that may help improve the image?

TopazAI works with different tools, based on blur from camera movement, subject movement, lack of focus, and another tool I don’t remember. Does PL5 have tools to deal with sharpness, in the same way the DeepPRIME tool can be used to minimize noise/grain? I suspect that it’s impossible to “fix” a blurry image, but I’m convinced now that software can improve the image.

Any advice?

Technically, I’m sure you are right, but this hasn’t been a problem in all the times I’ve used it. Of course, for me, it’s a short burst.

I’m not going along with my impulse to buy a Nikon Z9, as I don’t want yet another camera so big and heavy, but from everything I’ve read, I think I would otherwise enjoy it. Of course I’ve never seen one, so I’m just speculating here. For “only” $5,000 or so, in 2022 dollars, that’s about what I paid for my D2 and D3 cameras. Of course it will be replaced in the reasonably near future, and its value will plummet. From that point of view, I’m better off buying Leica gear, which sometimes sells used for as much as it sold for new, or even more.

Also, and very important here - 95% of my photography is “static”, with no need to stop movement. But for the size and weight, I would buy a D850 and be done with new Nikons. Having said that, I’ll maybe walk along a nearby causeway, and stop to photograph pelicans fishing for food… but the last time I did this was over a year ago.

I wish more people here were posting photos, so I can see how your gear and PL is working out for you!

Quote

I’d like to ask this question. When we capture an otherwise very nice image, but it’s not as sharp as we wish it was (or, as sharp as I wish it was), before sending the image off to Topaz SharpenAI, as I currently do, what are the better tools in PhotoLab 5 that may help improve the image?

TopazAI works with different tools, based on blur from camera movement, subject movement, lack of focus, and another tool I don’t remember. Does PL5 have tools to deal with sharpness, in the same way the DeepPRIME tool can be used to minimize noise/grain? I suspect that it’s impossible to “fix” a blurry image, but I’m convinced now that software can improve the image.

Any advice?

Quote

There is an old tool called “Unsharp Mask” that you should avoid unless you use lenses not supported by DxO optics modules. Otherwise there is the newer, module-based lens sharpness correction.

Topaz Sharpen AI might have an edge over PhotoLab - maybe someone else knows more.

When it comes to denoising tools I think DeepPRIME might be the best. There is also export sharpening, and “Bicubic sharper” is very effective, but not so suitable for landscapes with tree leaves, which might get over-sharpened.

It is mostly impossible to do anything magical with really blurry images, even in Topaz Sharpen AI or PhotoLab.

I prefer to use PhotoLab since all its tools work with RAW, but DeepPRIME only works on RAW, while Topaz also works on JPEGs, which might be a plus for some.

With Topaz as a plug-in I think you have to use an intermediate format like DNG or TIFF. I’m not all that fond of that.

@mikemyers
I can give you a link to images taken with the “Sony Click” configuration, using Auto ISO Min. Shutter Speed on a Sony A7III in the town of Essaouira and its pretty busy fishing harbour in Morocco. All processed with DeepPRIME. Even in images like these it can be useful, since the sky can have some noise despite quite good light.

With this configuration I was able to just concentrate on what happened around me instead of thinking about how to configure my camera. The only thing I concentrate on in A (aperture priority) mode is really what aperture to use. That, together with zooming to the right composition, is generally all it takes. This way the timing gets so much better and I rarely miss an image I want to get.

I happened to be a rather slow photographer in the analogue days and lost lots of images because of that. For that reason I hated having to take pictures of the quicksilver toddlers in my family over the years, but nowadays I have no problem with that anymore, since I got dynamic Eye Focus and Auto ISO Min. Shutter Speed.

https://sten-ake-sandh.fotosidan.se/viewpf.htm?pfID=386184

I used the Sony 24-105mm/4 G and Sony 70-200mm/4 for these images.

Just to give you an idea of how bad I used to be, you can look at this little girl that I tried to photograph in an Afghan Pashtun Kuchi nomad camp outside a small village in the Indian part of Kashmir in 1978. At first her mother and the other women in the camp were very friendly and didn’t mind me taking their pictures, but as I stood there for minutes trying to get the focus right, the girl went wild and started crying for her life. I also think she got scared because I have very light blond hair - still do - and that scared her stiff, because she hadn’t seen strange people like me before. Not even the Indian cows liked me, and a few times they started to chase me.

If 95% of your pictures are of static subjects and you know your gear, as you seem to after all these years, you are probably a far better photographer than I was back then without modern digital gear to support me. You will probably be fine with what you already have, and you might enjoy slow, manual photography much more than using some new gear, or a system-supported “Sony Click” process, instead of slow photography with your Leica gear.

“Sony Click” is not all that fun to use for most people, but it’s very effective when it comes to taking pictures of high technical quality with optimal timing. I don’t mind, since I’m just interested in getting as good raw material as possible. For me the great fun is rarely taking the pictures but post-processing them in the truly fantastic PhotoLab 5.

Finally, since two months back, I’m exactly where I believe I always wanted to be with my gear and photo processes. The only thing is that it took 17 years, a lot of swearing over terribly poor DSLR cameras and some bad lenses too, and it cost several tens of thousands of euros that I wouldn’t have spent if the gear I have now had been available back in 2005. This journey hasn’t even been fun, so it’s a wonder I’m still around doing and experiencing this late technical breakthrough.

The few young people entering the modern system-camera platforms are technically blessed, but most of the young couldn’t care less, since they are far too happy as it is with their smartphone cameras. I guess this technical breakthrough came far too late for that generation!

@Joanna posted something earlier, showing how DeepPRIME can indirectly help create sharper images. One of the main sources of a blurry image is movement - either the subject moved, or the camera moved. One way to minimize this blurriness is to minimize movement, as in use a much higher shutter speed. To achieve this, one can use a much higher ISO speed, which in turn would result in digital noise, but now that we have DeepPRIME this can be used to keep the noise under control.

This won’t help with an out of focus image, but if a person isn’t holding the camera steady enough, or isn’t using a high enough shutter speed, this will allow us to reduce what I’ll just call “movement-blur”.

I hate to say this, but right now that is so obvious to me! How could I not have known to do this in the past - but until I read it from @Joanna, I was oblivious to doing so.

Bumping up the ISO five “stops”, say from ISO 200 to around ISO 6400, will allow us to use those 5 stops to raise the shutter speed, say from 1/25 up to around 1/800th, giving us roughly the same exposure, but with that much less blur from “movement”.
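For anyone who wants to check the arithmetic, here is a tiny sketch of that trade, using the illustrative numbers above (five stops is a factor of 2^5 = 32):

```python
# Trading ISO for shutter speed while keeping the overall exposure constant.
base_iso, base_shutter_s = 200, 1 / 25      # starting point from the example above
stops = 5                                   # how far we push the ISO

new_iso = base_iso * 2 ** stops             # 200 -> 6400, i.e. "around ISO 6400"
new_shutter_s = base_shutter_s / 2 ** stops # 1/25 s -> 1/800 s

print(f"ISO {new_iso}, shutter 1/{round(1 / new_shutter_s)} s")
# The +5 stops of ISO exactly offset the -5 stops of shutter time, so the
# exposure is unchanged; DeepPRIME then deals with the extra noise.
```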

Now, it sounds so obvious to me. Yesterday it was anything but “obvious”.

I will read your link, but I suspect that for me, I always try to consider my camera settings. Everything in photography is a compromise, that is usually changing as I go around taking photos.

Regarding what you wrote about scaring the people, one of the best ways to minimize that is for me to use my Leica or Fuji X100, instead of a big dSLR. Another is to take a photo, and show it to the people you’re photographing, who, especially in India, will quickly warm up to you.

I just viewed your India photos. Reminds me of photos I’ve been taking there since the 1980’s. One suggestion - sometime try doing it with a wider angle lens, meaning you will get much closer to the people. Instead of taking photos “of” the group, you will be within the group. I’ve found it to be a huge help to have a local person walk around “with me”, which re-assures people that I’m OK to have around. And also to walk up very close to the people, just so they can get used to me. Gee, I am SO homesick for India right now - haven’t been back there since the virus started. I’m hoping that I can return this coming summer.

Another “trick” I found was to use my Fuji X100 cameras, which briefly show me the captured image in my viewfinder, so I know if I need to make changes in what I’m doing. I would never want to use my big dSLR, as that seems to bother the people I’m photographing. The tiny Fuji, with no noise, doesn’t annoy anyone…

Regarding the Sten-Ake Sandh photos, my favorites were the ones where you got down on the ground, to their level, rather than shooting down at them. I used to be much better at this when I was younger - nowadays, standing back up is an effort!! :frowning:

Just wondering - my iPhone 12 has a 12Mpx camera with a 4.2mm f/1.6 lens, which is the equivalent in full frame of 26mm f/9. Why not use that as a street photography camera?

Focus, and thus sharpness, is also affected by shutter shock:
the mechanical movement of the parts that have to move in order to get an image onto the sensor to read and store.
For instance, the G7 plus the 14-140mm f/3.5-5.6 has a shutter-speed range (around 1/120 s, if I recall correctly) in which vibrations cause blur. Its successor, the G80, got a completely new shutter mechanism and the shutter shock is reduced by about 80%.
The movement of a mirror causes much more vibration in a body, due to its weight and travel distance, so I suspect shutter shock in DSLRs is worse than in mirrorless bodies.
So it’s not only IBIS that gives you more stabilisation and thus slower usable handheld shutter speeds - less mechanical movement in the body helps too.

“Well, I have a tripod, who cares?” you might say.
It doesn’t matter: the slap of the mirror still causes trembling - vibrations - just like your finger pushing the shutter button, or a tram or a heavy truck driving by.

That’s why an electronic shutter is provided: to overcome that vibration, or at least to wait it out.

Another part is on the fast side of the shutter: a mirrorless camera can have faster mechanical shutter speeds because the only moving part is a curtain. And it can automatically switch to electronic shutter without losing the preview and AF metering, while a mirror camera slaps the mirror up (losing AF detection) and only after that has the same mechanical/electronic options and shutter speeds.

Shutter shock is partly a matter of weight (more weight, less vibration) and of the distance of the stabilising lens element from the shutter (a hanging, floating IS element reacts to the vibration, so more travel distance means less effect).
IBIS is also affected by shutter shock - it’s a floating sensor.

Thus every system has a shutter-speed window in which some form of shutter shock shows up, and mostly it hides inside your own handheld shake, so you don’t see it as shutter shock.
And every system is combined with a lens that has its own window of shutter-shock influence.
I think the way to find it is to mount the camera plus lens in a vice on a steel table, so there is no tripod vibration, then shoot at progressively slower shutter speeds and watch the images closely for “blur”. That’s the shutter shock.

What is sharp - part 2

I would start by saying I believe “sharp” is a perception, not a mathematically defined concept.

In the “good old days” of film, the sharpness of an image depended on the type of film used, grain size, etc. When you focused under the enlarger, you would look for the grain being sharp rather than a feature of the image - if the grain was sharp and the image wasn’t, it meant that the negative was soft and you would need to get into techniques like unsharp masking, which involved producing an intentionally blurred second copy and precisely positioning both in a sandwich in the negative carrier of the enlarger.

The main problem with unsharp masking, which is available for digital images in PL5, is that it produces an effect something like this example I copied from Wikipedia…

… where the lower area has had USM applied and has a contrasting and complementary “halo” on either side of a transition from dark to light.
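For the curious, here is a minimal sketch of the classic digital unsharp mask (not DxO’s implementation): blur a copy, take the difference, and add a scaled version of that difference back. The overshoot on either side of an edge is exactly the light/dark halo visible in that example:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image: np.ndarray, radius: float = 2.0, amount: float = 1.5) -> np.ndarray:
    """image: float array in [0, 1]; radius ~ blur sigma; amount ~ strength."""
    blurred = gaussian_filter(image, sigma=radius)   # the intentionally soft copy
    detail = image - blurred                         # high-frequency "mask"
    return np.clip(image + amount * detail, 0.0, 1.0)

# A soft edge from dark (0.2) to light (0.8): after sharpening, values dip
# below 0.2 and overshoot 0.8 near the transition - that is the halo.
edge = np.clip(np.linspace(-1.0, 2.0, 30), 0.2, 0.8)
print(np.round(unsharp_mask(edge), 2))
```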

When using PL5, we now have lens modules which are capable of applying appropriate sharpness according to the module.

However, this is still based on creating the illusion of sharpness by enhancing contrast on a transition from one tone to another.

Here is a screenshot at 600% with no lens sharpening applied…

Capture d’écran 2022-02-06 à 12.14.40

Then I applied the default lens sharpening from the module…

Capture d’écran 2022-02-06 à 12.14.45

You should be able to see a slight “halo” on each side of the transition. To make it more obvious, I raised the sharpening to +3…

Capture d’écran 2022-02-06 à 12.14.53

Now it becomes more apparent and starts to look like what the traditional unsharp masking tool would give. The main difference being that the lens sharpening tool is primarily engineered based on known softness in a lens and may take account of known differences in softness over the entire image area.

The unsharp mask tool is very much a blunt stick and is applied globally and, with its four sliders, can be a lot more difficult to apply subtly.

So, whether we use traditional film or digital, sharpening is all about increasing the local contrast around a transition in tones. If the transitions in our original image aren’t sharp enough, all we can do is fool the eye with this local contrast trick.

Sharpening in camera vs PL5

Now we know how difficult it is to truly “sharpen” an image in post-production, it should become apparent that we need to get it as right as possible in the camera.

We know that DoF is all about acceptable sharpness on either side of a point of focus. Here is a slide from one of my courses…

Depending on the distance to the focus point, the defocus blur gets larger the further away you get from that focus point - and the shorter the distance, the more the front and back parts of the DoF become equal.

Acceptable defocus blur is usually calculated as 30µm for a full frame sensor and is designed to give an acceptably sharp 10" x 8" print, printed at 240ppi, held at arm’s length, not when inspected at 200% at 18" to 2ft from a screen with a resolution of 110ppi.
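As a rough illustration of those numbers, here is a small sketch using the standard thin-lens DoF formulas and the 30µm (0.03mm) circle of confusion mentioned above; the 50mm, f/10, 3m example is purely illustrative:

```python
# Near and far limits of acceptable sharpness for a given blur (CoC) budget.
def dof_limits(focal_mm: float, f_number: float, focus_mm: float, coc_mm: float = 0.03):
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = focus_mm * (hyperfocal - focal_mm) / (hyperfocal + focus_mm - 2 * focal_mm)
    if focus_mm >= hyperfocal:
        return near, float("inf")                    # everything beyond stays "sharp"
    far = focus_mm * (hyperfocal - focal_mm) / (hyperfocal - focus_mm)
    return near, far

near, far = dof_limits(focal_mm=50, f_number=10, focus_mm=3000)
print(f"near {near / 1000:.2f} m, far {far / 1000:.2f} m")   # about 2.2 m to 4.6 m
# Note the asymmetry: more of the sharp zone falls behind the focus point than in front.
```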

Then, on top of defocus blur, we need to consider the further softening effect of diffraction, due to too small an aperture.

Even though we may be within acceptable defocus blur limits, too small an aperture can add diffraction blur, as demonstrated by this slide…

What we can do, which will improve perceived sharpness, is to calculate an optimum aperture that suits the DoF, without diffraction, that we want to achieve.

Blur Spot Diameter

Blur Spot Diameter is defined as a combination of defocus blur and diffraction blur.

The methodology I picked up from George Douvos’ writings, to determine the blur spot diameter, is to concentrate on the distance between the pixels on the sensor and double it - this gives us enough area to record a dot that might appear at the junction of two pixels.

I use the Digital Camera Database site to find the details for a particular sensor and, for Mike’s D750, the pixel pitch comes out at 5.95µm (call it 6µm) - this gives us a minimum possible blur spot diameter of 12µm.

So, if you want the sharpest possible image without diffraction, you need to feed this diameter into TrueDoF-Pro, or whatever DoF calculator you use (as the CoC) and this will give you the smallest possible aperture that will not incur diffraction.

Using TrueDoF-Pro, 12µm gives me an aperture of f/6.3. The problem is that f/6.3 gives a very limited DoF. So we need to choose an acceptable blur spot diameter that is a compromise between diffraction-free sharpness and DoF.

George Douvos recommends 20µm, as this gives an aperture of f/10 and produces a marginally sharper image than the default 30µm while still allowing a reasonable DoF.
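Here is a small sketch of how apertures in that region can be arrived at. It is not TrueDoF-Pro’s actual model (which I don’t have access to); it simply assumes the diffraction blur is the Airy disk diameter 2.44·λ·N and that the defocus and diffraction contributions are combined in quadrature with an equal share of the budget - assumptions which happen to reproduce the f/6.3 and f/10 figures quoted above:

```python
WAVELENGTH_UM = 0.55   # green light, in micrometres (assumed)

def diffraction_limited_aperture(blur_budget_um: float) -> float:
    """Largest f-number whose diffraction share stays inside the blur budget."""
    diffraction_share_um = blur_budget_um / 2 ** 0.5       # equal split in quadrature
    return diffraction_share_um / (2.44 * WAVELENGTH_UM)   # invert the Airy diameter formula

pixel_pitch_um = 5.95                  # Nikon D750, per the Digital Camera Database
minimum_budget = 2 * pixel_pitch_um    # ~12 µm minimum blur spot, as derived above

for budget in (minimum_budget, 20.0, 30.0):
    print(f"blur spot {budget:4.1f} µm -> about f/{diffraction_limited_aperture(budget):.1f}")
# roughly f/6.3 for 12 µm, f/10 for 20 µm and f/16 for the default 30 µm
```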

I leave my aperture on f/10 unless I want to expressly limit DoF, or when doing macro shots, where DoF can be as small as a couple of millimetres and a compromise becomes necessary, unless I use focus stacking.

The end goal of all this is to give PL a fighting chance of using its lens modules to provide you with the sharpest of images without having to resort to too much USM or other tricks.


After all that, please try this stuff out and let me know what you think :grin:


Joanna’s explanation of how you can handle low light together with DeepPRIME is very much on the spot. With my new camera the limit for me is ISO 12,800, I think, and that is pretty extreme compared to the image of the crying little nomad girl, which was taken on Agfa CT slide film rated 18 DIN (Deutsche Industrie Normen), equal to ISO 50.

You are so right about taking photographs in India. I had a secret weapon when taking these images in a village that had preserved a medieval-like lifestyle until 1976. The village is called Kirtipur and is situated on a high hill in the Kathmandu Valley in Nepal. My secret weapon at that time was my girlfriend. Travelling with a woman seemed to make the women of this village a lot less nervous about my presence.

After the millennium Kirtipur was declared a World Heritage site but, sad to say, it was partly destroyed in the latest major earthquakes in Nepal.

https://sten-ake-sandh.fotosidan.se/viewpf.htm?pfID=380608

I don’t think these images could have been taken without problems in most Muslim countries or areas. Several times I have had to leave when people started throwing stones at me.

Nowadays I usually use the super-tiny Sony Zeiss 35mm/2.8 on my A7R (I really prefer that smaller body, but its AF is very poor), A7III or A7IV for street photography. Back in the seventies and eighties I just had two lenses on my Pentax ME: a super-tiny Pentax SMC 40mm/2.8 and a 100mm/2.8 of the same type.

The best thing with 35-40mm is that you have to get physically close. In fact, on a trip when I had only my SZ 35mm with me, I realised I didn’t really need anything else.

First, because I prefer to see through a viewfinder and compose my shot, and second, because the iPhone messes with the photos it takes, where a “real” camera is more likely to capture what is there. Apple designed the iPhone to make pretty photos. I have videos that illustrate this. Finally, habit. I prefer taking photos with what I consider a “real” camera, where I’m in control, not a series of routines built into an iPhone to make a pretty photo. Also, because when I get my images off the iPhone and onto my computer, I don’t feel they are technically as good as what I capture with my camera (usually the Fuji).

Peter, if we are talking about things like “candid” photography in India, all that may be true, but I have a different reason for going “mirrorless” - a smaller and less imposing camera, less noise, easier to blend in, and so on. This means leaving my dSLR at home and carrying my Leica, or much more likely my Fuji X100F.

When it comes to “normal” photography, being able to watch an optical view of what I’m photographing is FAR more important to me. I can do this with the flip of a lever on the Fuji, and the Leica does it naturally. Since the Fuji can do both optical and digital, I far prefer seeing the real scene with my own eyes (optical) rather than essentially viewing a tv monitor (digital).

I’ve used digital viewfinders for my racing photography, and while I tried to accept them, I never enjoyed them. If I was shooting what I’ll call a “landscape” (which for me includes machinery), I far, far prefer to see what I’m photographing with my own eyes, directly, and not on a computerized display. I haven’t yet seen an electronic viewfinder that looked and felt “natural”.

Sharpness is only in the focus point. I think you must involve the lens characteristics for that too. The calculation with 2 or 2.5 times the pixel pitch is used to determine when unsharpness becomes visible. DoF/diffraction calculators use this calculation. Diffraction Calculator | PhotoPills

I don’t have TrueDoF-Pro, but any DoF/diffraction calculator will give me, for the D750 at f/6.3, an Airy disk of 8.46µm, and diffraction becomes visible at f/11.
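Those figures line up if you assume green light (λ ≈ 550 nm), the common Airy-disk-diameter approximation 2.44·λ·N, and a “visible” threshold of 2.5 times the pixel pitch:

```python
wavelength_um = 0.55                      # assumed green light

def airy_diameter_um(f_number: float) -> float:
    return 2.44 * wavelength_um * f_number

print(round(airy_diameter_um(6.3), 2))    # ~8.45 µm at f/6.3, matching the ~8.46 µm quoted
pitch_um = 5.95                           # D750 pixel pitch
threshold_um = 2.5 * pitch_um             # "diffraction visible" criterion
print(round(threshold_um / (2.44 * wavelength_um), 1))   # ~11.1, i.e. around f/11
```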

PhotoPills distinguishes between 100% viewing on screen and printing.

George

Mike, I think you missed the point. It’s not system-dependent information; it’s about an effect on sharpness. Shutter shock exists in every mechanical shutter mechanism; the effect on the image (motion blur) depends on the configuration.
EVF or OVF as such has no influence on shutter shock. The only thing is that a mirror adds weight, and therefore moving weight, compared with a mirrorless system. Most such bodies are heavier, and I think the makers added as much vibration reduction as possible, but it is still a factor. A factor you bump into? Probably not, if you haven’t noticed it.

When I talk about sharpness as a general term, it is always acceptable sharpness. It is a total waste of time to think of absolute sharpness, otherwise we end up with zero DoF and the inability to take images of anything other than a sheet of flat paper.

Exactly and that is just what I have been talking about.

George Douvos has written extensively on diffraction, with all the necessary formulae and calculations.

PhotoPills might assert that 30µm is good for printing but the truth is that only applies if you are printing to 10" x 8" for viewing at 10" according to their blurb. Anything bigger than that and you need to start reducing the blur spot diameter if possible.

All this is based on real world practical experience of printing exhibition prints for many years, not theoretical cogitations that only ever end up being assessed on screen.