@Pieloe, well, if you don’t understand the request, that’s fine with me. You stated that users are not willing to share raws they struggle with. And as this topic is not about ugly browns or the art of photography, but about highlight recovery, I think it’s legit to post a picture, even if it is anything but a masterpiece.
Thanks @Joanna - when I go hiking with my family, I take some pictures, edit them and I am fine. I am totally aware of lens flares and, again, this is not the topic.
My attitude may be different, too. I don’t go out for photography or to take “the picture”. I go hiking with my family and take my camera with me, not the other way round.
I just want to recover blown-out areas in a similar way, and in a reasonable amount of time, as an x-years-old piece of software (LR 6) does.
But as this does not seem to be feasible, as some other users stated, too, that’s ok. Let’s see what future versions of PL will bring.
You’ve got the point
Where are the highlight details in this image, apart from the clouds and the sun stars?
It would take too much time for me to post image extracts.
But you can try it:
1- “DxO Standard” preset and Shadows +25 for image balance = 10"
2- ClearView +50 for more detail and highlight recovery = 10"
3- (FilmPack) Fine Contrast, Highlights +75, because you want a lot of detail = 20"
in this case, reduce ClearView (to +30) for a more natural effect = 20"
In about a minute, I have an image that stands as a testimony to this great day.
The point is - photo editing software is not magic. It can’t recover something that is not there. The images you are showing as examples include the sun, which means that you will have flare and lost highlight detail which cannot be recovered. The best any software can do is try to improve the contrast at high levels (ClearView+) and highlights contrast if you have FilmPack.
But, whichever software you use, it simply cannot recover details from a pixel that has been blown out, full stop, period. What you seem to be finding is that LR6 does some “magic” that pleases you with one slider that takes a bit more knowledge and a different algorithm in PL.
Stop taking photos that include the sun!!! You risk damaging your eyes. Even if you use live view instead of the optical viewfinder, you risk burning out the sensor in your camera because the sensor is exposed for as long as live view is switched on.
Take time to learn the tools that PL provides. They are not the same as other apps but, as @Pieloe points out, and I can confirm, it takes very little time to process an image once you are familiar with how PL does it. I personally could not go back to Adobe stuff, not just because of the price but simply because PL does things so much better.
Yes, you’ll certainly damage your eyes if you look at the sun through an optical viewfinder - - but, that won’t be a problem for an electronic viewfinder, nor for the sensor, as - being a digital device - it will simply ignore signals that exceed the capacity of the “signal-capture” electronics (recording them as 255).
You might find some useful info over here - - with practical examples.
Regards, John M
Errr… the sun beams and the sky are exactly what I was talking about, but never mind…
Thanks @John-M, I am following that thread closely, too
That’s beyond question - the amount of detail PL is able to squeeze out of raw files is unbelievable. Even for a shoot-against-the-sun redneck like me
Did you try my settings?
It is very disappointing to give my time and get no feedback.
@Pieloe, I will try tonight, thanks. I don’t have FilmPack, though.
In that case, you could try using the tone curve.
Here’s a shot with a straight tone curve :
… and here’s the palette :
Here, I’ve modified the tone curve and upped the exposure :
… with the palette for that :
The difference is subtle, but there is actually more contrast in the highlights, due to the part of the curve that is more vertical.
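The slope of the curve is the contrast: a more vertical segment maps nearby input values further apart. A minimal sketch of the idea (the control points here are made up for illustration, not the actual curve in my screenshots):

```python
import numpy as np

# Hypothetical piecewise-linear tone curve. Between inputs 0.7 and 0.9
# the segment has slope (0.95 - 0.6) / (0.9 - 0.7) = 1.75, i.e. steeper
# than the identity line, so highlight values there get spread apart.
x = np.array([0.0, 0.7, 0.9, 1.0])   # input levels (0..1)
y = np.array([0.0, 0.6, 0.95, 1.0])  # output levels

def curve(v):
    """Apply the tone curve by linear interpolation between control points."""
    return np.interp(v, x, y)

a, b = 0.75, 0.85                  # two close highlight values
spread_in = b - a                  # 0.10 apart before the curve
spread_out = curve(b) - curve(a)   # 0.175 apart after: more local contrast
```

The same two highlight tones end up 1.75× further apart, which is exactly the extra highlight contrast visible in the second rendition.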
OK, I did a quick run on your image:
first, a 25% Smart Lighting box on the upper fog line and a box on the tree to establish a range.
then let center-weighted average exposure compensation do its part (-0.58).
ClearView global 15%.
Highlights -28 on both tone and contrast.
Shadows 16 (tone) and 19 (contrast).
local adjustment: a graduated filter from the left corner to the edge of the ridge.
finally, a tone curve correction at the top and bottom.
test raw DSC_0326.NEF.dop (6,9 KB)
I ended up with space on the left and right of the histogram.
It’s a bit “saturated” and dramatic, but I would be happy with this shot, because any more “detail extraction” would screw the facts to a pulp and you would get an unrealistic, HDR kind of image:
Detail Extractor from Nik Color Efex Pro 4, v1, at 31%.
Of those images, I like the first one of both the PL and LR edits. I think those overly sharp contrasts take away depth in the images. Moreover, there is a halo on the ridge. The ACR version seems brighter, too.
If all channels reach their maximum values, no algorithm will help, short of some artificial intelligence :). The question is how algorithms treat the situation when, for example, only one or two channels are saturated. What can be recovered from this residual data? You can assume there is nothing there, or you can try to recover something, based on the surrounding areas and the residual data in the one or two unclipped channels. I have the impression that LR copes better (and more easily) in such “extreme” situations.
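As a rough illustration of the second approach (purely my own sketch, not how LR or PL actually implement it): where two channels are clipped but one still varies, you can scale the surviving channel by the colour ratio measured in nearby unclipped pixels and rebuild some structure from it.

```python
import numpy as np

CLIP = 14335  # example sensor saturation level from this thread

def recover_channel(clipped, reference, clip=CLIP):
    """Rebuild `clipped` where it is saturated, scaling the unclipped
    `reference` channel by the ratio observed in the unclipped region.
    A toy method; real converters are far more sophisticated."""
    mask = clipped >= clip
    if not mask.any() or mask.all():
        return clipped.astype(float)
    ratio = np.mean(clipped[~mask] / reference[~mask])
    out = clipped.astype(float)
    out[mask] = reference[mask] * ratio  # re-introduce detail from reference
    return out

# Synthetic example: red carries a gradient, green clips at the top of it.
r = np.linspace(10000, 16000, 8)
g = np.clip(r * 0.9, None, CLIP)     # last value (14400) clips to 14335
g_rec = recover_channel(g, r)        # clipped value restored to ~14400
```

The recovered green channel varies again where the raw value was stuck at 14335, which is the kind of residual detail I mean.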
I already have quite a lot of experience with PL and I really like it, but I am still convinced that highlight recovery in LR is simply much easier. Getting similarly natural-looking results in PL requires a lot more work and more adjustments.
Peter, thanks for being the first to give this a try, but I guess I have been as clear as, well, the fog in my photo about what the assignment was.
I posted this image as a “highlight recovery” test, not as a request to create a pretty photo. I already have my own interpretation of this shot and it is not any of the images I posted. That shot and your work (sorry) are irrelevant.
The assignment was to compare highlight recovery in PL to other programs. Several people have claimed that they can recover their highlight details with just one or two sliders in LR (@geno, @mrihooper1, @nemo). OK, let’s see what happens when they’re presented with a tough highlight recovery problem.
Ideally, I would have created a synthetic RAW image. Software for doing this doesn’t exist, at least for the general public, so I chose to use an existing image. Yes, the sky is totally blown, but just below it is an area with a lot of detail that, by default, looks like pure white (or gray or whatever).
For my camera, the sensor appears to max out at 14335. Forget the image as a whole and focus on just the area below (if I could crop a RAW image and still have a RAW image, I would have posted just the cropped area).
Note the circled area. The red channel is not maxed out, but the other channels are. Moving the cursor near the circled area, I find again that only the red channel has any information, but it does have information—this is pretty much the definition of highlight detail. Can your program (a program other than PL—I’ve already shown what PL can do) recover this detail?
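To make the test concrete, here is a small sketch of the per-channel clipping check I did by hand, with synthetic values standing in for the circled area (in a real workflow you would read the raw data with a library such as rawpy; that choice is my assumption, the logic is the same either way):

```python
import numpy as np

CLIP = 14335  # observed saturation level for this sensor

# Two synthetic pixels: in the first, only red is below saturation
# (the circled-area case); in the second, all three channels are blown.
patch = np.array([
    [12000, CLIP, CLIP],
    [CLIP,  CLIP, CLIP],
])

clipped = patch >= CLIP
n_clipped = clipped.sum(axis=-1)  # number of clipped channels per pixel
recoverable = n_clipped < 3       # some real signal survives in the pixel
```

Pixels where `recoverable` is true still hold information in at least one channel; that is pretty much the definition of recoverable highlight detail in this test.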
Admittedly, in a synthetic RAW image, I could have ensured that the information wasn’t just in the red channel. But @nemo claims that LR actually works better than PL in these situations. OK, @nemo, here’s your chance to prove it!
If you look at the ACR versions, you can see that none of them managed to recover this level of detail. The latest LR may be better.
Since people don’t seem to understand my request, I magnified the area where the circle appears and annotated some of the detail that appears there:
There is a distant hill that is just at the edge of clipping. Then there is a bit of fog that, surprisingly, is still not clipped (it is very close). Then the bottom of the same hill shows up. In the foreground, there is a nearer hill. At the base of this hill, we see a brighter area that comes from the waters of the Columbia River meeting the land.
If you go back and look at my best ACR rendition of this same area, you can barely see the nearer hill. The more distant hill doesn’t appear, nor can we make out the whitecaps hitting the land.
The shot, by the way, was taken in the Columbia Gorge in Oregon, USA.
I approached this image as an exercise in getting as much detail out of the landscape as possible, using the tools I have.
Detail and exposure in the foreground and in the middle of the valley. The fog was either a muted greyish white or a blazing white. Any detail extraction and correction caused artifacts and blotchy pixels.
The histogram was cut off on the left and right sides, so detail in the fog at the top is nearly impossible. So this image was the most “natural enough”-looking result I could get without hours of trying.
I can give it a try on the same one in Silkypix 5 Pro, which has a nice highlight tool.
I’ll use your point of interest as the particular spot to target, although I don’t think it will give a clear view of the hill.
The problem with fog is that it doesn’t have much detail in itself, bright or not. So no program can get anything out of that foggy white blur, I’m afraid. So I concentrated on getting as much image information (“seeing things”) as possible in the whole image without creating a monstrously overcooked image.
This image was used a few years back in another “test”, also a highlight test, for several applications.
Maybe this will do as a benchmark, too.
Sorry, but I can’t work out what you mean. Shadows +25? Where do I set “image balance”?
No. I just suppose.
I do not know what algorithms are used by either program. I’m not sure whether they’re trying to reconstruct/rebuild or not. It probably is, and will remain, a mystery to us. But on the basis of many cases, I can only conclude that LR produces better results in this respect, and that they are certainly much faster and easier to achieve. Overall, I can achieve similar results in both programs, but in PL it costs me much more effort. And that’s it. This only applies to highlight recovery. In general, I find PL much better than LR.
Selective Tone, Shadows +25