Off-Topic - advice, experiences and examples, for images that will be processed in PhotoLab

A lot of halos. Besides, I think darkening the background is counterproductive to what you wanted to gain.

George

I see what you did, and it may be a more “pretty” photograph this way, but the two workers no longer stand out so much, which I thought was the most important part of the image. The way you changed the sky is fascinating to me - something for me to remember in the future.

I see things that I don’t like - I guess “halos” is the right word, and the clouds at the bottom are distracting too. Something about the two workers seems unnatural, but now that I’ve seen it, I realize it was in the original image too, only not so much. The worker on the right has a white “halo?” around his shirt, which is also present in the original image, only not as strong.

Hmm, at first I had mixed thoughts, but I now realize in every way, your version is better, and it does look more “crane” than mine.

It’s frustrating to me that something that now seems so “obvious” was anything but obvious when I was working on it. I may never have thought of your local adjustment changes, but I should have given the cropping more thought, even though I thought I had it right. At least nobody is likely to see this as “photojournalism”, so that’s good. And @Joanna will smile because it was done with my “best” camera.

…added later - I wanted to see if the image would be stronger if I added the top part of the crane, which is the full image as shot by my camera, but while it gains more detail, the “impact” is less. I think @Wolfgang’s version is still the best. In the full version, the workers barely show up, and I now think that is the heart of the photo. Will copy it here anyway, but I’ve given up on that idea.

This thread has been far too quiet, and I have an update to the AI discussions. I used to use ChatGPT, and another app for creating images, which did a lousy job on sailing ships - the software had no idea what a sailing ship should look like. Now, Microsoft has added an image creation tool to their “Bing” and “Edge” browsers. Late at night I read about it and updated my Microsoft browsers. Apparently the “Dall-E” software now works with the browser. So, wanting to go to sleep, I gave it a test: “create a picture of a family of sandhill cranes walking around in New York City.”

I’m not sure how realistic some parts of the resulting image are, but the larger sandhill crane that Dall-E created looks just as realistic as my photos of Sandhill Cranes, maybe more so. The more I enlarged the image, the better it looked. It’s almost scary to think of what the software will be capable of as it continues to improve… I didn’t tell the software to make the background gradually go out of focus - it did that on its own.

As for me, my Leica M10 is getting dusty, my M8.2 was repaired and updated by Leica, my Fuji is often my walk-around camera when I’m not planning on photographing something, and I’ve bonded nicely with my D780. I don’t have any images good enough to post here, so I haven’t.

I’ve also started to use the “stars” in PhotoLab as a grading system, to remind me in the future which images I liked the most. I need to ask sometime if PhotoLab has a function like Lightroom, where I can view all of my “five-star” photos saved on my computer.

Hope all of you are doing well - on the one hand, I miss all the discussions, but I haven’t been taking enough photos lately worth discussing…

I’m afraid AI is getting very frightening. One thing I read the other day is that they are starting to use live cells in robotics, and it seems they are able to reproduce.

It reminds me of my first computer, called a packet communicator, which they called PacCom for short. It was all hexadecimal, with a hexadecimal keypad and readout. I remember thinking at the time that the hexadecimal numbers were like our brain cells, storing information.
The more information it stores, the more it can start to think for itself.

Point of pedantry…

Edge is a browser, but Bing is not a browser. Bing is just a search engine; you can use it from any browser, just like you can use Google or any other search engine from any browser.

Thanks for the correction. Until now, I had only been using Safari, but I wanted to check out the image tools, so I installed Microsoft Edge, and as you say, the search engine I have been using there is Bing. When Dall-E gets smarter and starts creating images at the same resolution as our cameras, the finished image may no longer be distinguishable from an image from a camera.

If you look at the image I just posted, Dall-E is already thinking for itself. I didn’t ask for birds flying in the sky, or the little pigeon (?) on the ground behind the Sandhill cranes. It decided to apply the depth-of-field effect on its own. I doubt the location is a real street - I think it just created one. Looking at the head of the large bird, it looks as realistic to me as my Sandhill Crane photos.

So Edge is the browser.
Bing is the search engine.
Dall-E is the image software.

If I email this image to people I know, they are going to assume I captured it with my camera.

I agree about AI being a combination of exciting and scary. It’s getting better, too - compare this image with the images of ships I posted not all that long ago.

I found that ChatGPT understands the functions in PhotoLab 4. No idea if it can emulate them yet.

I very recently found a YouTube video that “fits” me, not that I’m good enough to create such beautiful work, but it rang a bell for me. One camera, no viewing screen, and a few lenses.

He says he doesn’t do much post-processing. Everything is done in the camera.

Absolutely, totally awful!!!

This has nothing at all to do with photography, let alone DxO’s software.

Photography

Drawing with light

This image was not drawn with light, it is more akin to a painting, but using electrons instead of paint.

And why on earth would you compromise your computer’s security by signing in to a Google or Microsoft account?

You have got to be joking. The “depth of field” is so obviously fake. Real blur doesn’t look or behave anything like that. And cranes don’t grow as tall as the perspective would suggest.

Once again, this is nonsense. AI does not understand as we do - it simply conducts thousands of web searches for matching terms but cannot come up with the same kind of definitive or nuanced description of functionality. This is as delusional as stating that Leica cameras take better pictures. Talking of which…

The man’s got more money than sense. Any camera can do what he uses a Leica for, it just happens that he is a good photographer who has bought a Leica and is paid by them to promote their brand to the gullible.

But, amongst the publicity, he does say something very important - he works slowly


I use a Nikon D850 with 28-300mm lens

  • in manual mode
  • ISO 16,000
  • manual exposure, set once for the whole session
  • without looking at the rear screen
  • B&W conversion (Fuji Acros 100) and minimal adjustments in PhotoLab

Sunday evening, Bluegrass music festival in a poorly lit barn behind a country bar…


Wouldn’t have thought there was a big bluegrass following in northern France, TIL differently :grinning:

Central Brittany has been “invaded” by quite a few British or other anglophones. With a bit of effort, you can avoid speaking French at all :crazy_face:

Here at home, on the north coast, we speak French almost every day.


British playing banjo? Rather unusual, no? :banjo: :notes:

If anything is/was faulty, it was/is me, not Dall-E.
I only asked Dall-E to “create a picture of a family of sandhill cranes walking around in New York City.”

I didn’t ask it to “use the style of Ansel Adams” or another photographer.
I didn’t ask it to simulate a photograph - just a picture.
I left it wide open to come up with something interesting.

I don’t know if “thinking” is the right word, but I feel it used a lot of computer imagination to create what it did, regardless of the technical errors you noted. It did exactly what I wanted, and while I hesitate to use the word “imagination”, I can’t think of a better word.

ChatGPT 4 seems to understand PhotoLab 4. I don’t know yet if Dall-E can do a lot of things. I find it enjoyable to work with it and learn how it does things, and I can improve the questions I ask based on what I learn.

Honestly, I don’t think the image it created is awful. I like it, despite the limitations. You could give me all the cameras, and computers, and processing time I wanted, and I never would have come up with such a nice reply to my question… I think it uses enough “tricks” to create an effective image. The details can be improved by my asking a better question.

It’s only been a few months now, but if we revisit this discussion in a few years, I think the software will have caught up with what can be done now - and maybe done better. AI is growing at such a rate that anything may be possible.

I’m not so sure about that - a gazillion web searches would not have found an image like what Dall-E can create. It does far more than searching, and they are constantly improving it. Let’s test it - you suggest a subject that I can submit to Dall-E, and we’ll see how well it does.

Ken Rockwell, and others, have demonstrated that even the simplest cameras, in the right hands, can create superb images. I agree there is no way to say that “a Leica takes better pictures”, but that doesn’t mean the Leica design can’t sometimes allow photographers to capture better pictures. If you watched the entire video, he described why the Leica works better FOR HIM. :slight_smile:

While I haven’t used my Leica cameras in a month or two, I do understand where he is coming from, and I accept that many people will consider it “rubbish”. I agree with what works best for you, and I agree with what works best for him. I suspect that if you spent a year with only the M system, you might feel differently. As for me, thanks to everyone here, I can create similar images now from any of my cameras, but doing so FEELS very different based on which camera is in my hands.

Looking at the five photos you posted, and setting aside the difficult lighting, I enjoy all of them, but I find the two photos with the hats blocking out the details in the shadows annoying. I like to see eyes.

Did you just not look at the rear screen, or did you have the review setting turned off? Beautiful photos, and the last one is my favorite - so much to see. I’m guessing they were thrilled when you sent them copies of your photos.

Thanks again - I enjoy it when you not only post some of your photos here, but explain the settings, and why you used what you did.

The world is rapidly changing, with AI being incorporated into the browser and related software. I installed the latest Microsoft software, so I can find out for myself what is going on. Google has its own software for AI, but I haven’t tried it yet.

Please elaborate. My computers have been connected to Microsoft and Google “forever”, and for many years now, connected to Apple. My iPhone is equally connected. Without that, my computers would be expensive paperweights…

And French playing the bagpipe. :grin:

George

Bagpipes are traditionally used in Spain, Brittany, Ireland and Scotland, more or less all over Europe, but banjos are of American origin.

In Spain, that’s Galicia, if I’m right.
Just one bagpipe more, and then back to Mike.
The daily sounding of the Last Post in Ieper, Belgium, in remembrance of the Great War. Ieper was completely destroyed but was never occupied by the Germans, I believe. Since the 1920s they have sounded the Last Post every day under the Menin Gate. Not during WW2, of course.

George

Joanna, I very much like your approach - it looks fantastic. I’m learning from you, and downloaded the .nef and .dop. How did you manage to correct the perspective? Did you use the parallel-lines tool, or which one? And how can one paint the mask with the retouch tool so precisely? Is there a trick, or just a steady mouse hand? Thanks in advance.

I don’t agree with @Joanna when she says, “…compromise your computer’s security…”. To me that conjures up images of viruses and the like being able to reach your PC because you are logged in, but that’s not how malware reaches your machine. The ‘problem’ with logging in to Google, MS, Facebook, Twitter, etc. is that their T&Cs allow them to harvest vast amounts of data about your activities, which allows them to ‘profile’ you and sell that profile to ad companies.

Yes, that was all that was needed.

Just zoom in to the affected area, which allows you to use larger brushes; it gives you far more control than trying to fiddle around with tiny brushes on the full image.

To me, allowing the likes of Google and MS to harvest my data is just as much a threat to my security as any virus.


Perspective correction - once I learned in this forum about the importance of using my camera’s built-in tools to warn me if the camera isn’t level, I do (did) that a lot more, moving forwards and backwards as needed. As for the pigeon, I screwed up - I should have obliterated it, but my mind was otherwise occupied. One thing I’ve learned from @Joanna is to try to eliminate anything and everything I’d otherwise be yelled at for, but to be honest, my mind was elsewhere, and the pigeon problem was invisible to me. I’m still doing things incorrectly - 99% of the time I use the two-vertical-lines tool, but Joanna used the tool that corrects both vertical and horizontal. I should have learned my lesson.

Harvesting my data… If Google and Microsoft want to harvest my data, that’s part of the price I pay for using their software as much as I do. I have no desire to change from GMAIL as my main mail system, and I have a lot of Microsoft tools as well. I would like to update my Microsoft Outlook, but that means switching to Microsoft 365 or whatever it is. I use Microsoft Word and Excel all the time. If I was younger, maybe I would switch to Linux… but I dumped most of my stuff in favor of Apple software.

AI - several well-known image processors use AI in their software. DxO doesn’t claim to, but DxO’s noise reduction seems to me to really be an AI tool. To keep up with the competition, I think DxO and PhotoLab are going to incorporate a lot more AI. Based on this discussion alone, why should I be guessing as I carefully use the image-straightening tools, when the software could see what I’m doing and do it better than I can manually? In a way, the camera correction and lens correction tools already feel like AI to me.

Let’s stay with you for a minute, I’d rather listen than talk. How did you get such a perfect photo, just walking out in front like you did, and getting an absolutely perfect image!!?? Everything about that image just “works”! I wouldn’t have had the guts to walk out in front of the parade like you did, but it sure did work big-time for you. Perfect timing, perfect cropping, perfection. Lovely!

Your photo of the street player with the bagpipes reminds me of Lincoln Road, here in Miami Beach. In the evenings, people come out to play, with a dish in front of them for donations. I used to take photos of them, but haven’t done so in a long time now.