Unwanted virtual copies

Someone had suggested sorting by virtual copy number, and that works like a charm as it puts them all together so they can be removed in one action. Now if only I could view all subfolders together, it would be even easier!

I didn’t know it did this - but it makes sense; I guess it’s built-in protection against a user attempting to open multiple concurrent instances of PL on the one database.

That would be a great pity (if you were forced elsewhere) … but I cannot think of any other workaround!

Whilst not exactly “quick”, a convenient way to make this tedious process a little easier is to use the Sort option - by Virtual Copy Number - which groups all VCs together so you can [Click + Shift-Click] to delete them all in one go.

John

Yes, it would be sad. I chose PL5 because I was so impressed with how it rendered the raw images from my Sony RX100 compared to all other options I’ve seen. But I do have to weigh that against this significant usability problem, and it certainly is a big one. Here’s hoping for an eventual real solution, as my desire is to make DxO my choice going forward.


…unless you deliberately use virtual copies.

In this case you might want to cite Mark Watney (“The Martian”) if you realize that DxO added virtual copies to hundreds of images and there is no easy way to get rid of them.


Ah, yes … I forget that some/many(?) users actually use PL within their image storage folders (!) … which is a recipe for sluggish performance, etc.

I operate using a work-in-progress approach;

  • I move new images into my WIP folder, in batches, and work on them there
  • When finished, I move the processed images (along with the RAW source and sidecar files) into storage folders
  • Then I repeat with the next batch, etc.

In fact, @danwdoo - - this might be an effective workaround for your “unwanted VCs” problem.

John M

Divide & Conquer Versus Divide & Complicate:-

The following resulted from a desire to reduce the render (export) time for my photos, which with my current processor (i7 4790K) and graphics card (GTX 1050) falls into the “132+ seconds to process D850 images” column in the Google spreadsheet.

With graphics card prices currently over-inflated an alternative, in my case, would be to use both my machines (both 4790K, one with a 1050Ti 4GB and the other a 1050 2GB) and split the load between them!!!

However, I already have some directories stuck with errors (another forum post needed), caused by mixing PL4 and PL5 processing, and the worst that could happen would be unwanted virtual copies, which have been written about at some length in Unwanted virtual copies (i.e. this post).

EDIT:- I need to retest this in the light of what I have discovered since I ran the original test!

20 RAW files were copied to a directory; the first 10 were accessed by PL5.0.1 on one machine and the other 10 by PL5.0.0 on the other, across the LAN. To be honest, that test appeared to work well and I did not have any issues with virtual copies etc.

But it all went very wrong when I tried to use the ‘Tag’ to differentiate one group from the other and Virtual Copies started popping up!

End EDIT

The problems that I have experienced working between PL4 and PL5 have alerted me to some of the dangers and I wrote the following in a response to @uncoy in Database hell between PL4 and PL5 - #30 by uncoy.

I was also under the impression that the database did not hold an up-to-date copy of the editing data and that the DOP was the sole custodian, but that was a very wrong assumption. Effectively the database is the prime custodian, and PL goes to some lengths to protect the database entry, creating a ‘Virtual Copy’ whenever it is in doubt!

So what have I “discovered” so far:

  1. Changing the Uuid at the end of the DOP does not cause an instant VC.

  2. Changing the Uuid located just after the ‘ShouldProcess…’ field causes an immediate VC. It is also probable that the file timestamps were changed by the software I used to edit the DOP, and that alerted PL to the occurrence of a potential change.

  3. Changing the keywords in the DOP has no effect; PL5 currently appears to ignore the change and quickly changes the DOP back to the original value.

  4. Adding a keyword to the DOP and changing the Uuid (from 2 above) will result in a virtual copy with the new keyword.

  5. The ‘Name’ line in the DOP is not used as a checking mechanism; changing it has no impact, and PL5 will quickly change it back (as part of a DOP update).

  6. I cut the editing from one photo’s DOP and pasted it into another DOP, leaving the top and tail of the “old” DOP intact, and managed to get PL5 to accept the new editing (with no VC) - but using a preset is a lot easier!

I would conjecture that a DOP will only be imported into the database if there is no entry in the database already; otherwise it will come in as a ‘Virtual Copy’. What plans DxO has for the keywords in the DOP I am not sure, but a command that allows for their import would be useful in the event that the DOP is the only data available to a user.
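The conjecture above can be sketched as a small decision function. To be clear, this is only a hypothetical model of the behaviour observed in these tests, not DxO’s actual code, and the field and function names are my own inventions:

```python
# Hypothetical sketch of the conjectured DOP-import rule: a DOP is only
# imported cleanly when there is no database entry, or when its Uuid (the
# one just after 'ShouldProcess') tallies with the database; otherwise PL
# "protects" the database entry and preserves the DOP as a Virtual Copy.

def import_dop(database: dict, photo_id: str, dop_uuid: str, edits: dict) -> str:
    """Return what the conjectured logic does with an incoming DOP."""
    entry = database.get(photo_id)
    if entry is None:
        # No database entry yet: the DOP is absorbed as the Master.
        database[photo_id] = {"master_uuid": dop_uuid, "edits": edits, "vcs": []}
        return "imported as [M]"
    if entry["master_uuid"] == dop_uuid:
        # Uuids tally: the database entry is simply kept in sync.
        entry["edits"] = edits
        return "matched [M], no VC"
    # Mismatch: the database entry stays as [M]; the incoming DOP
    # settings come in as a new Virtual Copy [1], [2], ...
    entry["vcs"].append({"uuid": dop_uuid, "edits": edits})
    return f"VC [{len(entry['vcs'])}] created"
```

On this model, re-reading the same DOP is harmless, but any Uuid mismatch (a hacked DOP, or a DOP written by a second machine’s database) yields exactly the unwanted VCs described in this thread.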

I personally would like more controls to allow virtual copies to be “promoted” to [M]aster status, with the old master either becoming a virtual copy or being replaced entirely. However, such a facility is potentially complicated by keywords. It is possible to copy the metadata from one VC to another, and similarly with all or some of the edits, e.g. from [1] to [M], then delete [1] once [M] is effectively what [1] was. Deleting [M] currently deletes [1], [2] … as well.

Others have complained in the past that PL effectively imports (ingests) a directory as soon as it is selected by the user, and that it would be useful to prevent/control that import process, i.e. have a preference that prohibits automatic import and provides a ‘directory read’ command. In fact, once a PL database has imported a folder, any DOP it encounters that does not tally with the Uuid identified in 2 above will cause PL to “protect” the database entry and create a VC.

As I mentioned above, I encountered this a number of times, and in one (now abandoned) database I have a whole directory that has effectively been “red” flagged and cannot be exported. Unfortunately, the exact path to that particular disaster has gone from my memory. My solution was to save the database and start afresh, but that will not be possible if I start using keywords etc., unless I make sure that all keyword data is updated to the ‘xmp’ sidecar files or the JPGs etc., and/or make sure that I do not use PL5 to assign any keywords!

Currently only a few of my photos are keyworded, and that was done with ExifPro. Though I can now force PL5 to accept such keywords, PL5 won’t export any keywords back to a JPG that has been touched by ExifPro (and until 2018 all my photos were JPGs!). I would need to push the photos through some intermediary program, since I have failed to get DxO to fix what is probably a one-off issue!

I have not yet run a test to see whether PL5 compares keywords in the DOP with those in a sidecar file or embedded in a JPG etc., but since it doesn’t compare them with the database, I suspect not (at least currently) - though I will test that assumption soon.

Resolving my “damaged” directory would require a PL feature that allowed a directory (including subdirectories) to be expunged from the database, or a ‘directory (re)read’ command that would execute even if there are already database entries for the directory contents!

Thanks for the suggestion. The problem is that even simply adding a tag to an image, without any other edits, results in that image producing virtual copies on the other machines. So this workaround won’t help if I actually want to get any real use out of tags, which was one of the main reasons I bought the software.

This is a rather strange and offensive answer: what does the storage location have to do with using virtual copies?

Besides this:

You don’t state what you consider an “image storage folder”. Are you thinking of an unsorted pile of “all scrap ever shot in one folder” or a well-organized date/topic folder structure, in conjunction with a DAM?

And your workflow suggests that you process all new images immediately and finally. That’s not a suitable workflow for everyone. I will spare myself explaining the use cases where it is different.

Offence wasn’t my intention …

Location, per se, has nothing to do with the implications of using VCs … but I certainly do not recommend pointing PL at folders containing “hundreds of images”, because that is bound to cause sluggish performance as PL wades through the process of rendering all the images therein.

Mine’s a well-organized date/topic folder structure. More specifically, I have my images organised by date/topic, along with their corresponding RAW & sidecar files in an immediate sub-folder (I don’t rely on the PL database at all) - and I apply descriptive filenames to each image.

Not at all … It’s a continuous work-in-progress process.

John M

The problem doesn’t result from hundreds of images in one folder. Think about hundreds of images with unwanted virtual copies added, mixed with real, valuable virtual copies, distributed over your directory tree. That’s a real nightmare.


@obetz your point about “legitimate” VCs versus “corruption” VCs is well made.

To summarise my excessively long post above: the “problem” is the Uuid I identified in point 2, and there will be one of those for the ‘Master’ and one for each VC in the DOP. My testing was done by changing fields in a DOP directly and assessing PL5’s reaction. I haven’t tested changing the Uuid of a “legitimate” VC while leaving the [M]aster Uuid intact to see how many VCs I wind up with, but I suspect an additional VC for the original VC!

So if anything happens that results in a mismatch between the Uuids identified in point 2 - the one in the database and the one in the DOP that PL5 finds under the title (photo-id).DOP - PL5 will protect the database entry as [M] and preserve the DOP entry as [1], etc.

Currently there is no option to force a non-matching DOP to replace the database entry, to promote a VC to [M]aster, or to make the database “forget” a directory that is already in it (except by forcing it to forget all directories by abandoning the database).

PS. I am sure the final Uuid is there for a reason, but during my tests PL5 did not seem to react when I changed that field (or so it seemed). @sgospodarenko, is there any guidance you can offer us with respect to this topic?

PPS. I used the output from another test where I had created a VC which was a black-and-white image. I changed the first Uuid after a ‘ShouldProcess’ and saved it, and got a copy of the [M]aster. I then changed the second Uuid after a ‘ShouldProcess’, saved the edited DOP, and got a copy of VC [1].
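The PPS behaviour is consistent with each copy in the DOP - the [M]aster and every VC - carrying its own Uuid, matched against the database independently. Here is a hypothetical sketch under that assumption (again, my own model and naming, not anything from DxO):

```python
# Hypothetical model of per-copy matching: the DOP lists one (uuid, label)
# pair per copy ([M], [1], ...); any pair whose uuid is unknown to the
# database is preserved as an extra Virtual Copy - duplicating whichever
# copy's Uuid was changed.

def reconcile(db_uuids: list[str], dop_copies: list[tuple[str, str]]) -> list[str]:
    """Return descriptions of new VCs created for unmatched DOP copies."""
    known = set(db_uuids)
    new_vcs = []
    for uuid, label in dop_copies:
        if uuid not in known:
            # Mismatch on this copy only: PL keeps the database version
            # and brings the edited DOP copy back in as a new VC.
            new_vcs.append(f"copy of {label}")
    return new_vcs
```

So hacking the Master’s Uuid duplicates [M], hacking the VC’s Uuid duplicates [1], and leaving both intact creates nothing - matching what the two edits in the PPS produced.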

DB versus DOP:-

I cleared the database, navigated to the Tulip images shown above, and took a snapshot of the DOP elements of interest and of the database. The 00 on the ends of the highlighted Uuid fields came from the “hacking” that caused the results above; they were accepted as normal fields when the new PL5 absorbed the DOPs into its empty structures.

I am afraid I do not know where the other Uuid fields come from, nor what they are for, but I do know that if the highlighted ones get out of step with the database then VCs are the result. The ‘StoreSettings’ field holds the edits, either created during an edit session or taken from a DOP, as appropriate.

[Additional material I had tacked onto this post has been removed because the problem I reported appears to be related to the re-use of the same (or very similar DOPs) and needs more investigation. Tests on “real” photos and their DOPs (actually created by DP4) did not exhibit the same behaviour!]


Has PL6 made any impact on this most annoying issue with unwanted virtual copies being generated across machines with a shared photo library?

It’s funny you should ask that!

I answered here and then moved my reply to Unwanted Virtual Copies and Moving DxPL edited images (DOPs) between System - Revisited, rather than continue to add to this topic!