Cache file

If you switch MD sync on, DPL will do whatever it does, without giving you any chance to intervene.

If you want to be in control, switch MD sync off and read or write MD when appropriate.

  • Manually read MD if the image has MD added by another app (e.g. DPL on your other computer)
  • Manually write MD after editing it, if you want that edit to be seen by other apps
  • Decide which edit is more important, if you accidentally edited MD on both computers

I’ll leave syncing off for now and see how that works. If everything is saved to the database it should be ok.

@Riverman I don’t know what platform you are working on, Win 10 or Mac. I thought there was an issue on the Mac where the database location could not be specified; that is possible on Win 10, and it may have changed on the Mac (anybody?).

John-M made a comment with respect to the risk of the creation of Virtual Copies in any such “enterprise”.

The Win 10 Preferences option allows the database file location to be specifically defined, but that does not move the cache files, which will remain in C:\Users\<Username>\AppData\Local\DxO\DxO PhotoLab 5\Cache, where there are two subdirectories, \Previews and \Thumbnails.

In two posts Synchronise PhotoLab - #26 by BHAYT and How to use PhotoLab on multiple Apple Computers - #58 by BHAYT I describe the issues that can create Virtual copies and also how it might be possible to avoid them.

Edits are stored in DOP files and these are essentially copies of data held in the database. If PL5 is being run on 2 systems there will typically be 2 databases, one on system A and the other on system B, and these will effectively point to the files on system A and on system B respectively. If any files are moved from B to A, or vice versa (A to B), along with their respective DOPs, then a Virtual Copy will be created on the receiving system iff (if and only if) that system already has an entry in its database for that file in the given directory and the DOP does not have a unique identifier (Uuid) that matches the receiving database!

If this occurs then the receiving database preserves (protects) its existing data, which is marked as the [M]aster copy, and the incoming DOP is used to create a Virtual Copy [1]. In this way PL5 preserves both sets of data, but there is now the issue of “cleaning up” the database to remove any unwanted Virtual Copies.
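To make that decision concrete, here is a tiny sketch of the behaviour as I understand it from my tests. It is purely illustrative - the data structures and names are mine, not DxO’s internals:

```python
# Purely illustrative sketch of the behaviour described above, based on my
# tests only - the data structures and names are mine, not DxO's.

def reconcile(db_entries, directory, filename, dop_uuid, dop_edits):
    """What the receiving PL5 appears to do with an incoming file + DOP.

    db_entries maps (directory, filename) -> {"uuid": ..., "edits": ..., "vcs": [...]}
    """
    key = (directory, filename)
    entry = db_entries.get(key)

    if entry is None:
        # The database has never seen this file here: the DOP is read in, no VC.
        db_entries[key] = {"uuid": dop_uuid, "edits": dop_edits, "vcs": []}
        return "read DOP, no VC"

    if entry["uuid"] == dop_uuid:
        # Uuids match: database and DOP agree, no VC is needed.
        entry["edits"] = dop_edits
        return "Uuids match, no VC"

    # Entry exists but the Uuid differs: the existing data is protected as the
    # [M]aster and the incoming DOP is preserved as a Virtual Copy [1].
    entry["vcs"].append({"uuid": dop_uuid, "edits": dop_edits})
    return "Uuid mismatch -> Virtual Copy created"
```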

The best way to handle this is to avoid VCs if possible!

Essentially, moving an SSD between systems where the PL5 database, the files, their DOPs and any xmp sidecar files all reside should not cause any VCs to be created. However, to avoid many duplicate entries in the database, the way each system refers to the database and the files should be the same; it is possible to have the same photo in the database many times over providing the copies are held in different directories, and that will happen if System A has the SSD mapped as drive E:\ and System B as F:\. In your case it would be useful to avoid this situation and instead have the same drive letter assigned to the SSD on both systems, e.g. they both “see” the SSD as F:\.

Arguably you cannot relocate the cache files, and theoretically that should not be a problem; just moving the data (database, photos, DOPs, xmp sidecars) should allow rediscovery to take place, or the elements of the cache will age and the new data will be stored. Unfortunately I have seen a situation where PL5 showed an incoming directory with the same thumbnails as a previously deleted directory, but this vanished on restart. If the drive letter is the same on both machines it may well simply re-use items from the cache, albeit there will have been updates on the other system etc. I have not done much work with the cache so I cannot give a definitive response to questions about it.

There is a problem in choosing not to ‘sync’ metadata @platypus. Please note that PL5 will update the embedded xmp data for JPGs (no xmp sidecar will be created) and will create an xmp sidecar for RAWs (it will not update embedded metadata within the RAW files).

The ‘sync’ option effectively causes PL5-assigned metadata and externally assigned metadata to be merged. If that option is not set then metadata can be read into PL5, where it will overwrite any entries in the PL5 database, or written out, where any metadata assigned externally (not in the list handled by PL5) will be lost. Please state in your response how you intend to manage your metadata, e.g. all in PL5, all externally or a mixture of both. There is currently no way to “merge” metadata except using the ‘Preferences’ ‘Sync’ option! I have been requesting such an option for months @sgospodarenko.

Now we come to the potential problems with PL5 itself. If you use ‘Ratings’ there has been a change between PL4 and PL5. In PL4 the ‘Ratings’ data was held in and read from the DOP. In PL5 it is stored in the database and written back as xmp (sidecar or embedded as appropriate) to the photo. It is also written to the DOP but is not read back from there. Hence, you should treat the photo as photo + xmp sidecar at all times.

With the Tag there is a problem explained in one of my posts here PL5 Tag Field not read from .dop file - #93 by BHAYT. Basically the Tag is a piece of PL5 metadata but not xmp related, hence it does not belong in the xmp (embedded or sidecar) data. As such it is stored in the DOP and was read and written successfully in PL4 but in PL5 it is not being read back from PL5 DOPs (this is a bug).

I have included the above two items even though they should not really impact your data move, because the data and the database are essentially staying together; all that should be happening is that a PL5 program on machine A or on machine B opens the database on the attached SSD and uses it as if it is the only machine responsible for that data and database. Please do not run PL5 without the SSD correctly attached because it will try to make a new database!
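As a small safety net against that last point, a launcher script can refuse to start PL5 unless the SSD and its database are actually present. This is just a sketch - the drive letter, database path and PL5 executable path below are assumptions and will need adjusting to your own setup:

```python
# Minimal pre-flight check before launching PL5 - a sketch only.
# The drive letter, database path and executable path are assumptions;
# adjust them to your own setup.
import os
import subprocess
import sys

SSD_ROOT = "T:\\"                                    # the letter the SSD must appear under (assumption)
DATABASE = r"T:\PhotoLab\PhotoLab.db"                # wherever you configured the database to live (assumption)
PL5_EXE = r"C:\Program Files\DxO\DxO PhotoLab 5\DxO.PhotoLab.exe"  # check your own install path

if not os.path.isdir(SSD_ROOT):
    sys.exit("SSD is not mounted as T: - not starting PL5 (it would try to make a new database).")
if not os.path.isfile(DATABASE):
    sys.exit(f"Database not found at {DATABASE} - not starting PL5.")

subprocess.run([PL5_EXE], check=False)               # all present and correct: launch PL5
```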

It is possible that I have missed something; please update the above @sgospodarenko.


On a Mac, the database can be moved or copied to a different location (when DPL is not running) and the respective preferences file edited to match the new location. It has worked in a test, but I’d not expect this mod to be sustainable under all conditions.

Manually reading/writing/syncing sidecars is not easy and can’t possibly prevent unwanted results, especially with the current implementation of what info is transferred by DPL. It also takes some concentration to do the right transfers. I ran a few tests which mostly worked as expected, but I’ve not run systematic repetitions and have not completely documented the steps because I’m still managing my images with my SPOD, which is Lightroom. I’ll stick to Lr until DPL gets proper DB management. Maybe we’ll get there in 2-3 years.

@platypus Thank you for your update. I reran a previous test which is similar but not identical to the @Riverman proposal: I made an update to a photo on my Main machine (A), closed down, then opened on my Test machine (B) and changed the database location to point to the database on A, i.e. mapping the drive containing the database and data across the LAN.

Closed and restarted PL5 on B, found the update I had made on A, and shut down. Opened on A, applied a preset to the next photo and shut down on A. Opened on B and there were Virtual Copies for both photos, with the first showing the same thumbnail as the preset applied on A originally, but the second photo showed the original as [M]aster and the version with the preset applied as [1].

I had this before and cannot understand why this happens when effectively only one database is responsible for the management of Uuids. I will repeat this tomorrow (sorry later today) with cleared databases and a small test directory and track Uuids in the database and DOPs to see what is happening.
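For anyone wanting to do the same kind of digging: the PL5 database is a SQLite file, so with PL5 closed a copy of it can be inspected read-only with Python’s built-in sqlite3 module. I am deliberately not assuming any of DxO’s table names here, just listing what is in the file; the path is an assumption:

```python
# Sketch: inspect a *copy* of the PL5 database read-only, without assuming
# anything about DxO's schema - just list the tables and their row counts.
import sqlite3
from pathlib import Path

db_copy = Path(r"C:\Temp\PhotoLab_copy.db")          # path is an assumption - use a copy!

con = sqlite3.connect(f"{db_copy.as_uri()}?mode=ro", uri=True)   # open read-only
cur = con.cursor()

cur.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")
for (name,) in cur.fetchall():
    rows = cur.execute(f'SELECT COUNT(*) FROM "{name}"').fetchone()[0]
    print(f"{name}: {rows} rows")

con.close()
```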

Bye for now

@platypus @Riverman I have abandoned the test that caused VCs, i.e. B sharing A’s database and photos, and will investigate that later.

Instead I tested the scenario that @Riverman really wants to run, i.e. transferring the database and the photos and “baggage” (DOP etc) on an SSD between two systems.

Disclaimer: I don’t work for DxO nor do I know or understand the working of PL5 beyond what a lot of tests and results have taught me (providing I interpreted the results correctly - of course). I did nothing special with the Cache because there is no way that I could! The tests were reasonably short and sweet so I would suggest that you repeat them with larger quantities of data just to be sure but it appears that what you want to do works O.K.

Test scenario:

  1. 2 x Win 10 systems: my Main machine on PL5.1.3 and my Test machine on PL5.1.2
  2. Re-formatted an old SSD connected via a Sabrent SATA cable
  3. On the main system I used Zentimo to configure the SSD to always connect as T:
  4. Added photos to T:
  5. Opened PL5, navigated to T: and configured PL5 to use a database on T:
  6. Restarted PL5, checked the database was open on T:, and made edits to the 1st and last of 5 photos.
  7. Closed PL5, stopped T: in Zentimo and disconnected the Sata SSD
  8. Connected the Sata SSD to USB on the Test machine
  9. Set the drive to be T: on the Test machine with Zentimo
  10. Opened PL5, navigated to T: and configured the database to be on T:
  11. Closed and restarted PL5 and navigated to T:
  12. Repeated editing on Test then Main a number of times; all edits visible and no VCs.
  13. Started PL5 on Test while the Sata SSD was connected to Main: PL5 (on Test) defaulted to the default database location, and that was shown in the ‘Preferences’ database location!

Please Note: All edits were made to make the photos stand out in the thumbnails, not to “improve” their appearance!!

Essentially it works fine, but how you set the fixed T: (or whatever) without Zentimo I do not know; Zentimo is also good for forcing the closure of attached devices. It is using features in the operating system, so other utilities or the commands in Win 10 will be fine (I have been using Zentimo for a long, long time and have forgotten how to do it “natively”!). If you are using a Mac then… there will be an equivalent, but I do not know what it would be.

The testing was not without incident: I had PL5 dump on my Main machine, and on restarting PL5 said the database was corrupt and it was going to fix it! During the final test of B sharing A’s database and photos I had PL5 open on both machines at the same time (unintentionally at first, but every time PL5 on A (Main) updated DOPs, new VCs appeared on B (Test)). So I suspect the dump and corrupt database came from that situation.

After that everything went as @Riverman hoped for a number of disciplined transfers between the two machines!

First and last edited on Main, second and 1st from last edited on Test, T: back on Main machine:-

In Windows, right-click the Start-button and select the Disk Management utility … then right-click the storage device that you want assigned to T: … and select Change Drive Letter.
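If you would rather script it than use the Disk Management GUI, something along these lines should do the same job - just a sketch: it needs an elevated (Administrator) prompt, and the letters E: (current) and T: (wanted) are only examples:

```python
# Sketch only: re-letter a volume via PowerShell's Storage module.
# Run from an elevated prompt; E: (current letter) and T: (wanted letter)
# are examples - substitute your own.
import subprocess

subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Set-Partition -DriveLetter E -NewDriveLetter T"],
    check=True,
)
```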


May I ask for the short version, Bryan;

Were you able to have two machines working with the same/shared database and same/shared file-system without PL reacting by creating Virtual Copies to resolve (what it deems to be) duplicate sets of corrections?

And, were you generating sidecar/.dop files in the process?

John M

@John-M, @platypus, @Joanna, @sgospodarenko
Shared directories in a shared database do not work: the PL5 db stores them as separate entries, but the DOPs then overwrite each other and … chaos and an abundance of Virtual Copies.

The tortuous workings out that led to this “obvious” conclusion.

I have always thought that the test where A (main) hosts the Photos, DOPs etc. and the database and B(Test) accesses the Photos etc. across the LAN and is configured to use the same database as A should work. I have tested this previously and then again last night/early this morning and got Virtual Copies.

Hence, in answer to your question: both machines are reading and writing DOPs, and after the initial parts of the test I start getting Virtual Copies.

I need to test again with multiple snapshots to make sure both machines are “reading from the same hymn sheet”.

Then I need to see why there appear to be “clashes” between the database and the DOP (other than the obvious - that I have not got the setup right), particularly in light of the Sata SSD test, which appears to work. Other than a BHT error, the main difference is the complete separation of the two environments, i.e. T: is attached either to A or to B but never to both, whereas in the LAN test B can still see A - but why would that make any difference?

Thinking about it ----- the Sata test has both A and B seeing the drive as T:, whereas the LAN test has one seeing the drive as F: and the other mapped as V:, but … I need to attach the Sata SSD to A, share it across the LAN and configure both machines to access the database on T:, then repeat the tests, starting with the T: photos in their current state with DOPs and then adding another directory with just photos, checking out as many scenarios as come to mind, snapshotting all the time and making sure only one PL5 is active at a time.

If the tests fail I may have to upgrade T to 5.1.3 but I will do that only as and when!

I actually ran the test below and now believe that there are two sets of entries being stored in the database, with both writing back over the same DOP!

Hence, Virtual Copies are inevitable: PL(A) and PL(B) are sharing the same database but not the same entries for the “same” files, yet they are overwriting each other’s DOPs, and therein lies chaos!
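A toy illustration of what I believe is going on (the structures are mine, not DxO’s): the database keys each image by its full path, so the same physical file reached two different ways gets two independent entries with two different Uuids, yet there is only one DOP on disk for both of them to write to:

```python
# Toy illustration (my own structures, not DxO's) of why sharing one database
# over the LAN still ends in Virtual Copies: the same physical file is keyed
# under two different paths, so it gets two entries with two different Uuids,
# but there is only ONE DOP file on disk for both of them to write to.
import uuid

db = {}

def discover(path):
    """Each machine 'discovers' the file under its own view of the path."""
    db[path] = {"uuid": str(uuid.uuid4())}            # new entry, new Uuid

discover(r"T:\Photos\IMG_0001.CR2")                   # machine A, local drive
discover(r"\\MAIN\T\Photos\IMG_0001.CR2")             # machine B, same file via the LAN

# Whichever machine writes last stamps its Uuid into the single DOP on disk...
dop_on_disk = {"uuid": db[r"\\MAIN\T\Photos\IMG_0001.CR2"]["uuid"]}

# ...so the other entry no longer matches the DOP, and a VC is the result.
print(dop_on_disk["uuid"] == db[r"T:\Photos\IMG_0001.CR2"]["uuid"])   # False -> Virtual Copy
```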

The chaos on PL5(B) with the directory from the original test.

Test Scenario:-

  1. Configured sharing on T: attached to A
  2. Re-configured PL5(B), i.e. the database location on B, to be the database on T: on A
  3. Restarted PL5 on B and navigated to the photos on T: (mapped to T: on A)
  4. Navigated to folder and all O.K.
  5. Changed an edit on first photo and closed PL5(B).
  6. Opened PL5(A), checked using database on T:
  7. Instant VCs on all photos in the test directory: the first photo shows the original as [M] and the change (from B) as [1]. The others have [M] and [1] the same.
  8. Set up more test directories on T: with only photos present
  9. Opened and closed PL5(A), no DOPs created
  10. Opened PL5(A) and forced DOPs on all photos and closed on A.
  11. Opened PL5(B): chaos with the test directory from the above test.
  12. Navigated to new directory and forced DOP export.
  13. Closed PL5(B).
  14. The DOPs do not match so VCs are assured!
  15. The database shows entries for the drive being ‘local’ or ‘network’!!

Many thanks for the “executive summary”, Bryan :grinning:

That confirms reports by others I’ve seen on problems encountered when attempting to share databases amongst multiple devices.

So, conclusion for @riverman; you could run multiple devices against the same set of images - BUT, you would need to delete the database before each invocation of PL - and use sidecar/.dop files to retain your correction details … (This is my approach, even tho I don’t use multiple devices.)

The downside is that you would lose ability to retain keywords - and you cannot define Projects … as these details are not held in sidecar/.dop files.
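For anyone who wants to automate that, a sketch of a “clean start” launcher is below - the database folder and PL5 executable path are assumptions, and note that it really does delete the database (so anything not held in the sidecar/.dop files, such as keywords and Projects, is gone):

```python
# Sketch only: delete the PL5 database before each launch so corrections are
# taken purely from the sidecar/.dop files. Both paths are assumptions -
# point DB_FOLDER at wherever your Preferences say the database lives.
import shutil
import subprocess
from pathlib import Path

DB_FOLDER = Path(r"C:\PhotoLabDatabase")             # assumed database folder - check Preferences
PL5_EXE = r"C:\Program Files\DxO\DxO PhotoLab 5\DxO.PhotoLab.exe"  # assumed install path

if DB_FOLDER.exists():
    shutil.rmtree(DB_FOLDER)                         # remove the old database files entirely
DB_FOLDER.mkdir(parents=True, exist_ok=True)         # PL5 will create a fresh database here

subprocess.run([PL5_EXE], check=False)
```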

John M

The answer is no, deletion is not necessary, according to my tests!

The @Riverman proposal is to carry the database and photos etc. from system to system, not reach across a LAN or copy files etc.

So I carried out a lot of tests for “How to use PhotoLab on multiple Apple computers”, wrote a procedure for synchronising between computers based on ideas others use about deleting and replacing images documented at Synchronise PhotoLab - #26 by BHAYT.

From the tests I have carried out (standard Disclaimer at this point)

  1. Copy files and DOPs etc. from one PL5 system to another where the directory/file combination is unknown - O.K.

  2. Copy files and DOPs etc. from one PL5 system to another where the directory/file combination is known - Virtual Copies

  3. Open a database and files across the LAN and attempt to edit different sets of directory/file combinations on each machine accessing separate combinations but never simultaneously on more than one machine - O.K. (I surmise but haven’t tested)

  4. Open a database and files across the LAN and attempt to edit the same sets of directory/file combinations on each machine accessing the combination but never simultaneously on more than one machine - Virtual copies (Tested above and in the multiple machine Topic)

  5. Accessing a database and files on a NAS - failed with PL5 permissions issues with my DS220J NAS

  6. Accessing a database and files on a third machine from two other machines, i.e. the third machine is acting as a NAS but without the permissions issue - O.K. but, following my results from the test above in this topic, I need to check under what exact circumstances that appeared to be the case!

  7. Using a delete-before-add strategy used by another forum user posting to the multiple machine topic, and then a related strategy using directory name changing, documented by me (reference above in this post) - O.K.

  8. Carrying the database and files from machine to machine on an external drive (e.g. a Sata SSD) - O.K. in my tests above in a post in this topic, but with each machine mapping the external SSD to the same drive letter - needs a bit more investigation, i.e. check the database to make sure that the test is not “faking good”, what happens if the drive letters on the two machines are not the same, etc. (I can guess at the outcome but I have been caught out before!!)

PS Please be aware of a bug, reported by others and recently retested by me, where the ‘Tag’ (accept/reject/untagged - green/red/grey), which is metadata unique to DxPL, is not being read back from PL5 DOPs; documented here: PL5 Tag Field not read from .dop file - #93 by BHAYT.

Based on your findings, Bryan - if the db is not deleted then won’t PL generate VCs, as its reaction to resolving duplicate sets of corrections? … That’s what I understood you to be confirming - no (?)

John

You are absolutely correct if you have a duplicate set of data which is actually duplicake, i.e. if any situation arises where the same file in the same directory is found/presented/navigated to etc. and the Uuid does not match between the DOP and the database, then both copies are kept by virtue of the Virtual Copy mechanism.

So all my tests, my summary above and my posts elsewhere have been about avoiding this situation at all costs because, as you rightly say, when it occurs there will be VCs - but only in the replicake situation, where the k represents a non-matching Uuid (and potentially a “shed load” of changed edits elsewhere in the DOP).

But people are keen to be able to take images out into the field to edit during their down time, and also to take new images and edit those, then bring it all back home, merge everything together, take them out into the field on their next trip and … repeat the cycle. Is any of that possible? The answer is yes, “sort of”, and the “sort of” is the result of my tests summarised in the post above.

Your approach of deleting the database is perfectly acceptable if PL5 is only used for RAW editing when the DOPs hold all you need.

But are there any alternative ways? The answer is maybe, if you can live with the workflows I have tested and found to be O.K. - iff (if and only if) you can trust my tests and my testing procedures.

If not (particularly if the alternatives make it too easy to make a mistake) then live with VCs, or delete your database if that suits your and potentially other PL5 users’ workflows.

Please note there is always the database backup command - start each “merge” cycle by backing up the receiving database. If it all goes wrong then use PL5 to import the backup (Restore) and do it all again! …and again…and again… until you get it right or decide to change the strategy.
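If you would rather script that backup step than use the built-in command, a sketch like this keeps a timestamped copy you can fall back to - the paths are assumptions, and it should only be run with PL5 closed:

```python
# Sketch: timestamped backup of the PL5 database before a "merge" cycle.
# Run with PL5 closed; both paths are assumptions - adjust to your setup.
import shutil
from datetime import datetime
from pathlib import Path

DATABASE = Path(r"T:\PhotoLab\PhotoLab.db")          # the receiving database (assumption)
BACKUPS = Path(r"T:\PhotoLab\Backups")               # where the copies should go (assumption)

BACKUPS.mkdir(parents=True, exist_ok=True)
stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
target = BACKUPS / f"PhotoLab_{stamp}.db"
shutil.copy2(DATABASE, target)                       # plain file copy is fine while PL5 is closed
print(f"Database backed up to {target}")
```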

I think this is what I do. On my laptop I keep the current and last 3 years’ photos, with everything also kept on my desktop. When away I edit (not using DeepPrime as that’s hopeless on the laptop) and use Photo Supreme. On return I copy over the new images and do any final editing. Photo Supreme adds any new keywords etc. when I add them to it, and PL is no problem. I keep all the images, including any changes to those from the laptop, up to date between the two when at home. I do not have any projects and use Photo Supreme as my DAM. So far the threatened permanent history hasn’t happened on Windows, so I don’t know what it might do to this system. But I have never had a problem, though clearly I am not using one database but two, and even then I delete them every few weeks anyway as they slow up PL loading and are of no use to me, since my system relies on the DOPs and I am not using PL as a DAM.

Yes - that’s the simplest and safest approach … assuming one has no interest in keywords and the concept of projects (neither of which is held in the sidecar/.dop files).

John

I think projects can be created in Photo Supreme and copied over between PCs along with keywords via the xmp files (PL reads the keywords and geo tagging), so it could well be that other DAMs would do the same.

@John7 and @John-M you are both avoiding the problem in your own way and there is nothing wrong with that; you each have a workflow which neatly sidesteps it.

My alternatives (providing I have done the testing and the describing correctly) can also work.

However, as I have written elsewhere more than once @sgospodarenko, the real fix is to extend PL5 slightly to provide facilities that allow replicake images to be reintroduced to the database without causing VCs, i.e. a warning and an override on an

  1. individual
  2. group
  3. directory
  4. session
  5. preferences

etc. basis, with options to:

  1. Make replicake the database entry
  2. Keep the existing entry and reject replicake
  3. Use a date timestamp with all, … to inform the user and prevent the wrong decision.
  4. Allow both to be kept selectively, i.e. the classic VC situation
  5. any other options I have missed

In the meantime keep deleting the database and avoiding the problem but there are (just) other options.

PS In the event that this happens, or a user has been using VCs for evaluating editing possibilities, it would also be useful to have a “promote” function to allow a VC to replace a [M]aster copy, and also to allow swapping and re-organising the VCs to create an evaluation order for VCs.

@sgospodarenko Help requested - not to fix a problem but to comment on a test which has me puzzled.

In my response to @John-M and @John7 in response to an original query from @Riverman I put together a list of ways that I had investigated to avoid the “unwanted Virtual Copies” situation.

However, in the light of my results for the test @ Cache file - #20 by BHAYT, where the two directories were being held separately in the database, leading to DOPs being overlaid and VCs resulting, I was concerned why my test @ Cache file - #18 by BHAYT had worked!

i.e.

So I tested the situation and it continued to work, even when I changed the drive letter on one of the machines. Please note that the change of drive letter would affect the location of the database as well as the files!

While I am glad that this works, and means that a hard drive containing the database, images, DOPs etc. can be transferred from one machine to another, I do not understand why PL5 is recognising the directory whether the drive is mounted as T: or Q: and realigning the database in line with that discovery.

Test Scenario:-

So the Sata SSD was already connected to the Main machine (A) as T:, a new directory was “discovered” and the Tag was set on all files in that directory to ‘Reject’ (‘Red’). PL5 was closed, and the T: drive was also closed and detached from machine A.

The Sata SSD was then attached to machine B and PL5 started, but that caused a problem because PL5(B) was configured to access T: across the LAN. PL5(B) reset the database location to the default, so the location was changed in the ‘Preferences’ to point to T: and PL5(B) was closed and restarted.

Navigating to T: located the original database entries with the Tag set to ‘Reject’. The noise reduction on the two RAW images in the test group was set to DeepPrime, PL5(B) was closed, and the database showed that the drive had been opened as Local Disk T:.

The drive on B was then changed to be the Q: drive and PL5(B) opened; it defaulted to the default db location, but that was then changed to the Q: drive and PL5(B) was closed and restarted. Navigating to the directory, now on Q:, still located the data that had just been on T:, as hoped but not necessarily as expected - no VCs were created!

The database shows that the entry for Local Disk (T:) has been changed to Local Disk (Q:), therefore PL5 will pick up the old records!?

The Sata SSD drive was closed and detached on B, attached on A, and opened with PL5(A) on drive T: with everything intact and no Virtual Copies!

and the database was once again showing Local Disk (T:)

So, to repeat my puzzlement: I do not understand why PL5 recognises the directory whether the drive is mounted as T: or Q: and realigns the database in line with that discovery.

Good morning!

@BHAYT this behavior of creating VCs is a known one and expected.

But we agree that in some cases it’s not clear to the user why these copies are created, and there should be a way for the user to cancel their creation. We have some ideas, but let me draw @StevenL’s attention to your suggestions:

Thank you
Regards,
Svetlana G.

Let me ask @alex to assist here.

Regards,
Svetlana G.

@sgospodarenko thank you for your response. I don’t want the feature to be changed but I am confused (not an unusual situation these days but …) and it would be nice to have that confusion resolved.