This image cannot be processed since the source file could not be found ( After I cloned the Drive )

@platypus & @roman Yes indeed, and that is a consequence of whatever DxPL is doing when it opens a disk drive. However, the UUIDs shown do not even vaguely correspond to the drive serial numbers of the two drives involved, and I could find nothing useful in the config file.

So I started a clone operation using Farstone and this is what I found

Which shows the Volume ID, and that is the UUID shown in the ‘Folders’ structure!

I now apparently have two clones, allegedly including the “drive id” (whatever that is)!?

It is this that is causing DxPL to ignore the database entries and add the new drive as a new entry in the ‘Folders’ table. Essentially the database copy has “lost” all its information, because it is pertinent to an ID that is no longer in use @sgospodarenko.

I don’t have an issue with the photos/images; it’s when you use “keywords” to locate or search for your photos that the problem occurs: the database disassociates its entries from the DOP files. In the {sources} table there is a column CafId, e.g. C1234C, and the DOP file for each photo also carries this tag. I have not been able to locate where the drive serial number is stored by PL5.

@Roman I don’t think it is the disk serial number; it is the Volume number (?) as shown in the two snapshots above. The database is not dissociating itself from the DOPs; the change in PL5.3.0 means that it will take metadata from the DOPs. It is dissociating itself from the Volume and the old entries in the database, i.e. the entire database is essentially in “limbo” (useless) and PL5 is simply discovering the “new” entries, which are the same as the old entries but …

The entry in the ‘Folders’ structure is the one that is at the “top of the tree”, and if you look at my snapshot of ‘Folders’ we now have two trees with identical names but different UUIDs, which are the Volume IDs of the respective USB drives I am using (both are the only GPT drives I am using; all the rest are old style MFT).

The old entries are arguably redundant, with PL5 rediscovering data as you navigate the directories, but the old entries are still there and will be “discovered” from time to time as they are found by searches.

Arguably, if I return the old USB drive to the system, PL5 should reconnect successfully, i.e. the mechanism is intended to cope with external volumes that can come and go!?

Which is not what you want!?

PS:- Sorry, I should have thought about this before: the UUID = Volume ID (I am still trying to discover how you can actually get at that), so that the same labelled folder can be used from two different drives, i.e. when you have USB drives attached to the system and use PL5 to directly access them (I believe).

In your case, and during my tests, we wanted the new drive to replace the old, but I believe that PL5 is built to handle the situation where USB drives can come and go, potentially with the same name, but each with a unique Volume ID and hence a unique entry in the ‘Folders’ structure, and hence “chaos” for what you want and what I was trying to achieve in my tests!?

So it is not so much re-syncing that we are looking for as “replacing”: you are attempting to replace one volume with another, leaving projects, image directories etc. in place but now active on the new “replacement” volume!

On Win 10 the utility I use for “peeking” in the database is DB Browser for SQLite.
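For anyone more comfortable with a script than a GUI, the same peeking can be done with Python’s built-in sqlite3 module. A minimal sketch, assuming a ‘Folders’ table with ‘Name’, ‘Uuid’ and ‘ParentId’ columns (names pieced together from what we have seen in this thread, not from DxO documentation; check them in DB Browser first, and always work on a copy of the database):

```python
# Sketch only: the 'Folders' schema below is an ASSUMPTION based on this
# thread, not official DxO documentation. Adjust names to what you see.
import os
import sqlite3
import tempfile

def list_folder_roots(db_path):
    """Return (Name, Uuid) for every root entry in the Folders table."""
    # Open read-only so we cannot accidentally modify the database.
    con = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        return con.execute(
            "SELECT Name, Uuid FROM Folders WHERE ParentId IS NULL"
        ).fetchall()
    finally:
        con.close()

# Demo against a throwaway database that mimics the assumed schema.
demo = os.path.join(tempfile.mkdtemp(), "demo_pl5.db")
con = sqlite3.connect(demo)
con.execute("CREATE TABLE Folders (Id INTEGER PRIMARY KEY, ParentId INTEGER, Name TEXT, Uuid TEXT)")
con.execute("INSERT INTO Folders VALUES (1, NULL, 'Photos', 'vol-guid-old')")
con.execute("INSERT INTO Folders VALUES (2, NULL, 'Photos', 'vol-guid-new')")  # the duplicate tree
con.commit()
con.close()

print(list_folder_roots(demo))  # two roots: same name, different volume ids
```

The toy database shows exactly the symptom discussed above: two root entries with the same folder name but different volume UUIDs.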


Well, whatever happens and why can be interesting from a technical point of view.
From a user point of view, the current housekeeping implementation is sub-par imo.

PhotoLab has one good housekeeping function:

  • Bildschirm 2022-06-23 um 11.26.17
    Point this to the root of your photo folder structure and DPL will happily ingest the whole lot…which can take a while. BTW, this corresponds to the “import” function that many commenters loathe.

We’d also need a few more, though:

  • Remove a folder (structure) without moving it to the trash
  • Move a folder (structure)
  • Find a missing folder (structure)
  • plus the corresponding functionality for files or groups of files

We can drag and drop images or folders (Win only) in Library view, but a set of dedicated functions would look more professional/serious imo.

I’d much prefer it if DxO added such functions (soon, please) to us having to a) delete the database on such occasions (which might be necessary NOW to remove the junk that has amassed in the database due to lack of housekeeping and automatic ingestion) or b) hack database entries.


@roman but my tests and results are for copying. Now that both drives are out of the box I will test cloning later today. Do you effectively replace the original drive with the clone and retain the drive letter, or does that change as well? Actually, it depends on whether the Partition ID is preserved in the cloning operation, and then on DxPL’s reaction to the “new” (“old”) drive.

PS:- At which point the Drive Serial Number might come into play, but I can’t find that stored anywhere @sgospodarenko.

On macOS, the UUID is assigned by the OS when a volume is created.

I took an external SSD and reformatted it a few times and never got the same UUID.

@platypus The following is simply the result of navigating to the two cloned copies in PL5, showing that they have been allocated different Volume IDs by Win 10!

@roman your clone may be a perfect copy, but if the OS allocates a different Volume ID then DxPL will treat them as separate drives. It is not a bug (I believe) but a way of managing USB image drives that can be offline at any given time: useful for some, but not for you!

I apologise for “lying” and getting the wrong alternative disk configuration: most of my drives are MBR, not MFT(!?). I have one other GPT drive as well as the two new ones!

@platypus the various features that you and I and others have requested for DxPL all have their place, but they do not really address this particular issue. This is essentially a direct replacement request, i.e. that one drive should replace another.

The risk of getting it wrong is fitting a database to the wrong set of directories/images etc.; the reward for getting it right is that @roman and others will be happy.

Essentially there needs to be a way of requesting that DxPL effectively replaces the UUID of drive A with the UUID of drive B and then deletes drive B from the ‘Folders’ structure!
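In database terms, a sketch of what that replacement might look like (Python + sqlite3; the ‘Folders’ table and ‘Uuid’ column are assumptions from our digging in this thread, so treat this as an illustration of the idea, not a ready-made fix, and only ever experiment on a backup copy of the database):

```python
# HYPOTHETICAL sketch of the "replace drive A's UUID with drive B's"
# operation. Table and column names are ASSUMED from this thread.
import os
import sqlite3
import tempfile

def replace_volume(db_path, old_uuid, new_uuid):
    """Delete the tree DxPL auto-created for the new volume, then
    re-point the original entries at the new volume's id."""
    con = sqlite3.connect(db_path)
    try:
        with con:  # single transaction: all or nothing
            # 1. Remove the duplicate entries discovered for drive B.
            con.execute("DELETE FROM Folders WHERE Uuid = ?", (new_uuid,))
            # 2. Re-point drive A's entries at drive B's volume id.
            con.execute("UPDATE Folders SET Uuid = ? WHERE Uuid = ?",
                        (new_uuid, old_uuid))
    finally:
        con.close()

# Toy demonstration on a throwaway database.
db = os.path.join(tempfile.mkdtemp(), "toy.db")
con = sqlite3.connect(db)
con.execute("CREATE TABLE Folders (Id INTEGER PRIMARY KEY, ParentId INTEGER, Name TEXT, Uuid TEXT)")
con.execute("INSERT INTO Folders VALUES (1, NULL, 'Photos', 'uuid-A')")  # original drive A
con.execute("INSERT INTO Folders VALUES (2, NULL, 'Photos', 'uuid-B')")  # duplicate added for drive B
con.commit()
con.close()

replace_volume(db, "uuid-A", "uuid-B")
con = sqlite3.connect(db)
print(con.execute("SELECT Id, Uuid FROM Folders").fetchall())  # [(1, 'uuid-B')]
con.close()
```

If the real schema has child rows or other tables referencing these entries, more than this would need updating, which is exactly why a supported DxO function would be far safer than hacking.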

The actual procedure is simple to code and execute. The GUI certainly doesn’t exist, but it could be attached to ‘Files’/‘Recent locations’ as a sub-menu @Musashi!? The alternative is a simple utility, available to users, to perform the operation and keep DxPL ignorant of any such activities!

There is a big benefit to having such a procedure, because what is currently happening with @roman’s database is that it is being rendered worse than useless at every swap, throwing up failed searches (and projects, if they are being used)!

But there are also risks associated with the “fix”: if the DOPs do not actually match the database that is being “forced” into place as the replacement (i.e. B), there could be “wall to wall” Virtual Copies, potentially worse than anything we have seen happen before!

I have tried hacking the database and failed, but I will try again later to see if it is even possible!

Only if you have identical filenames in an identical directory structure; and if you managed to create that without it being a deliberate copy, your file management system is screwed anyway.

I have never seen an application which thought a full file path needs to be qualified by a volume unique identifier. The idea is bizarre.

Coincidentally the root of my main photo storage directory structure is an NTFS junction linking to a different drive. The database contains the unique id of the volume containing the junction so I have the ability to move that directory structure to another drive without messing up the database.


@yonni but @roman’s second hard drive is a clone of the first, so the structure will be identical. In my case I have two USB drives (not the new ones I am using for the tests, but two others) that are essentially mirror copies of each other and of the hard drive; certainly that is the case for my photos! So I could use the same database with any of the three “images” if the feature was available.

Not if you look at my attempt to “fool” the system: this is arguably me bringing a USB drive back online when I believe that PL5 should be able to “continue” using it with the data already in the database. My test case has the same drive label mapped to the same drive, but PL5 is not fooled, unfortunately the opposite of what I was hoping for!

Not a concept I have heard of before but I did notice this when using Zentimo to change drive letters, i.e. ‘Mount drive as folder’ or is your technique different?

If you have the time, please explain the whys and wherefores of your strategy, and whether it can be used in @roman’s situation, i.e. images on a USB drive which is cloned, with the new clone then being attached to do a “stint” before swapping back again, I presume for the purposes of wear levelling (and backup).

My images sit on my main drive, but it might be useful to be able to switch the database to use one of my USB copies in the event of an emergency (the database resides on yet another drive) without losing anything in the database.

Actually, that’s not the case … as explained here.

That’s correct.

Yes, but you can edit/change this ID in the sidecar/.dop file (you can change the name of the associated image within the sidecar/.dop file too) and it will have no effect.

It’s only the file-name of the sidecar/.dop file that matters (it must match the image it’s related to) … the internal references within the sidecar/.dop file will be updated the next time PL sees them.
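A practical consequence: if you rename an image outside of PL, you only need to rename its sidecar to match and PL will sort out the internal references itself. A hypothetical helper sketch, assuming PL’s usual “image.ext.dop” sidecar naming (verify the naming on your own system first):

```python
# Sketch: keep a .dop sidecar's name in step with its image when
# renaming outside PL. Assumes the "image.ext.dop" naming convention.
import os
import tempfile

def rename_with_sidecar(image_path, new_image_path):
    """Rename an image and, if present, its matching .dop sidecar."""
    os.rename(image_path, new_image_path)
    old_dop = image_path + ".dop"
    if os.path.exists(old_dop):  # not every image has a sidecar
        os.rename(old_dop, new_image_path + ".dop")

# Demo on throwaway files.
d = tempfile.mkdtemp()
img = os.path.join(d, "IMG_0001.RAF")
open(img, "w").close()
open(img + ".dop", "w").close()

rename_with_sidecar(img, os.path.join(d, "holiday_001.RAF"))
print(sorted(os.listdir(d)))  # ['holiday_001.RAF', 'holiday_001.RAF.dop']
```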

John M

This happens when a USB drive is connected. I just saw it lately: Photlabtest (F:), an external USB drive, is mentioned twice. So is the memory card (K:).

George

I occasionally clone a volume and then boot from it, and I don’t remember having seen the issues you describe…


@John-M Thanks for confirming my conclusion (my brain hurts).

@roman @Yonni @platypus As a result of @Yonni’s post above (thank you), I decided to try the ‘Map to folder’ option offered by Zentimo (other software is available, I am sure, and there will be a native Windows command, but don’t ask me where that is because I have Zentimo!).

So I

  1. Mapped J: to “F:____Mount Point”, opened PL5.3.1, navigated to “F:____Mount Point” (NOT J:) and closed PL5.3.1.
  2. I then unchecked J: as ‘Mount as a Folder’ and repeated the exercise with K:, i.e. opened PL5.3.1 and navigated to “F:____Mount Point” (NOT K:). DO NOT GO NEAR THE DISKS THEMSELVES, only the Mount Point (guess who got that wrong!?).
  3. There is a single Folder entry in the database, and providing the DOPs are identical (guess who got that wrong) you should avoid the Virtual Copies which I got the first time!!

I clone and restore the operating system disk frequently with no problems, but that could be because it is the operating disk that is being cloned and it is the only thing that is monitoring itself (or rather can’t, when a clone is being restored)!

Here we have PL5 “remembering” what has gone before! Using the above technique “fools” PL5, provided you REMEMBER to use the mount point in PL5 and the disks are truly copies of each other!! They only need to be copies, not clones, but they must be copies, otherwise you might get Virtual Copies as I did (only on one occasion!).

To summarise:-

  1. The features @platypus and others (including me) have asked for are still required, in particular the ability to remove database entries while leaving the data intact on the disk. But the issues raised by this topic show that the database entries for the directories are not “standalone entities”: searches and projects can point to directories that may be removed etc., i.e. there needs to be a way of putting the database back together (re-synchronisation).

  2. DxPL remembers and uses the Volume ID as part of its ‘Folders’ table, which means (I believe) that it can cope with offline directories and their return to the system, but this inhibits using copies (even clones) of the original data, because each copy will be recognised as a different and new volume (Volume ID), effectively rendering the existing database entries null and void.

  3. DxPL needs to be “tricked” into using a ‘Folders’ table entry that can be re-used, namely by mapping the drive to a folder so that a single entry is created in the table. Providing the data is identical, different copies of a drive can then be mounted and dismounted at the single mount point. As soon as a DOP is discovered on one of the mounted mapped drives that does not match the database, Virtual Copies will result! I used Zentimo to handle this neatly and cleanly (I have used the product for years, but not this particular feature until my tests above), or try How to map a local folder to a drive letter in Windows | Computerworld, or a web search to turn up other utilities etc…

Please remember to access the data in DxPL via the mount point folder, not by accessing the drive directly! Navigating to the directory on the original drive will simply create the chaos that was happening originally, i.e. the reason this topic was raised by @roman. The original drive should be off limits to DxPL.

I feel that the procedure is simple enough but I made a suggestion earlier for e.g. a simple utility etc. that might help simplify the process.

Hope that this helps!

NTFS supports hard links, like Unix file systems do. Creation and management are not well supported in Windows. This shell extension

https://www.schinagl.priv.at/nt/hardlinkshellext/hardlinkshellext.html

offers convenient management. You can pick a source file or folder anywhere and drop it as a link, or in the case of a folder as a junction, somewhere else. For almost all purposes that source folder then appears to exist both where it is and where you dropped the junction.

When PL indexes the junction it uses the unique id of the volume the junction is on.

I fitted a larger SSD which had enough space for my main photo directory structure. I moved the structure to the SSD, then dropped a junction to it on the spinning-rust drive where it used to be. I navigate through the junction and nothing is aware that the whole directory structure is actually on a different drive.
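The junction trick can be illustrated (imperfectly) with symbolic links, which resolve paths in a similar way. Junctions are Windows-specific (created with mklink /J or the shell extension above), so this cross-platform Python sketch uses os.symlink to show the “directory appears in two places” effect; the folder names are made up:

```python
# Illustration of the junction idea using symlinks (the closest
# portable equivalent; on Windows, mklink /J creates a real junction).
import os
import tempfile

root = tempfile.mkdtemp()
real = os.path.join(root, "big_ssd", "Photos")  # where the data really lives
os.makedirs(real)
open(os.path.join(real, "IMG_0001.RAF"), "w").close()

# Drop a link where the directory used to be.
old_home = os.path.join(root, "spinning_rust")
os.makedirs(old_home)
link = os.path.join(old_home, "Photos")
os.symlink(real, link, target_is_directory=True)

# Anything navigating through the old path still sees the files.
print(os.listdir(link))  # ['IMG_0001.RAF']
```

The subtlety Yonni describes is which volume’s unique ID the application records: with a junction, PL indexes via the volume holding the junction, so the real data can move to another drive without disturbing the database.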

(sorry I posted this by mistake while still composing and had to edit the rest in).


@Yonni I originally misspelled your post name in the above post This image cannot be processed since the source file could not be found ( After I cloned the Drive ) - #35 by BHAYT, sorry.

Thank you for providing the additional information I will read and try to understand the workings later.

In the meantime, using a product I already “own”, I was able to create the situation that I wanted, thanks to your post. It will provide @roman with what he wants, and I will test it with my copies just in case they are needed in the future. Currently the lifespan of my PL5 database is less than 1 day, sometimes just minutes!!??

Once again thank you for putting me on the right track.

Regards

Bryan


I purchased 2 identical 2TB hard drives; the original was 0.5TB. I cloned the 0.5TB to the 2TB and then extended the partition to full size. This was the first time I lost the “keyword” search ability; I manually corrected the “keyword” search, which took me about 8 hours. The 2nd new drive was still in the box unopened, and after about 3 to 4 months I did the cloning as backup (now both drives are exactly WD 2TB drives, same PN#; you cannot tell them apart other than by the SN#). I cloned the brand new drive to mirror the existing drive. Once completed (it takes about 10 hours), I shut down the PC, removed the original drive, installed the clone and powered up. Every application ran fine, then I tried DxO PL5: yep, all the images are there, and I was able to retrieve them. Then the following day I did a “search”: it did find the images, BUT could not access them; the “!” showed up and I had to manually perform “FIX IMAGE PATH”. I used SQLite to look into the database: the “Source” table had an unpopulated column, and once I corrected many files and looked again, the missing information was restored in the Source table. I initially asked DxO about this problem; they asked me to upgrade to PL5 (I originally had PL4), which didn’t help. Then I found an article on page 241 of the manual relating to this issue. I opened another ticket with DxO: sorry, no fix for it now, but they will open a ticket with the development team. I guess for now I will clone my drive, but reinstall the original until such time as DxO has an option for this issue.


@roman Please see my post at This image cannot be processed since the source file could not be found ( After I cloned the Drive ) - #35 by BHAYT and the summary that followed!

I am not sure why you are cloning, particularly because it is taking so l o n g. I maintain backup copies by using Beyond Compare (other products are available) and these are “clones” but not clones, i.e. just about everything (except my test directories) is synchronised between my machines (actually that includes my test directories) and my USB3 drives.

The only cloning that I do is between SSDs attached via fast USB 3.0 connectors, for cloning the OS drives as backups! They are 256GB at maximum, and I object to having to wait 30 minutes or a bit more for that to complete!

All the software I use is happy with synchronised copies rather than clones (except PL5, when considering the problem we are looking at). My machines have 6TB + 4TB drives and my backup USB 3 drives are 8TB, so something has to be left behind; cloning is not an option, it would take way, way, way too long, and I am an old man so time is precious (it always is, but …).

IF you want to continue cloning then …, but if you mount the drives to a folder and use that folder in PL5, you can successfully exchange drives if you follow the ritual I outlined above.

Because my main and largest drive combinations are in my machines, there they stay, and the other drives, USB3 or mapped network drives, are used to keep backup copies.

I go on about Beyond Compare in my posts; I do not get any commission, but I discovered it many, many years ago and it has been a “life” saver throughout. It doesn’t just allow me to make copies but lets me check whether things are in sync across my systems and backups.

Before I copy, I check that I have what I thought I should have before I start “corrupting” backups! Any anomalies, particularly unexpected ones, are checked before I decide which way I should be copying the data!

Please check what I have written above and consider what is the best course of action for you!

My advice would be to get some good compare-and-copy software, use one dedicated drive attached to the machine, regularly update the “spare” copy with changes, and keep it safe just in case.

I personally will be considering using the folder mount point so that I can maintain my database (when I stop throwing it away for testing) and attach a backup if I need to (which I hope I never will)!

I clone due to: 1) Many years ago, a family member allowed a virus to enter, corrupting all my files, which resulted in wiping the drive clean and reloading Windows XP; it then took a week to reload all my applications, reload the licences and finally restore all my backup files. I consequently purchased each of the family members a new laptop. 2) I had a hard drive failure; I didn’t know a few sectors were bad and lost a week’s worth of data, and then the drive completely failed 2 days later. I decided it was time to upgrade my PC (it was over 5 years old) and also purchased a new HD, the exact version in the existing PC. I reloaded all the applications and then restored the data from my backup (realising afterwards that a few data files were corrupt); I lost a week’s worth of data. This is when I started cloning, so that replacing a corrupted drive only takes me an hour or two, not a week (I only had time in the evenings, as I was working and getting home after 7pm). Now I have a RAID drive for my data and backups from my PC, along with a cloned C: drive with all the applications.

@roman I am sorry that you have had a rather chequered past with hard drives! I run partially the way that you do and partially the way that I described. My main machine is a small tower system (built by me, as all my machines have been since my Amstrad PC(!)) and it is one of 3 such machines + a laptop (my wife now has my oldest machine, still an i7 but “ancient”). Two of my machines are i7-4790Ks and the other is a very old water-cooled i7-2700K which is hardly ever switched on (which I must do today, and update the backups to it).

All machines boot from a SATA SSD, the C: drive, and have another drive for yet more software when needed, the E: drive, which is also a SATA SSD. The C: and E: (where necessary) drives are cloned fairly regularly to protect the installation, and then all the other drives are various combinations of 6TB, 4TB and the odd 2TB; many were bought as USB drives to be dismantled, others were replaced with larger USB drives and then dismantled. All the HDDs are backed up using copying, not cloning.

The NAS is not configured as a RAID, just as a JBOD with 3TB and 2TB drives, and holds copies of personal files, the software library (installation files and keys) and a 1920 x 1440 library of all the photos taken, plus an additional copy “sorted” by the various gardens we have visited since having a digital camera, i.e. 2004 onwards. That library was also held on the main machines, but space is getting tight so it now sits on the NAS, the USB 3 backup drive and a portable USB 3 drive, only 4 copies instead of 7! Holding the images at 1920 x 1440 gives quick performance on phones and tablets over wi-fi and acceptable quality on larger screens, and is a much more “portable” entity.

All the drives are fairly ancient, but I have been fortunate enough not to have a virus damage the files. Drives do start to fail with age, but most of the old SSDs are still working; I lost a NAS drive about a year ago.

Depending on how the corruption occurred, a clone will produce an exact copy of corrupted data unless that corruption is actually damage to the disk. The advantage of a clone is that it reads either the whole disk or the in-use sectors in order to read the original data, “exercising” either all of the disk or a large selection of it at the same time.

I use Beyond Compare not just because I can backup with it but because It helps me track what is happening with the files!

For example why do I have 2015 flagged up? 2022_Q3 is obvious, i.e. I have taken photos that are not yet backed up!

Beyond Compare does have various options available for the comparison! I generally only compare the file names, looking for missing or misnamed files between the copies, i.e.

and I can then use it to compare images and metadata (here comparing an original JPG with a 1920 x 1440 “copy”)

and I do not get any commission from Beyond Compare for mentioning it here, but I “need” the ability to look into my disk files and investigate anomalies, contents etc…

The 2015 “anomaly” was obviously me using DxPL to “experiment” with a photo, which left a DOP that now needs to be deleted or copied to my backup(s)!