Photo Corrupted by Software

Good morning all,
and that’s why you should not rely on RAID alone: the RAID controller can itself have errors and thus permanently write faulty data to your RAID hard disks without you noticing it. Please just google “RAID vs. backup”.
This is just a hint for people who are more concerned with photography and less with IT. :grinning:
Here is an extract of an article:

A RAID can never replace data backup

But regardless of the number of hard disks and the RAID mode used, data loss can still occur. If data is written incorrectly, if a virus strikes or if there is a software error, these problems are carried over to the copied data as well, which may then be unusable despite existing in multiple copies. In addition, files that have been accidentally deleted cannot easily be recovered. Therefore, despite RAID, regular backups should never be neglected; only regular data backups can guarantee recovery in an emergency. RAID ensures (especially in the higher configuration levels) that work on the network can continue unhindered if one (or more) disks fail; these can then be replaced during operation without necessarily having to stop working on the data. However, RAID is no reason to do without a backup.

Make your day

@MikeCross That is a very worrying comment, I believe!? Is this RAID on the machine, or RAID in a NAS?

How can Windows fix problems that only occur when the data is read, and where the data does not appear to be corrupt on the disk drive? Your comments about problems that “fixed themselves” in PL5.4.0 would suggest that there is an intermittent “read” problem, or a problem with one RAID drive but not the other (depending on the RAID configuration you are running), so that reads are taking the data from alternate drives. What “fix” did Windows undertake that “fixed” this problem, and what fix could Windows ever undertake that could fix such a problem!?

I would suggest multi-bit errors are occurring that are “creeping” past the parity checks, delivering corrupted data to the Windows application without the I/O being rejected. The last time I saw that was on a mainframe, where the database software was complaining about checksum errors! The logs showed lots of failed and retried I/Os, but some were getting through undetected by the hardware, though not by the database software!?
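To illustrate the kind of failure described above: a simple parity check is blind to an even number of flipped bits, while a checksum will usually still catch them. A minimal Python sketch (the function name and sample data are made up purely for illustration):

```python
import zlib

def parity(data: bytes) -> int:
    """Even parity over the whole buffer: XOR of every bit."""
    p = 0
    for b in data:
        while b:
            p ^= b & 1
            b >>= 1
    return p

original = b"photo raw sector data"

# Flip two bits in the same byte: a classic multi-bit error.
corrupted = bytearray(original)
corrupted[3] ^= 0b00000110
corrupted = bytes(corrupted)

# Parity cannot see an even number of flipped bits...
assert parity(original) == parity(corrupted)
# ...but a checksum (CRC-32 here) still detects the change.
assert zlib.crc32(original) != zlib.crc32(corrupted)
```

Which may be why the database software’s own checksums caught what the hardware’s parity logic missed.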

But I await your comments about the “fix” that Windows applied and the type of RAID that you are running, i.e.

  • RAID 0 (Striping): takes any number of disks and merges them into one large volume. …
  • RAID 1 (Mirroring) …
  • RAID 5/6 (Striping + Distributed Parity) …
  • RAID 10 (Mirroring + Striping) …

and whether that is on the PC or in a NAS.

Hopefully I am simply being alarmist but @Guenterm’s comments are important.

I run a NAS, but as JBOD, and it holds copies of data I want to be able to access when my machines are switched off; loss of the NAS means loss of convenience, not data! The idea of running RAID “scares” me: when one drive fails you are down to a single drive if running RAID 1, and with other RAID options you have a potentially compromised system until the “hole is plugged”. I have watched RAID systems on on-line mainframes being re-silvered and …

RAID was being used for performance and to maintain up-time for a 24/7 system but it was not without its issues.

Yes and no!

It is too easy to back up data that has become corrupted and overwrite a perfectly good backup copy. Versioning is useful, space permitting!

I use comparison software, but I limit the comparison to size and timestamps to keep the comparison times reasonable. This is itself a “risky” procedure, and I could overwrite all five of the copies I keep. One of those copies is on portable HDDs, which are supposed to be kept in a tin box (EMP-proof!!??), or better in a fire-proof safe, in the garage, garden office or shed to provide an “air-gap”!
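For what it’s worth, the trade-off above can be sketched in a few lines of Python (hypothetical helper names, a sketch of the idea rather than any particular comparison tool): size + timestamp is cheap but can be fooled, while a content hash is slow but catches silent corruption.

```python
import hashlib
from pathlib import Path

def quick_match(a: Path, b: Path) -> bool:
    """Cheap comparison: size and modification time only (fast but fallible)."""
    sa, sb = a.stat(), b.stat()
    return sa.st_size == sb.st_size and int(sa.st_mtime) == int(sb.st_mtime)

def deep_match(a: Path, b: Path) -> bool:
    """Content comparison via SHA-256: slower, but catches silent corruption
    that leaves size and timestamp untouched."""
    return (hashlib.sha256(a.read_bytes()).digest()
            == hashlib.sha256(b.read_bytes()).digest())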


I hope you have set up a schedule to check the RAID at least once a month, whether it is in the computer or in a NAS.

Dear Bryan,

that’s correct; my short description was only meant to give a brief overview.

Lots of people put their trust in a NAS, and not only private individuals: small companies and craftsmen too.

And it’s always the same story… when you start talking about incremental backups, versioning, the number of storage media and so on, you see rolling eyes and $$ signs on their faces.

Have a nice weekend, everyone, without dreams of data loss :innocent: :star_struck:

I’m a new PhotoLab user and have noticed permanent image corruption on three or four occasions. It always manifests as a solid black bar, horizontal or vertical, and has occurred on both RAW and TIFF images, verified by viewing the files in other software.

In my case, I’m convinced my processing/ redundancy/ backup strategy is the cause, NOT PL or hardware issues.

My system was initially configured as follows:

  1. Photos folder automatically synced both ways (mirrored) with NAS over the LAN.
  2. Photos folder automatically synced both ways (mirrored) with external Thunderbolt USB SSD drive when plugged in.
  3. Photos folder on NAS shipped to cloud storage automatically at stupid-o-clock in the morning.

I believe my mistake was setting up 1) and 2) to run in real time whenever changes occurred in the monitored folder. I use PL, PhotoMechanic and Topaz at various stages in my workflow, any one of which could be writing or reading a file at the same time as the others, PLUS the NAS sync agent, PLUS the SSD sync agent, and so on. On reflection, a recipe for all sorts of trouble 🫤

1 and 2 are now triggered manually when I’m done with photo editing, and I make sure the first thing I do is create a virtual copy and work on that rather than the original whilst I review whether the problem has gone away.

Be careful with synchronisation.
Damaged files, changes and alterations - wanted and unwanted - will also get synced and can then result in loss of data.

Integrity checks after copying and file revisions after changes are two important points not to neglect.
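Both points can be honoured with a very small script: copy to a timestamped version (so a corrupted source cannot silently replace the last good copy), then verify the copy before trusting it. A hedged Python sketch, with a made-up function name:

```python
import shutil
import time
from pathlib import Path

def versioned_copy(src: Path, backup_dir: Path) -> Path:
    """Copy src into backup_dir under a timestamped name instead of
    overwriting the previous backup, then verify the copy byte-for-byte."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = backup_dir / f"{src.stem}.{stamp}{src.suffix}"
    shutil.copy2(src, dest)  # copy2 also preserves timestamps
    # Integrity check: refuse to trust a copy that does not match the source.
    if dest.read_bytes() != src.read_bytes():
        raise IOError(f"verification failed for {dest}")
    return dest
```

Versioning costs disk space, but it is exactly what saves you when a damaged file has already been synced over the “current” backup.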


Yup, totally agree. Daily versioning on the cloud backup has recovered a couple of the borked images I mentioned, before I stopped the on-demand sync jobs.

This issue has morphed into something else following my initial tests. I now believe it may be caused by software (either PhotoMechanic, PL, or a conflict between the two), though I have not yet ruled out an issue in the backing hardware, so I won’t hang my hat on that yet. Sorry about the detail, though this (hijacked) post will probably be my runbook and outcomes list, if the OP/mods will allow it :slight_smile: :

  • I have ruled out the syncing, corruption still occurs in front of my eyes with it disabled.

  • I have ruled out having both PhotoMechanic and PL open at the same time; corruption still occurs when they are used separately.

  • I have ruled out corrections and adjustments; corruption occurs in images which are unprocessed EXCEPT for metadata.

  • I have ruled out the Virtual Copy process; identical corruption occurs in both copy and master.

  • I have ruled out any kooky colour rendering/GPU hiccups/colour space mismatches; the RAW file IS being modified (I have verified that with file hashes).

  • I know that the file modification will ALWAYS include metadata; XMP data is written directly into the RAW container for Leica CL images, and I can’t generate a separate XMP file however hard I try.

  • I have NOT ruled out the backing storage (Apple M1)

  • I have NOT determined which software has “last touch” on an image before corruption occurs
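The hash-verification step above can be automated: snapshot content hashes of the photo folder before an editing session and again after, then diff the two snapshots to see exactly which files were touched. A small Python sketch (hypothetical helper names, Python 3.9+):

```python
import hashlib
from pathlib import Path

def snapshot(folder: Path, pattern: str = "*.dng") -> dict[str, str]:
    """Map each matching file name to its SHA-256 hex digest."""
    return {p.name: hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(folder.glob(pattern))}

def changed_files(before: dict[str, str], after: dict[str, str]) -> list[str]:
    """Names whose content hash differs (or is new) between two snapshots."""
    return [name for name, digest in after.items()
            if before.get(name) != digest]
```

Run `snapshot()` before opening any editor, again after closing it, and `changed_files()` tells you which images were modified in that window, which narrows down the “last touch” question considerably.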

Next stages:

  1. Work in PL only for a good while (a week or two), using the native tools for metadata and DB/XMP/DOP reads and writes set to OOB defaults and see what happens. This is not my normal workflow but if I can replicate the issue it may help focus my efforts - this will include changes to images across two distinct storage media (local and external SSD)
  2. Update metadata “for fun” in PhotoMechanic for both images worked on in stage 1) and unworked images, from both storage media - opening PL from time to time to see if anything breaks when changes are picked up (e.g. this will test PL sync/ sidecar options, which I know splits opinions in this forum)
  3. Review output and decide next steps. I have RAW images which I can upload though am not yet sure who needs to see them (CameraBits / DxO/ Apple)

I’ll keep this updated with progress. I need a new project like a hole in the head but if I can’t get to the bottom of this it’ll haunt me :smile:

Thanks for reporting back on progress.
Have you tried to extract the preview jpg from within the raw to see if that’s good or bad?

https://exiftool.org/examples.html
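If exiftool is available, something like `exiftool -b -PreviewImage photo.dng > preview.jpg` should dump the embedded preview (see the examples page above). As a rough self-contained alternative, a naive marker scan can often pull out the first embedded JPEG; this Python sketch is not a proper parser (real raw files may embed several previews, and the function name is made up):

```python
from pathlib import Path
from typing import Optional

def extract_first_jpeg(raw_path: Path) -> Optional[bytes]:
    """Naive scan for an embedded JPEG: find the first SOI marker (FF D8 FF)
    and the next EOI marker (FF D9) after it. A real tool such as exiftool
    reads the actual preview tags instead of guessing from markers."""
    data = raw_path.read_bytes()
    start = data.find(b"\xff\xd8\xff")
    if start == -1:
        return None
    end = data.find(b"\xff\xd9", start + 3)
    if end == -1:
        return None
    return data[start:end + 2]
```

Comparing the extracted preview with what the viewers display would show whether the cached previews are pre- or post-corruption.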


The preview JPEG is loaded by both PhotoMechanic (PM) and FastRawViewer (FRV), and that seems to be fine. That said, those might be pre-corruption cached versions, so I’ll need to pull the file apart to be sure. Thank you for the link; I also have RawDigger, which will come in handy at some point in the analysis, I’m sure.

Indeed, DPL5 writes keywords as XMP Subject and XMP Hierarchical Subject into .dng files…but not into JPEG files, unless they are exported. DPL5 did not write any .xmp sidecar files (and neither did Lightroom Classic).

Tested with a few sample images from dpreview (thanks!) and
PhotoLab version 5.4.0.72 on macOS 12.5.1 on a 2019 iMac.


@danielfrimley It is late and I am a tired old man and a Win 10 user, and I do not understand this statement! Which software are you referring to, PM or PL5 or …?

Photo Mechanic can be configured to write to the embedded data but can also be configured to only write to a sidecar file when handling RAW files (I believe I have managed to configure it to write to both, by accident but …).

PL5 writes to the embedded metadata for JPG, TIFF and DNG, and only to sidecar files for RAW, either automatically (via the ‘Edit’ / ‘Preferences’ setting) or manually via ‘File’ / ‘Metadata’ / ‘Write to image’. On export it writes to the embedded metadata for JPG, TIFF and DNG (unless deselected in the export options).

There is nothing wrong with PL5’s setting of metadata and detection of metadata changes, other than the fact that the detection has failed from time to time in some of my tests! The main controversy has been over what PL5 does to the format of the keywords. Typically PM and PL5 play nicely together, and all my tests have been conducted with both programs (and FRV and Zoner and IMatch) open at the same time, with me making changes in either (or any) at any given time!

If they don’t break under such circumstances then …

I believe that using the ‘R’ option in FRV (FastRaw Viewer) is actually showing a rendered copy of the RAW data, the ‘I’ option will select the embedded JPG. Using this it should be possible to monitor the state of the RAW image at all stages of any process. FRV is fine with ‘Rating’ but no good for keywords!

PS On Win 10 I would use Nodesoft Folder Monitor to track down the last program to touch the file; is there an equivalent piece of software on the Mac? (Perhaps fswatch, or Apple’s fs_usage, which I believe does show the responsible process.)
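In the meantime, a crude polling loop could serve on either platform (a Python sketch with a made-up name; note it only shows *that* a file changed and when, not *which program* changed it, which a real monitor like Folder Monitor or fs_usage can do):

```python
import time
from pathlib import Path

def watch_for_changes(folder: Path, seconds: float, interval: float = 0.25):
    """Poll modification time and size of every file in `folder` for
    `seconds`, recording (timestamp, filename) for each detected change.
    Crude cross-platform stand-in for a proper folder monitor."""
    def stat_key(p: Path):
        st = p.stat()
        return (st.st_mtime, st.st_size)

    seen = {p: stat_key(p) for p in folder.iterdir() if p.is_file()}
    events = []
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        for p in folder.iterdir():
            if not p.is_file():
                continue
            key = stat_key(p)
            if seen.get(p) != key:  # new file or modified file
                events.append((time.strftime("%H:%M:%S"), p.name))
                seen[p] = key
        time.sleep(interval)
    return events
```

Left running against the photos folder during an editing session, the timestamps can at least be correlated with which application was in use when a file changed.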


@BHAYT Hi, no problem, thanks for your help. I too am old, always tired and if it’s not too late, it’s too early :slightly_smiling_face:

I thought so too - reading here. I’ve concluded that as all my images have been washed through LR and LRc historically (I’ve only just started to use PL), PM is respecting the existing XMP embedded data and updating that.

I’m referring to Leica CL native SOOC DNG files. At this point in the proceedings I’m not considering any other formats but as I learn more am starting to think it might be important. To me, RAW is RAW is RAW, whatever file extension your particular camera slaps on the end of “that whacking great file I want to edit” - I’m coming round to the idea it’s more nuanced than that.

Thanks again, I shall look into that. Appreciate your input.

Adding another bullet to the statement of facts:

  • I use PROJECTS heavily in my workflow. Images that I plan or intend to edit are added to one, and that is my workspace for each session. I do NOT know whether corruption occurs if an image is not in a project.

@danielfrimley Projects are just a “convenient” way to organise images within DxPL. They are essentially managed by creating entries in tables within the DxPL database.

The chance of any corruption from them should be ~0 (famous last words).

Effectively you have a project record which points to a list of items in the project which then points to the items (images) themselves.

The Project entry:-

and the list of project images pointing to the actual images in Items


The Items (Images)


It does not matter if we work with folders or projects.

We can see an image directly in a folder or indirectly through a project. Changes are logged in PhotoLab’s database. Upon export, a new output image is created. The output image is a product of the original image and the changes and both the original and the changes remain as is…in many cases.

DNG files have also been designed to contain metadata and DPL (and other apps) therefore write into DNGs instead of adding a sidecar. This should have no effect on the image itself, unless something has gone wrong.
