This issue has been observed and reproduced by Anne Marie, but it does not happen every time and we’ve been unable to determine exactly what state causes it. I think it warrants investigation by developers, though, so I’m suggesting it move forward.
Doing more to help users understand that they are about to lose data makes sense to me, and I think the benefit of these confirmation dialogs outweighs the friction they may cause, so I’m suggesting that it move forward as well. I’d defer to the developers on whether all of the suggested checks are feasible, though.
This is being reported for the PUI in v3.5.1. I am unable to replicate the behavior that’s being reported either in the Sandbox or in Princeton’s testing environment. The example linked from the ticket returns 10 results [attempted 9/18/24], all of which have “1949” either in the date or in the title. None of them have “1949” in the uri.
If this behavior was being observed in their local implementation in the past, I wonder whether they were indexing the resource/ref field of the archival_object record.
The <unitid> tag is a red herring (it is part of the EAD serialization and holds the archival_object uri, not the resource uri).
The current accessions CSV creates agents and events on import. It is not very robust when it comes to knowing whether an entity already exists, though (e.g., it matches on the exact same string, but not on the same string in inverted order (direct/indirect), despite that property being set correctly). One question I have, therefore, is how to implement this robustly. Is this user envisioning entering a string value, just like with Agent links? I’m wary of adding more data entropy via CSV import. Would URIs or IDs be acceptable instead?
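To illustrate the matching problem, here is a minimal sketch (not ArchivesSpace code; the helper name is hypothetical) of normalizing a personal name so that direct order ("Jane Smith") and inverted order ("Smith, Jane") resolve to the same lookup key before an existence check:

```python
def normalize_name(raw: str) -> str:
    """Return a canonical lowercase 'family, rest' key for a personal name.

    Hypothetical illustration only: real matching would also need to
    consider dates, titles, qualifiers, and authority IDs.
    """
    raw = " ".join(raw.split())  # collapse stray whitespace
    if "," in raw:
        # Already inverted: "Smith, Jane" -> family="Smith", rest="Jane"
        family, _, rest = raw.partition(",")
    else:
        # Direct order: treat the final token as the family name
        parts = raw.rsplit(" ", 1)
        if len(parts) == 2:
            rest, family = parts
        else:
            family, rest = raw, ""
    return f"{family.strip().lower()}, {rest.strip().lower()}".rstrip(", ")

# Both orderings now produce the same key, so a naive
# string-equality existence check would treat them as one agent.
print(normalize_name("Jane Smith"))   # "smith, jane"
print(normalize_name("Smith, Jane"))  # "smith, jane"
```

Accepting URIs or database IDs in the CSV would sidestep this ambiguity entirely, since no string normalization would be needed.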
It looks like the issue may be the database lock wait timeout, which isn’t long enough for very large or long-running operations like moving many components. The error doesn’t seem to prevent records from being moved, as they are in the correct place afterward. Suggested increasing the lock timeout locally.
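As a sketch of what "increasing the lock timeout locally" could look like, assuming the backend database is MySQL/InnoDB (a common ArchivesSpace setup):

```shell
# Assumption: MySQL/InnoDB backend. The default innodb_lock_wait_timeout
# is 50 seconds; raising it gives long-running component moves more time
# to acquire row locks before the operation errors out.
mysql -u root -p -e "SET GLOBAL innodb_lock_wait_timeout = 300;"

# To persist the change across restarts, add to my.cnf under [mysqld]:
#   innodb_lock_wait_timeout = 300
```

The exact variable and value would depend on the site’s database engine and typical tree sizes; this is only an example of the kind of local change being suggested.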
This sounds like a great idea, although I would like to discuss potential ramifications before recommending passing.