
DACS:  https://www2.archivists.org/standards/DACS/part_I/chapter_4/5_languages_and_scripts_of_the_material

Feedback from the Sub-Team:

  • This has an unusual history: people have not encoded language in a structured way; they have chosen to use mixed content instead

  • Language of description is not a DACS element; only language of material is

  • We should still attempt to export this information in a structured way

  • We’re encoding EADs with mixed content given that the toolkit encoded it as such

  • EAD3 was intended to provide enhanced support for extended language encoding

  • EAD2002 is far more restrictive in structure

  • Next steps: Reach out to Corey to determine if there was any intent here, and if there was, determine what the next step would be for the Sub-Team
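
To illustrate the mixed-content pattern under discussion, here is a hypothetical fragment (invented for this sketch, not drawn from any actual finding aid) showing the EAD2002 mixed-content encoding alongside a structured EAD3 equivalent; element and attribute names follow the respective tag libraries, but this is a sketch rather than validated sample data:

```xml
<!-- EAD2002: mixed content, i.e. prose with inline <language> elements -->
<langmaterial>
  Materials are chiefly in <language langcode="eng">English</language>,
  with some correspondence in <language langcode="fre">French</language>.
</langmaterial>

<!-- EAD3: the same information in structured form -->
<langmaterial>
  <languageset>
    <language langcode="eng">English</language>
    <script scriptcode="Latn">Latin</script>
  </languageset>
  <languageset>
    <language langcode="fre">French</language>
    <script scriptcode="Latn">Latin</script>
  </languageset>
  <descriptivenote>
    <p>Chiefly in English, with some correspondence in French.</p>
  </descriptivenote>
</langmaterial>
```

The EAD2002 form is human-readable but hard to export in a structured way; the EAD3 form makes each language and script machine-actionable.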

Another Ticket: https://archivesspace.atlassian.net/browse/ANW-805

  • This was raised by Dev. Pri., but the process for forwarding it to the Metadata Standards Sub-Team was not yet in place

  • The issue is how the “published” status should work; there appears to have been a lot of discussion in the ticket comments

  • At issue is the interaction between the “published” status of a Digital Object in ArchivesSpace and how that status is encoded in the exported EAD

  • This will be added to the agenda for the next scheduled meeting
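
As background for that agenda item: in EAD2002, publication status is conventionally signaled with the audience attribute (ArchivesSpace is generally understood to export unpublished records with audience="internal"). A hypothetical fragment, with invented href and title values:

```xml
<!-- An unpublished digital object link; consumers are expected to
     suppress audience="internal" content from public display -->
<dao audience="internal"
     xlink:type="simple"
     xlink:href="https://example.org/objects/123"
     xlink:title="Unpublished digital object"/>
```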

Ideas from December TAC Meeting

Desire to see us engage with RiC

  • Blog posts, particularly focusing upon our efforts to engage with the EGAD Steering Group (https://www.ica.org/en/egad-steering-committee-0)

  • Invite members of the Working Group in order to discuss their work with the TAC

  • How might we engage with Records in Context in other ways?

    • Daniel Pitti is the current Chair

  • There is also another individual who serves as the representative for the US

  • No members of the Sub-Team have any close relationships with members of the Steering Group

  • It would make more sense to contact Daniel directly in order to engage with them

    • Daniel Michelson volunteered to contact Daniel Pitti

    • This invitation would be for the next joint TAC meeting, and we should check with Maggie first

Making explicit which ASpace versions apply to the import/export mappings

  • We could just work with version 2.7 until we have a completed mapping, and then move to 2.7.4 for the next update cycle as a separate iteration

  • Daniel Michelson: If things change in the upgrade, wouldn’t that be a good way of knowing what they are?

  • Christine: From a development perspective, we would know which issues have been worked on

  • Daniel: Changes to the exporter in the new version would presumably already have been tested, so we would not need to perform separate testing ourselves

  • Christine: There shouldn’t be additional testing required, but the mapping itself will still need to be updated. Should developers be feeding any of this information into this Sub-Team?

  • Greg: We need to think about what we are doing; our pace is slow and this work is very labor-intensive

  • Updating the mappings for “unittitle” elements, as an example, required several variations of “unittitle” elements to be provided for the test import

  • It is actually easier to understand the mappings by looking directly at the code for the exporter

  • It might be possible to have documentation generated from comments in the code

  • YARD (Ruby documentation tool): This is in place for ArchivesSpace, but the rest of the documentation is moving away from being so closely attached to the codebase

  • The downside to this is that we are limiting the audience who might be able to contribute to this

  • We might still try this, but we need to be certain that this is even possible

  • Christine: One thing the ArchivesSpace program did try was extracting this information from the code itself, but it could not be successfully implemented at the time. It might still be possible.

  • Greg: For reference: https://github.com/archivesspace/archivesspace/blob/master/backend/app/converters/ead_converter.rb

  • Daniel: For some types of data, there are relatively straightforward cases for testing imports

    • Is there any way to determine which of these cases might be more straightforward?

  • Request to have Greg write up a description of what was found when testing the import of “unittitle” elements

    • Perhaps invite Laney to the next meeting in order to determine whether or not automating documentation generation from the code base is possible

    • If this proves possible, it would be most efficient to update the code comments as part of updates to the code base
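
As a concrete sketch of the comment-driven documentation idea discussed above: a YARD-style doc comment on a helper method. The method and its names are invented for illustration and are not part of the actual ead_converter.rb; the point is that a tool like YARD could extract the @param/@return tags into browsable mapping documentation:

```ruby
# Hypothetical example, invented for illustration only.
#
# Maps an EAD <unitdate> type attribute to an ArchivesSpace date_type.
#
# @param ead_type [String, nil] the value of unitdate/@type
#   ("inclusive", "bulk", or nil when the attribute is absent)
# @return [String] the ArchivesSpace date_type; defaults to
#   "inclusive" when the attribute is missing or unrecognized
def map_unitdate_type(ead_type)
  case ead_type
  when "bulk" then "bulk"
  else "inclusive"
  end
end
```

Because the mapping logic and its documentation live side by side, a change to the code is harder to ship without a matching documentation update.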

Documenting our Review Process

  • Kevin drafted a Confluence page (Import/export mapping review process)

  • These directions should prove to be quite useful for providing guidance to future Sub-Teams

  • Referencing the code directly in certain cases might be more straightforward for discussing the import process and mappings

Bulk Import Issue (https://archivesspace.atlassian.net/browse/ANW-1002)

  • Daniel: This should go directly to the Dev. Pri. meeting given that this has already been evaluated by this group

Christine created a dedicated test server at http://metadata.lyrtech.org/staff

DACS Tooltips Spreadsheet Update

  • Most of the progress has been quite straightforward

  • The priority of this is significantly less than the Importer/Exporter testing

  • It might be more efficient to set aside a month of dedicated focus on this work and finalize it

MARC-XML and EAD Import/Export Sheet Updates

  • Bugs are extremely difficult to test

  • With a massive MARC import, Columbia did not find that many of the more obscure fields were used

  • MARC 342 tag represents a case which is probably a very extreme edge case

  • Paring down the imports for this process might make it much easier

  • We should explore defining a core group of minimum supported fields

  • Prioritizing testing by determining which elements in MARC and EAD are used

  • Usage surveys have attempted to find points at which MARC’s complexity could be reduced where the use cases are not relevant

Meeting adjourned at 16:00 EST