
Overview


At the request of ArchivesSpace Development team member Susan Pyzynski (Harvard University), Emilie Hardman (Houghton Library, Harvard University) conducted a usability test of the ASpace PUI with a focus on navigation and searching within a collection. Tests were conducted at the Harvard Library User Research Center on Friday, October 7, 2016, with 5 participants recruited by the Harvard Student Agencies. Participants were sophomore, junior, and senior students at Harvard University. 2 students were in STEM fields, one of whom had used archives out of personal interest in the past, while the other had no experience. 3 students were arts and/or social sciences concentrators (majors), and 2 had previously used archives for class assignments. Notes were recorded in a grid concurrent with the tests, and screen captures were reviewed to fill in detail after the fact.


The Test


A script was developed to help structure the experience of each participant. The purpose of the site was briefly explained and a scenario was offered to guide the activities. Students were told that a professor had assigned a research paper on a person named John Jeffries and instructed them to make substantive use of the Jeffries archive as a primary source of evidence. The professor told students that the Jeffries papers were held by Harvard’s Houghton Library and that they should look them up online to determine what parts of the archive they might want to use before going into the library. Students were told they had gone to the website their professor supplied, entered “John Jeffries” in a search bar, and that the first link returned opened to: http://aspace.hudmol.com:7310/repositories/5/archival_objects/8157. This page was preloaded for participants.

...

Prior to testing we also calculated mean values for time on task with users who had some familiarity with the ASpace system. This value offers an opportunity to compare an efficient means of task completion with the time it takes a new user to understand how to accomplish a task within the site. Due to a corruption of video files on transfer from Morae, however, it was not possible to extract all of the desired time-on-task values.



The Tasks

Task 1: Locate overview of collection contents

Tasks 2 + 5: Locate non-specific group of materials

Task 3: Locate specific items within finding aid

Task 4: Locate biographic information on collection creator in finding aid


Findings and Recommendations


Initial Impressions


Summary

Dropped into an archival object page, a majority of the participants expressed confusion and some frustration that there was not enough information on the page for them to determine what they were looking at and what it meant. Three participants commented on the large (“excessive”) expanse of white space on the page and wondered specifically if they were “looking at the right thing” because there didn’t seem to be enough on the page. “Maybe it didn’t load?” a participant asked, thinking the page display was an error. For another participant, the result felt “disembodied.” The minority opinion, expressed by one participant, was that the site had a “nice clean design” and felt useful and orderly.

...

Four participants struggled with the prevalence of dates on this page as an additionally confusing element. While most participants were able to parse the title and express understanding that the first range of dates, “1745-1819,” represented Jeffries’ birth and death dates and the second range, “May-December 1786,” the dates the diary was kept, some were confused about the breadcrumb date range of “January 1-July 21, 1779” (“what does that have to do with this diary?”) and the creation dates at the bottom of the page. In fact, this particular example may have a heightened effect because it contains the nonsensical statement “Creation: 1786 Creation 1786,” which may be a repeated-field error or an anticipated date range. Regardless of the error, though, participants were confused by any repetition of date data from the title field.


[Image: date.jpg]


Recommendations


Rebalance the page layouts to make better use of available space. To contribute to a sense of completeness, include information on each page situating the material within the collection, e.g. present both higher and lower levels of description. Accounting for the fact that a user may enter the collection at any level, it may also be appropriate to consider making the collection’s abstract consistently visible. It may also be useful to consider restyling breadcrumbs to emphasize their existence, to provide users with a sense of their hierarchical arrangement, and to offer some insight into the meaning of this hierarchy.

...

Consider restyling creator dates to visually separate and/or de-emphasize them. The Wikipedia parenthetical standard may be useful, e.g. Lucy Eldine Gonzalez Parsons (c. 1853 – March 7, 1942); a sketch of this treatment follows. Date information is present both in the title of the object and in a separate “creation” field: presenting it in the creation field may be more effective and sensible to users, and may obviate the need for it in the title.
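
One possible rendering of that parenthetical treatment is sketched below in TypeScript; the Creator shape and its field names are illustrative assumptions, not drawn from the ASpace data model.

```typescript
// Hypothetical Creator shape; field names are illustrative, not the ASpace data model.
interface Creator {
  displayName: string;
  birthDate?: string;   // e.g. "c. 1853"
  deathDate?: string;   // e.g. "March 7, 1942"
}

// Render creator dates in a de-emphasized parenthetical after the name,
// following the Wikipedia-style convention cited above.
function formatCreatorHeading(creator: Creator): string {
  const { displayName, birthDate, deathDate } = creator;
  if (!birthDate && !deathDate) {
    return displayName;
  }
  return `${displayName} (${birthDate ?? "?"} \u2013 ${deathDate ?? "?"})`;
}

// Example: "Lucy Eldine Gonzalez Parsons (c. 1853 – March 7, 1942)"
console.log(formatCreatorHeading({
  displayName: "Lucy Eldine Gonzalez Parsons",
  birthDate: "c. 1853",
  deathDate: "March 7, 1942",
}));
```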



Task 1: Locate overview of collection contents


Summary


In the first task participants were asked to use the site to find a way of getting an overview of all of the contents of the collection. The expectation was that participants would identify the Contents and Arrangement section and skim it to verify that this was an area where they would be able to see descriptions of the collection’s contents. Three people familiar with the ASpace interface also performed this task to provide a comparative measure that could serve as a baseline for efficient task completion.

...

60% of the participants made use of the breadcrumbs in some way during this task. There was, however, no consistency in their use: one participant clicked on “Houghton Library” (“because it’s logical to look at the first one in this list”), one on the next level up (a subseries of diaries), and one on the “John Jeffries Papers.” The other participants appeared not to see the breadcrumbs.


Getting to Overview of Collection (time on task, in seconds)




           Participants    Experienced Users    Difference
Minimum    54.4            12.8                 .43
Maximum    240.5           22.8                 218
Mean       178.2           19.4                 165


...

These data are from the three successful participants and are meant to help contextualize the experiences, not to serve as scientifically rigorous measures of time on task. Certainly “success” on this task was loosely defined, but it should have had a clear and quick path to obtaining a general overview of the collection. At the efficient end of the spectrum we see only about half a minute’s difference between the participant and the experienced user, but at the upper and middle ranges we see differences of almost three to more than three and a half minutes. Outside of a testing environment a user would be highly unlikely to spend that much time trying to understand and navigate a site.


Recommendations


Provide a table of contents on every page view a user might encounter. Collapsed views may be sufficient, but clearly identifiable icons should be employed to indicate that more levels follow. These icons, if used for collapsible views, should either be incorporated into the textual link or themselves work to open the additional levels, as in the sketch below. Currently, the arrows used to indicate that more content is available are not actionable, and participants tried to click them with no result.
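
A minimal sketch of the second option (making the arrows themselves actionable) follows; the class names and markup structure are assumptions for illustration, not the PUI’s actual templates.

```typescript
// A sketch of making expand/collapse arrows actionable. Assumed markup (illustrative only):
//   <li class="toc-node">
//     <span class="expand-arrow"></span><a href="...">Series I</a>
//     <ul class="children" hidden>...</ul>
//   </li>
document.querySelectorAll<HTMLElement>(".toc-node > .expand-arrow").forEach((arrow) => {
  arrow.setAttribute("role", "button");
  arrow.setAttribute("aria-expanded", "false");
  arrow.addEventListener("click", () => {
    const children = arrow.parentElement?.querySelector<HTMLElement>(".children");
    if (!children) return;
    children.hidden = !children.hidden;               // toggle the nested level
    arrow.setAttribute("aria-expanded", String(!children.hidden));
  });
});
```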

...

As above, this task points to the utility of restyling breadcrumbs to emphasize their existence, to provide users with a sense of their hierarchical arrangement, and to offer some insight into the meaning of this hierarchy. This seems particularly important for users who may come in at a component level and need a sense of how it relates to the collection in which it is situated.



Tasks 2 and 5: Locating non-specific materials within finding aids


Summary


Two different tasks were designed to see how participants would go about searching for materials with a broad theme or function. Two opportunities with different subject matter were provided to give additional exposure to the user experience of this core component of the research process. The tasks were spaced out through the test, one at the beginning and one at the end, to see whether additional experience with the site would make it easier for participants to get results more efficiently. In the first task participants were asked to find any materials about Jeffries’ ballooning activities, and in the second they were asked to find materials on Jeffries’ surgical activities.

...

[Screen capture: mouse track circling white space as participant describes feeling like he’s hit a dead end]


Recommendations

Invest in the search algorithm and experiment with ways of returning results more in line with the expectations of users familiar with Google. Several users also recommended autofill or suggestions of other search terms, especially if a search is unsuccessful.
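
A rough sketch of the “did you mean” behavior users asked for is below; the /search and /suggest endpoints and their JSON shapes are hypothetical stand-ins, and a real suggester might be backed by the search index’s spellcheck component.

```typescript
// A sketch of a "did you mean" fallback when a collection search returns nothing.
// The /search and /suggest endpoints and their response shapes are hypothetical.
async function searchWithSuggestions(query: string): Promise<string> {
  const searchRes = await fetch(`/search?q=${encodeURIComponent(query)}`);
  const results: { total_hits: number } = await searchRes.json();
  if (results.total_hits > 0) {
    return `Found ${results.total_hits} results for "${query}".`;
  }
  // No hits: ask the (hypothetical) suggester for near-miss terms and offer them
  // rather than presenting the user with a dead end.
  const suggestRes = await fetch(`/suggest?q=${encodeURIComponent(query)}`);
  const { suggestions }: { suggestions: string[] } = await suggestRes.json();
  return `No results for "${query}". Did you mean: ${suggestions.join(", ")}?`;
}
```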

...

Address the sense that a page may be a “dead end” by adding information about how to access the material.



Task 3: Locating specific items within finding aid


Summary


Participants were asked to locate a specific item that Jeffries wrote reflecting on his balloon ascent. Unlike the previously described tasks, in which groups of materials would have served the hypothetical research interest, there is only one item within this collection that would fulfill the terms of this task. This task was also designed to have participants spend a little more time on an item/component-level page, because they were asked additional questions about the data presented on it.

...

Comparing the time it takes to find a specific item in ASpace to the other PUIs tested in previous work, we find that similar users experiencing the Smithsonian or Princeton sites for the first time were able to locate a specific item in half the time. These values are for the mean time on task in seconds.

[Image: Time_on_tasks__AS01.jpg — mean time on task comparison]


Recommendations


As previously recommended, invest in the search algorithm and restyle the collection search button to encourage its use.

...

Seek new language to describe acquisition data.



Task 4: Locating information contained in <bioghist> / <scopecontent>


Summary


In this task, participants were asked to identify who Jeffries flew with in his famous ballooning trip across the English Channel. This information is prominent in the <bioghist> and <scopecontent> notes for this collection.

...

Of the two successful participants, one first looked at some of the component descriptions in Contents and Arrangement before hypothesizing that Benjamin Joy Jeffries was the person who made the trip with Jeffries. When prompted to find support for this guess, the participant noted that Benjamin Joy was writing about the one hundredth anniversary of the flight and so this was probably not a good guess. The participant spent just over two minutes seeking the correct information elsewhere before locating it in the Scope and Content note. The other successful participant’s quote is displayed above; his time to identify was 46.28 seconds. The fastest path to success with ASpace on this task maps to the slowest path to success experienced by first-time users of the Smithsonian and NYPL sites, aligning more with the generally less efficient user experience on Princeton’s site for this task.


Minimum time spent on task by a participant

[Image: biog.jpg]


Maximum time spent on task by a participant

[Image: Time_on_tasks__AS02_max.jpg]

Recommendations


Restyle <bioghist> and make it a feature on the component pages. Consider offering this information in the form of a sidebar populated with additional SNAC data and/or Google Knowledge Graph data.
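
A minimal sketch of such a sidebar follows; fetchSnacSummary and its /snac-proxy endpoint are hypothetical placeholders for a server-side call to SNAC (or the Knowledge Graph), not an existing PUI facility.

```typescript
// Hypothetical summary shape returned by an assumed server-side authority lookup.
interface CreatorSummary {
  name: string;
  dates?: string;
  biography?: string;
}

// Assumed proxy endpoint; a real implementation would call SNAC (or the Google
// Knowledge Graph API) server-side and cache the result.
async function fetchSnacSummary(creatorName: string): Promise<CreatorSummary | null> {
  const res = await fetch(`/snac-proxy?name=${encodeURIComponent(creatorName)}`);
  return res.ok ? res.json() : null;
}

async function renderCreatorSidebar(creatorName: string, container: HTMLElement): Promise<void> {
  const summary = await fetchSnacSummary(creatorName);
  if (!summary) return; // fall back to the existing <bioghist> note alone
  container.innerHTML = `
    <h3>${summary.name}${summary.dates ? ` (${summary.dates})` : ""}</h3>
    <p>${summary.biography ?? ""}</p>`;
}
```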


Other observations


Two enhancements were requested during wrap-up by all but one of the participants. The most time across all tests was spent discussing participants’ wish for images of the material. This was sometimes phrased as a desire for digital access, which is of course not always possible to provide, a fact that participants understood and even preemptively articulated. Participants thought an image or images would help them vet and better understand the material being described to them. It would not, they emphasized, be merely decorative; it would be confirming and would help them decide whether or not the material could be useful to them, “even if it was just one or two pages.”

...

The other desire clearly articulated by participants was a “more robust search engine.” Participants wanted the site to provide suggestions for other search terms, especially if a search returned no results. They were also adamant about wanting a more sophisticated search function; the current search eroded trust in the site and damaged participants’ confidence in their ability to make any positive use of it.





Submitted by: Emilie Hardman, October 19, 2016