
Overview

...

A script was developed to help structure each participant's experience. The purpose of the site was briefly explained and a scenario was offered to guide the activities. Students were told that a professor had assigned a research paper on a person named John Jeffries and had instructed them to make substantive use of the Jeffries archive as a primary source of evidence. The professor told students that the Jeffries papers were held by Harvard's Houghton Library and that they should look them up online to determine what parts of the archive they might want to use before going into the library. Students were told they had gone to the website their professor supplied and entered "John Jeffries" in a search bar, and that the first link returned opened to: http://aspace.hudmol.com:7310/repositories/5/archival_objects/8157. This page was preloaded for participants.

...

Prior to testing we also calculated summary time-on-task values (minimum, maximum, and mean) for users who had some familiarity with the ASpace system. These values offer an opportunity to compare an efficient means of task completion with the time it takes a new user to understand how to accomplish a task within the site. Due to corruption of video files on transfer from Morae, however, it was not possible to extract all of the desired time-on-task values.



The Tasks

Task 1: Locate overview of collection contents

Tasks 2 + 5: Locate non-specific groups of materials

Task 3: Locate specific items within finding aid

Task 4: Locate biographic information on collection creator in finding aid


Findings and Recommendations


Initial Impressions


Summary

Dropped into an archival object page, a majority of the participants expressed confusion and some frustration that there was not enough information on the page for them to determine what they were looking at and what it meant. Three participants commented on the large ("excessive") expanse of white space on the page and wondered specifically if they were "looking at the right thing" because there didn't seem to be enough on the page. "Maybe it didn't load?" a participant asked, thinking the page display was an error. For another participant, the result felt "disembodied." The minority opinion, expressed by one participant, was that the site had a "nice clean design" and felt useful and orderly.

...

Two participants indicated that their first impression of this page was that they were not qualified to use it. One specified that it felt "esoteric" and alienating.





“I feel off the bat that I’ve landed on something very esoteric. Really something for people who specialize in this kind of research.”





More specific concerns and questions about the page were also raised. The “Archival Object” icon was seen as actionable by a majority of participants, and most tried to click it, assuming it was a button and that it might produce a useful result, e.g., load digital images.

...

Four participants cited the prevalence of dates on this page as an additionally confusing element. While most participants were able to parse the title and understood that the first range of dates, “1745-1819,” represented Jeffries’ birth and death dates and the second range, “May-December 1786,” the dates the diary was kept, some were confused about the breadcrumb date range of “January 1-July 21, 1779” (“what does that have to do with this diary?”) and the creation dates at the bottom of the page. This particular example may have a heightened effect because it is a nonsensical statement, “Creation: 1786 Creation 1786,” which may be a repeated-field error or an anticipated date range. Regardless of the error, though, participants were confused by any repetition of date data from the title field.


date.jpg


Recommendations


Rebalance the page layouts to make better use of available space. To contribute to a sense of completeness, include information on each page situating the material within the collection, e.g. present both higher and lower levels of description. Accounting for the fact that a user may enter the collection at any level, it may also be appropriate to consider making the collection’s abstract consistently visible. It may also be useful to consider restyling breadcrumbs to emphasize their existence, to provide users with a sense of their hierarchical arrangement, and to offer some insight into the meaning of this hierarchy.

...

Consider restyling creator dates to visually separate and/or de-emphasize them. The Wikipedia parenthetical standard may be useful, e.g. Lucy Eldine Gonzalez Parsons (c. 1853 – March 7, 1942). Date information is present both in the title of the object and in a special “creation” field; presenting it in the creation field may be more effective and sensible to users and may obviate the need for it in the title.
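The parenthetical style suggested above is simple enough to template. A minimal sketch of the idea (the function name and parameters are hypothetical, not part of any ASpace API):

```python
def format_creator(name, birth, death, birth_approx=False):
    """Format a creator heading in the Wikipedia parenthetical style,
    e.g. 'Lucy Eldine Gonzalez Parsons (c. 1853 – March 7, 1942)'."""
    # Prefix an approximate birth year with "c." (circa).
    born = f"c. {birth}" if birth_approx else str(birth)
    return f"{name} ({born} – {death})"

print(format_creator("Lucy Eldine Gonzalez Parsons", "1853",
                     "March 7, 1942", birth_approx=True))
# → Lucy Eldine Gonzalez Parsons (c. 1853 – March 7, 1942)
```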



Task 1


Summary


In the first task participants were asked to use the site to find a way of getting an overview of all of the contents of the collection. The expectation was that participants would identify the Contents and Arrangement section and skim it to verify that this was an area where they would be able to see descriptions of the collection’s contents. Three people familiar with the ASpace interface also performed this task to provide a comparative measure that could serve as a baseline for efficient task completion.

...

The Contents and Arrangement section was consulted in the course of this task by only one of the participants. The participant read through the three top-level series displayed on the main collection page, clicked on the first series (trying first to click the arrow icon before realizing only the linked words were actionable), and scanned the items in it. He did not return to the collection page to see the contents of the other two series. Two participants made use of the Scope and Content note and indicated that it told them what they needed to know about the contents of the collection. The remaining 40% of participants could not find a way to obtain an overview of the collection. One indicated that he would “just go to a librarian” for assistance.





“I think I probably would have just Googled this to find out what’s in the collection, but you told me to stay here.”





60% of the participants made use of the breadcrumbs in some way during this task. There was, however, no consistency in use: one participant clicked on “Houghton Library” (“because it’s logical to look at the first one in this list”), one on the next level up (a subseries of diaries), and one on the “John Jeffries Papers.” The other participants appeared not to see the breadcrumbs.

...

Getting to Overview of Collection (time on task, in seconds)

            Participants    Experienced Users    Difference
Minimum     54.4            12.8                 41.6
Maximum     240.5           22.8                 217.7
Mean        178.2           19.4                 158.8

These data are from the three successful participants and are meant to help contextualize the experiences, not to serve as scientifically rigorous measures of time on task. Certainly “success” for this task was loosely defined, but there should have been a clear and quick path to obtaining a general overview of the collection. At the efficient end of the spectrum we see only about half a minute’s difference between the participant and the experienced user, but at the middle and upper ranges we see differences of almost three to more than three and a half minutes. Outside of a testing environment a user would be highly unlikely to spend that much time trying to understand and navigate a site.
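The difference column is a simple per-statistic subtraction of the experienced-user value from the participant value. A minimal sketch using the summary values reported above (in seconds):

```python
# Summary time-on-task values (seconds) for the overview task.
participants = {"minimum": 54.4, "maximum": 240.5, "mean": 178.2}
experienced = {"minimum": 12.8, "maximum": 22.8, "mean": 19.4}

# Difference: participant time minus experienced-user time, per statistic.
differences = {
    stat: round(participants[stat] - experienced[stat], 1)
    for stat in participants
}

for stat, diff in differences.items():
    print(f"{stat}: {diff} s ({diff / 60:.1f} min)")
```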


Recommendations


Provide a table of contents on every page view a user might encounter. Collapsed views may be sufficient, but clearly identifiable icons should be employed to indicate that more levels follow. These icons, if used for collapsible views, should either be incorporated into the textual link or themselves work to open the additional levels. Currently, the arrows used to indicate that more content is available are not actionable, and participants tried to click them with no result.

...

As above, this task points to the utility of restyling breadcrumbs to emphasize their existence, to provide users with a sense of their hierarchical arrangement, and to offer some insight into the meaning of this hierarchy. This seems particularly important for users who may come in at a component level and need a sense of how it relates to the collection in which it is situated.



Task 2: Locating non-specific materials within finding aids


Summary


Two different tasks were designed to see how participants would go about searching for materials with a broad theme or function. Two opportunities were provided with different subject matter to allow for some extra exposure to the user experiences in this core component of the research process. The tasks were spaced out through the test, one at the beginning and one at the end, to see if additional experience with the site would make it easier for participants to get results more efficiently. In the first task participants were asked to find any materials about Jeffries’ ballooning activities and in the second they were asked to find materials on Jeffries’ surgical activities.

...

Several participants were notably frustrated with the search and felt that they were using words that should yield results. When participants pared down to searches like “balloon,” they returned to this theme and wondered why “ballooning” wouldn’t have worked (“does it have to be that specific?”).





“It doesn't work like Google and I’m doubting the algorithm. It's a little bit frustrating that I'd have to waste time trying to figure out how this search algorithm works. I can almost guarantee then that there might be like twenty more results that I'm not seeing. So I don't feel confident."





Two participants also commented specifically on the number of search results returned and the fact that they could not create a more specific initial search to limit returns or search within their results. Though a filter option is offered for dates, the function does not currently appear to work.

...

A participant who returned a list of surgical notes in the collection clicked on one of the results and said he thought he’d found something that would satisfy the task, but that it also felt like a “dead end.” With his mouse, he circled the white space on the page.





“This and other pages feel like dead ends. I want to know: like how do I get it? Can I see here at least one or two pages to see if it's useful? What do I even do from here?”





dead_ends.jpg

                mouse track circling white space as the participant describes feeling like he’s hit a dead end


Recommendations

Invest in the search algorithm and experiment with ways of returning results more in line with the expectations of users familiar with Google. Several users also recommended autofill or suggestions of other search terms, especially if a search is unsuccessful.

...

Address the sense that a page may be a “dead end” by adding information about how to access the material.



Task 3: Locating specific items within finding aid


Summary


Participants were asked to locate a specific item that Jeffries wrote reflecting on his balloon ascent. There is only one item within this collection that would fulfill the terms of this task, unlike previous tasks described, in which groups of materials would have served the hypothetical research interest. This task was also designed to have participants spend a little more time on an item/component level page because they were asked additional questions about the data presented on this page.

...

In an attempt to help the participants make sense of the application, the initial introduction to this test included a statement that ArchivesSpace was “like a Hollis (Harvard Library catalog) for archives.” This meant that participants were more likely than they would have been otherwise to have the concept of descriptive surrogacy available to them as a framework for their encounter with ASpace. More confusion might have been observed without this guidance, but it did not entirely resolve the anticipated issue. It is also likely the reason several participants compared this component page unfavorably to a record for a book. It was “like a Hollis page, but really blank” or “missing all of the detail I get to see for a book, especially like a link for it online or something about how I get it, like where it is in the library.” Others also expressed confusion over how to get the material and specifically asked where the link to digital images was. This task seems to have been a tipping point for some participants who, in the midst of it, expressed comprehensive frustration and confusion.





“From the moment I saw the site I was automatically like I’m going to be frustrated and, here, exactly, it’s just not giving me the right details.”




as03_white_space.jpg

screengrab illustrating mouse trail of the participant expressing frustration

...

Comparing the time it takes to find a specific item in ASpace to the other PUIs tested in previous work, we find that similar users experiencing the Smithsonian or Princeton sites for the first time were able to locate a specific item in half the time. These values are mean time on task in seconds.

Time_on_tasks__AS01.jpg


Recommendations


As previously recommended, invest in the search algorithm and restyle the collection search button to encourage use.

...

Seek new language to describe acquisition data.


Task 4: Locating information contained in <bioghist> / <scopecontent>


Summary


In this task, participants were asked to identify who Jeffries flew with in his famous ballooning trip across the English Channel. This information is prominent in the <bioghist> and <scopecontent> notes for this collection.

...

As in the previously conducted tests comparing a variety of archival discovery PUIs, this group of participants did not expect to be able to find, or to easily find, contextualizing historical details on this site.





"Honestly, I'd do a lot of things outside of this website because I don't expect it to tell me anything about the big historical context. So I'd look up a bunch of this stuff to get a better sense of dates and locations, and info about what things I'd really want to look at." [Later, reading Scope and Contents] “...actually it’s giving me good information."




60% of the participants were unable to complete this task. One participant indicated that, if not for the artificial circumstances of the test, he never would have looked for the information on this site. The unsuccessful participants looked in a variety of places, but did not extract the relevant information. One participant expressed certainty that the information was there but that he just didn’t know how to find it because he lacked sufficient background knowledge, while another hypothesized that this site wouldn’t be the one to give that information. The third participant expressed a sense of futility and a clear desire to move on to another task.

...

Maximum time spent on task by a participant:

Time_on_tasks__AS02_max.jpg

Recommendations


Restyle <bioghist> and make it a feature on the component pages. Consider offering this information in the form of a sidebar populated with additional SNAC data and/or Google Knowledge Graphs.


Other observations


Two enhancements were requested during wrap-up by all but one of the participants. Across all tests, the most time was spent discussing participants’ wish for images of the material. This was sometimes phrased as a desire for digital access, which is of course not always possible to provide, a fact that participants understood and even preemptively articulated. An image or images, participants thought, would help them vet and better understand the material being described to them. It was, they emphasized, not just decorative; it would be confirming and would help them decide whether or not the material could be useful to them, “even if it was just one or two pages.”

...

The other desire clearly articulated by participants was a “more robust search engine.” Participants wanted the site to provide suggestions for other search terms, especially if no search results were returned. They were also adamant about wanting a more sophisticated search function; the current search eroded trust in the site and damaged participants’ confidence in their ability to make any positive use of it.





Submitted by: Emilie Hardman, October 19, 2016