In August, Harvard Library opened its User Research Center (URC), where library staff can discuss, design, and implement in-person and device-based user experience research.
According to Susan Fliss, associate university librarian for research, teaching, and learning and director and librarian of Monroe C. Gutman Library, this is the next step in a change in focus for Harvard’s library system. “Over the past several years, Harvard librarians and staff have been investing time in developing skills in anthropological survey design and user testing. While we had many people who were undertaking user design projects, the projects were dispersed across libraries and schools.” With a centralized center now in place, Fliss hopes that Amy Deschenes, library user experience specialist, and Kris Markman, online learning librarian, can coordinate usability efforts across all of Harvard’s libraries.
Rather than rely on “anecdata”—stories about users that “everyone knows to be true”—Harvard’s librarians are looking for harder information. They want to watch users interacting with online resources and physical spaces themselves. “Evidence-based decision-making is driving change at Harvard,” explained Markman, “and the Center is a manifestation of Harvard’s commitment to this.”
The URC, located in Harvard’s Lamont Library, houses a screen-monitoring station, an eye-tracking device station on an adjustable table, two observation monitors and a wall screen for large-group monitoring, and many portable devices for field work.
Deschenes and Markman wanted to offer a highly flexible set of tools to serve the widest range of users and support the broadest spectrum of research methods possible. To that end, the URC’s tools range from a simple webcam and camcorder, to Camtasia repurposed as a screen capture program for recording web-based behavior, to dedicated user experience software like Morae (TechSmith) and Silverback (Mac), to the crown jewels in the collection: eye-tracking devices from Tobii.
In addition to a standard eye-tracker for computer-based testing, the URC has eye-tracking glasses (similar to Google Glass) that allow observers to watch where a participant looks while walking through a physical space. Markman can’t wait to use them to evaluate library exhibitions. “How do people engage with materials in a case—do they skim, do they read the labels, do they look from left to right or look at bigger objects first, etc? This is really hard data to get from other methods; the glasses show you exactly what the wearer is looking at, and for how long.”
How staff use the center
Ideally, library staff contact the URC before a project starts and schedule an initial consult. At such a consultation, Deschenes works with the group to determine research questions for the project and the best methods to answer those questions. (Projects often involve much more than standard website usability testing, including survey design and test script writing.) If library staff are doing the observation and testing themselves, Deschenes trains them on the software and devices and discusses best practices for conducting tests. Some library staff do testing in the URC itself, and others go where their users are with mobile kits. After the data collection is done, Deschenes facilitates debrief sessions to discuss results and relevant clips from observational videos. Staff can also use statistical analysis software at the URC to sort through raw data.
To streamline the early steps in the process, Deschenes said that they’re “working towards developing training materials, templates, and examples of past projects and their results in something like an FAQ, so that people can see if a project has been done before and what the results were.”
User research in practice
One of the URC’s first projects involved usability testing on five research guides on the LibGuides platform (Springshare) to prepare for migrating to the latest version. Markman wanted to answer the question, “Are LibGuides serving the research problems our students actually have?”
During the testing, they found that users generally perceived the guide content to be overwhelming, complicated, and hard to read. Users preferred searching over reading lists of resources, wanted larger fonts, and favored a single librarian profile that made clear who the ‘right’ contact was. “Based on these results,” said Markman, “we created a best practices guide for designing in LibGuides,” including using left-side navigation and larger font sizes.
After some additional eye-tracking tests on two versions of the same guide, Markman added that her “work with the intern this fall is to follow up on these studies to continue to improve both how we design and lay out content on guides, and to inform the kinds of content we include.”
For the URC’s Open House, Markman and Deschenes developed the Research Skills Challenge—a short survey that aims to get a quick glimpse into what Harvard students’ research behaviors are, a little bit about what they know, and what kinds of research support they prefer. Markman said, “I think that sometimes the resources that we provide may not be things that students think they need, and therefore these resources are being unused or underused. We’ve collected data from about 96 undergraduates thus far, but I hope to be able to collect data from different graduate student populations in the future.”
An early external project was initiated by Skip Kendall, senior collection development and electronic records archivist at the Harvard University Archives. “We were trying to decide whether to continue developing our own web archiving system, use a vendor, or take a hybrid approach. Comparing functionality was relatively simple but we needed to know what preferences users might have in the interface.” After Deschenes met with the working group to outline the project, she helped its members decide which methods to use, write a testing script, and determine how many subjects were needed and from what backgrounds; she also conducted a hands-on training session for the group and volunteers. “Amy set up the equipment and software for us as much as possible with very clear instructions on how to use it. I did my testing at the Center but some of the group did it offsite. The tools were top quality and very easy to use,” said Kendall. In contrast, “A couple years ago, some of my colleagues needed to do focus group work on a system interface and had to first learn how to do a focus group, relying on written sources that made the process more difficult and time-consuming than it would have been with an in-house expert helping them.”
The reactions of library staff to observing user testing can be profound and immediate. Said Fliss, “When librarians testing what they thought were their most effective research guides see users struggle in using them, they immediately want to redesign the guides in order to make the learning experience as effective and engaging as possible.”
Deschenes sees the same effect. Whether they’re viewing a live observation or recorded clips, “When system designers watch [someone struggling to use an interface], it creates empathy for the end-users. They’re less likely to say, ‘The user just needs to learn…’ and more likely to note that the button needs to be bigger.”
That same immediacy of response is why a library not at a major university might want to develop an in-house user research service. Markman and Fliss both believe that user research can lead directly to more effective programs and services at libraries. Said Markman, “Our users come from a context. How do we understand the world our users are coming from and how can we use that information to do our jobs better?”
Fliss agrees. “The library’s role…will continue to develop as technology races forward. Libraries are recruiting staff with expertise in new areas to support people in using information in different ways; they offer instruction in visualization methods, data curation practices, and multimedia composition, in addition to teaching people to use online databases and catalogs and how to organize and evaluate information. User research centers will be influential in determining the changing needs for services and the effectiveness of those services.”
In good news for libraries without the resources to create a dedicated center like Harvard’s, the staff say that most of what is needed to conduct in-house user research amounts to portable basics:
- Digital audio recorder
- Camcorder and tripod
- Participant computer and/or laptop
- Screen capture/screencasting software (e.g., Camtasia)
- Headsets with microphones
- A variety of mice (trackball, Mac-style, PC-style)
- Portable hard drives to make backups in the field
- Projector and screen for large group observation
Those with additional resources can add:
- Document camera—allows “over the shoulder” recording for mobile devices
- Usability testing software like Morae or Silverback
- Mobile devices pre-loaded with apps or tools—iPod/iPad/iPhone, Android phone and tablet
- Wall-mounted monitor in a permanent observation room
- Tablet stands—floor and desktop
- Data analysis software—the URC uses NVivo
If you can acquire a larger grant:
- Eye-tracker for computer-based testing
- Eye-tracking glasses for physical space and mobile device testing