September 30, 2014

Doing the Math: Managing Academic Libraries With Data In Mind

This past December, LJ teamed up with Electronic Resources and Libraries (ER&L) to dive deep into the use of data-driven decision-making in academic libraries in a series of three free webcasts. The series was moderated by Bonnie Tijerina, head of e-resources and serials at Harvard Library and ER&L conference coordinator, and made possible thanks to sponsorship by ProQuest, Springer, and Innovative Interfaces. It explored a range of strategies academic libraries are deploying as they use data to serve their customers more effectively.

How can we use data on key metrics such as circulation and student visits to address emerging trends and challenges? Framing the conversation, Sarah Tudesco, assessment librarian, Yale University, addressed “what it really means to be a data-driven organization” in the “What Is a Data-Driven Academic Library?” webcast, the first in the series, held December 4.

From question to story

Tudesco suggested a five-part process for libraries interested in making data central to strategic decision-making: identifying questions, developing a plan to collect the necessary data to answer those questions, collecting the data, analyzing it, and using that analysis to generate actionable recommendations.

First, broad questions need to be broken down into more manageable chunks, such as whether a library is more geared toward graduate students or undergrads, which users participate in programs, and how library spaces are used. Establishing manageable questions helps anchor efforts going forward, Tudesco said.

Once the questions are established, data must be compiled from different sources, which Tudesco grouped into three “buckets”: systems, workflow, and patron input. Systems data is the most straightforward; tools such as Google Analytics, for example, can help track website traffic. Many libraries also have systems to track workflow, such as staff records of questions answered or time spent helping patrons at the reference desk.

Finally, patron input—drawn from surveys, focus groups, and social media analytics—is a source of qualitative data that can offer additional perspective on systems and workflow data.

The collection step also involves placing data into a program in which it can be manipulated and examined, such as a spreadsheet or SQL database. Even ubiquitous programs such as Microsoft Excel are powerful tools in the hands of expert users. “I really advocate becoming proficient at Excel,” Tudesco said, describing it as a “core tool in your arsenal” and advising librarians to consider taking a class on the program.
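
To make that collection-and-manipulation step concrete, here is a minimal sketch in Python (pandas) rather than Excel; the file name and column names are hypothetical stand-ins for whatever a given library system actually exports.

    import pandas as pd

    # Hypothetical export of reference-desk transactions: one row per question,
    # with a timestamp and the patron group recorded by staff.
    transactions = pd.read_csv("reference_desk_2013.csv", parse_dates=["timestamp"])

    # Roll the raw log up by month and patron group so it can help answer a
    # question like "who is actually using this service, and when?"
    summary = (transactions
               .assign(month=transactions["timestamp"].dt.to_period("M"))
               .groupby(["month", "patron_group"])
               .size()
               .unstack(fill_value=0))

    print(summary)
    summary.to_excel("reference_desk_summary.xlsx")  # hand the result back to Excel users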

At the end of the day, analysis should culminate in actionable recommendations. Tudesco advised viewers to keep in mind that library data can be very specialized and that one key goal is to translate this information for a provost or other administrators. Data visualization can help, but “it’s not just about developing beautiful charts,” she said, advising that users “learn to tell a story.”

Using usage data

The second webcast in the series looked at how greater analysis and more advanced benchmarking are helping researchers learn more about the way students and patrons are using resources. Emily Guhde of North Carolina’s electronic resource consortium NC Live kicked off the panel with a discussion of how the service is working to make usage data more valuable for its member libraries.

NC Live provides usage data to its member libraries but still fields questions about what that data means. One of the most common questions the consortium gets from librarians, particularly about electronic resources, is, “What kind of use should we be seeing for our library?” “In times of tight budgets,” Guhde said, “this is information that can help libraries make decisions about which resources to keep and which to cut.” To give its members a good answer, though, NC Live needed to make sure apples were being compared to apples, and that meant setting usage benchmarks for libraries of similar types and sizes.

Breaking down libraries by type and population served let NC Live set those benchmarks for the three types of libraries it serves: public, academic, and community college. That first round of analysis resulted in 20 peer groups. Libraries could see which group they fit into, how their statistics measured up against similar institutions, and how much use they were getting out of services like Academic Search Complete.
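
A rough sketch of that kind of peer-group benchmarking, in Python with pandas, is below. The figures, group labels, and column names are invented for illustration; they are not NC Live's actual data or peer groups.

    import pandas as pd

    # Invented example: annual full-text retrievals reported by six libraries.
    libraries = pd.DataFrame({
        "library":    ["A", "B", "C", "D", "E", "F"],
        "type":       ["academic", "academic", "public",
                       "public", "community college", "community college"],
        "size_band":  ["large", "large", "small", "small", "medium", "medium"],
        "retrievals": [120000, 95000, 18000, 22000, 40000, 31000],
    })

    # A peer group is a type plus a size band; the benchmark is the group median.
    libraries["benchmark"] = (libraries
                              .groupby(["type", "size_band"])["retrievals"]
                              .transform("median"))
    libraries["vs_peers"] = libraries["retrievals"] / libraries["benchmark"]

    print(libraries[["library", "type", "size_band", "retrievals", "vs_peers"]])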

While their research has yet to be completed, the NC Live team has learned some valuable things in the first steps of the benchmarking study. “By gathering information for this study, we found out a lot of our members really are paying attention to usage data,” said Guhde. “Ninety-five percent of community colleges had run at least one usage report from NC Live in the year before the study began.” Knowing that there is such a thirst in the community for that information is driving NC Live to offer more of it, she said.

Next, John MacDonald of the University of Southern California (USC), Jason Price of the California Electronic Library Consortium, and Michael Levine-Clark of the University of Denver discussed the preliminary results of their study on how discovery services affect journal usage. “This is something that’s designed to change user behavior,” said Levine-Clark. “As such, it’s pretty important to look at.” So far, though, the team’s examination of how implementing discovery services changed user behavior suggests that the effects are difficult to judge.

Remeasuring impact

The webcast series came to a close with its third episode, a look at how best to use different kinds of data to measure the impact of research, beyond the traditional impact factor. Gregg Gordon of the Social Science Research Network (SSRN), Jason Priem of ImpactStory, and Jennifer Lin of the Public Library of Science (PLOS) joined Tijerina to discuss how each of their organizations is working to move beyond counting citations toward faster, more customizable views of research impact.

Gordon led off by calling attention to the growing variety of methods available for measuring a study’s impact. Of course, Gordon pointed out, all those different measures can worsen the problem of information overload. At SSRN, the problem is exacerbated by the availability of items related to papers, such as working drafts and updated postpublication versions, which can result in as many as four different versions of one paper becoming available for consumption. To address these and other issues, SSRN has been working with researchers at the University of Washington to track papers by Eigenfactor, a metric that charts not just citations of a paper but its other relationships to other papers. “We think this creates a bigger, fuller, more beautiful picture of scholarship,” said Gordon.
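
Eigenfactor-style scores are computed with a PageRank-like calculation over a citation network, so a toy power-iteration example gives the flavor of what scoring papers by their relationships means. The sketch below is only an analogy under that assumption, not SSRN's or the University of Washington's actual implementation, and the citation matrix is made up.

    import numpy as np

    # Made-up citation network: entry [i, j] = 1 means paper j cites paper i.
    citations = np.array([
        [0, 1, 1, 0],
        [0, 0, 1, 1],
        [1, 0, 0, 1],
        [0, 0, 1, 0],
    ], dtype=float)

    # Normalize columns so each citing paper spreads one unit of influence.
    col_sums = citations.sum(axis=0)
    col_sums[col_sums == 0] = 1.0
    transition = citations / col_sums

    # Power iteration with damping, PageRank-style: a paper scores highly when
    # it is cited by papers that themselves score highly.
    damping, n = 0.85, transition.shape[0]
    scores = np.full(n, 1.0 / n)
    for _ in range(100):
        scores = (1 - damping) / n + damping * transition @ scores

    print(scores / scores.sum())  # influence scores summing to 1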

Priem took over from there, looking at how online publication has the potential to spark a new revolution in scientific communication, a potential that has yet to be fulfilled because journal practices have not evolved as rapidly as publishing technology. Online journals are, in Priem’s words, “the same product, just delivered by faster horses.” By making journals truly web-native, Priem argued, “there’s a chance to publish the missing parts of the scholarly record,” from the data sets that form the foundation of new research to the conversations about it on social media. These so-called “altmetrics” not only measure different data, they can do so faster than traditional citation tracking, allowing researchers to learn in real time how studies are moving the conversation in their field by tracking factors such as citations on Wikipedia and conversation on Twitter.

Along with greater speed, said Lin, who closed the webcast and the series, altmetrics allow for more customization in monitoring a paper’s impact. The head of a university department, a librarian, and a member of the press, she pointed out, may have different definitions of impact for a study that traditional metrics are ill-equipped to serve. That’s why PLOS has instituted a series of measurements called article-level metrics. “What article level metrics provide is the ability to reconceptualize what research reach means,” Lin said. “And that really depends on what you’re interested in.” Article-level metrics let anyone interact with the real-time data on PLOS articles and can be customized with factors such as author names, research institutions, and funding agencies. Users can see not only citations but more rapidly reported information like bookmarks and downloads, all of which can offer varied insights that may be more valuable to a range of audiences. “Depending on how you slice and dice the data,” said Lin, “very different stories emerge.”
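
As a toy illustration of the “slice and dice” point, the sketch below filters a small, invented set of article-level metrics by funder; the field names echo the factors Lin mentions but are not drawn from PLOS's real data or API.

    import pandas as pd

    # Invented article-level metrics records for four articles.
    alm = pd.DataFrame({
        "article":   ["paper-1", "paper-2", "paper-3", "paper-4"],
        "funder":    ["NIH", "NSF", "NIH", "Wellcome"],
        "downloads": [5400, 1200, 8900, 2300],
        "bookmarks": [85, 12, 140, 40],
        "citations": [12, 3, 7, 9],
    })

    # One audience's slice: how are NIH-funded articles doing beyond citations?
    print(alm[alm["funder"] == "NIH"])

    # Another slice: which funder's articles are read the most per citation?
    totals = alm.groupby("funder")[["downloads", "citations"]].sum()
    totals["downloads_per_citation"] = totals["downloads"] / totals["citations"]
    print(totals.sort_values("downloads_per_citation", ascending=False))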

Ian Chant is Associate Editor, News, and Matt Enis is Associate Editor, Technology, LJ


This article was published in Library Journal's February 1, 2014 issue.
