I’m a true believer when it comes to qualitative assessment: give me a room full of people and an hour and I’ll gladly do a focus group. But quantitative assessment based on data and metrics? Not so much. In fact, not at all… until I worked on a survey project with Sarah Tudesco, an organizational performance reporting and assessment librarian at Harvard. Sarah is not your typical data analysis person (or at least not my idea of one); although she works with data, figures, and reports constantly, she looks beyond them to get an accurate reading on a problem or process. I learned a ton from her while we completed the project together, and I also learned a lot about her approach to data. In the interest of sharing that thoughtful (and practical) approach with others, I had a conversation with Sarah about the assessment work she does. Here’s some of what I learned:
Sarah is fascinated by data and what it reveals about a library and its impact. She’s captivated, for instance, by the numbers revealed in collections’ usage statistics, and by tying them to the impact of a library on the institution it serves. She notes that most of the studies currently trying to map research activity in institutions are heavily skewed toward the sciences, and she therefore sees a need to study the research usage of archives and primary sources, since these are “the labs” for the humanities.
Her approach to quantitative assessment is holistic; she feels the need to maintain a balance between numbers and contextual knowledge. As she puts it, “a single metric is not enough to tell any story accurately.” She’s concerned that some metrics now being used to look at the impact of scholarship don’t reflect enough of the specific scholarly interests of researchers today—these metrics don’t fully reveal how research is changing. She notes that fields of study have become increasingly specific, yet research itself has become increasingly interdisciplinary. Sarah sees the need to monitor how different fields interact to see what new fields these interactions create. She’s also interested in using data to track trends in the creation of new scholarship: how does it come about that people create great new things?
As Sarah tells it, it’s possible to do interesting things with the algorithms used in analyzing data. She finds it especially useful to connect programmers with bibliographic information to discover what the data tells us about our assumptions. She emphasizes that analyzing chunks of data is an iterative process; you don’t get the right answer the first time you collect the data. Repeating the data collection reveals the limitations of the data, which can then be supplemented by qualitative information. Understanding the limitations of data means you know that data alone won’t reveal the whole picture. (This, of course, is one of the main reasons I trust Sarah’s data analysis—she doesn’t rely solely on the numbers.)
Sarah also sees the need to be critical of data, to have someone involved in the analysis who understands the “entire story” of the problem being studied, someone who can understand the context of the data, especially what’s missing from it. She aims to construct an overall narrative, or story, that the data and other information can tell, emphasizing the need to add information from people on the ground (those who are doing the work that’s being studied, and who work with library researchers).
I was particularly interested to hear Sarah say that “trying to assess everything according to a single standard is flawed—you need to respect and allow for the complexity of an issue to assess it accurately.” [She then shared with me an anecdote about Frederick J. Kelly, the “father” of the multiple-choice test, who in his later work changed his mind about judging work by such limited metrics.] She further noted that the typical American corporate practice of “throwing out anything that doesn’t fit on a spreadsheet” does not give a good overall picture of the issue being studied.
Sarah’s very much a “people person”—she likes to work with a diverse group of colleagues across a wide variety of library operations. She also likes to teach others how to analyze data (she facilitates a library data analytics group that helps people in many libraries do more detailed analysis than they might otherwise be able to do). She’s auditing a data visualization course and is imagining how to use what she learns to “make data come alive” for people. She often listens to music while analyzing data; her favorite music to analyze by includes Bach’s Chromatic Fantasy and Fugue, Mozart, and Kanye West (for when the data needs disciplining). When I asked her what she foresaw for library data in the future, she observed that it’s going to be more important to connect it to the open web, and more important than ever before for libraries to intermingle their data to enable us to identify real library trends.
For my part, it sets my mind at ease to know that Sarah is doing the work she’s doing the way she’s doing it, and that she’s keeping an eye to the future to find ways of impacting libraries positively. Now that’s a great use of data!
Read eReviews, where Cheryl LaGuardia and Bonnie Swoger look under the hood of the latest library databases and often offer free database trials