May 24, 2017

LJ Index 2014: Another Kind of Outcome Data

Generally, public library outcome assessment has relied on self-reported data from library patrons. This is the approach taken by the Impact Survey project of the University of Washington Information School; in turn, the Impact Survey is recommended specifically as the way to meet the Edge Initiative’s outcome-based evaluation mandates. It is also the approach recommended by the Institute of Museum and Library Services (IMLS) for most project evaluations. In many cases, voluntary self-reports from library patrons are the only reasonable way to learn how patrons benefited from using a collection, service, or program. But self-reports are not the only kind of outcome data.

Sometimes, for a particular library service, there may be more objective and comprehensive data on the outcome of interest. There are plenty of examples of public library projects whose outcomes are being measured by reading tests, whether state-mandated assessments or more specialized instruments.

Kim Fender (Public Library of Cincinnati & Hamilton County, Cincinnati, OH) reports that “our new Summer Camp Reading (not our summer reading program) gives each participant a DIBELS [Dynamic Indicators of Basic Early Literacy Skills] test at the start of the six-week camp, periodically during the camp, and at the end. The test scores of the 72 participants showed varying degrees of improvement and a strong correlation between attendance and improvement.”
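For readers who want to replicate this kind of analysis, the computation itself is simple. Below is a minimal sketch, in Python with the pandas library, of how attendance might be correlated with score gains; the column names and figures are hypothetical illustrations, not data from the Cincinnati camp.

    # Minimal sketch: correlating camp attendance with DIBELS score gains.
    # Column names and values are hypothetical, not actual camp records.
    import pandas as pd

    camp = pd.DataFrame({
        "student_id":    [1, 2, 3, 4, 5, 6],
        "days_attended": [30, 28, 15, 22, 8, 25],   # over the six-week camp
        "dibels_pre":    [41, 38, 45, 35, 40, 37],
        "dibels_post":   [62, 60, 51, 52, 42, 58],
    })

    # Each participant's improvement from first to final test
    camp["gain"] = camp["dibels_post"] - camp["dibels_pre"]

    # Pearson correlation between attendance and improvement
    r = camp["days_attended"].corr(camp["gain"])
    print(f"Correlation between attendance and score gain: r = {r:.2f}")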

Carolyn Anthony, of Skokie Public Library, IL, says, “the outcome effort that has been most successful is working with the schools to get a report on data from reading tests. We learned that the students who participated in summer reading either maintained or improved their reading scores on the fall test while the typical student in the control group experienced the summer slide in reading scores.”

Krissy Wick, of Madison Public Library, WI, describes two projects underway.

Read Up with Madison Public Library is a joint project with the school district, the school and community recreation program (MSCR), and United Way. Using a DIBELS test, they will conduct a pre/post reading fluency assessment of “students’ reading levels before and after summer school.” She explains: “We will create three comparison groups: (a) students in summer school at Orchard Ridge Elementary School and Lapham Elementary School [who] are not participating in MSCR in the afternoon; (b) students in summer school and Read Up with the Madison Public Library; and (c) a demographically matched group of students who are neither in summer school nor Read Up with the Madison Public Library.”

Sustain the Gain will be a project at O’Keefe Middle School involving students whose parents have given consent for the library to access the students’ MAP [Measures of Academic Progress] scores from Spring 2014 and Fall 2014. The purpose of the project is to determine “what impact student participation in the summer reading program has on sustaining their reading gains [over the summer].”
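Analytically, both Madison projects reduce to the same computation: each student’s change from the earlier test (pre test or Spring MAP) to the later one (post test or Fall MAP), summarized by comparison group. A minimal sketch, assuming entirely hypothetical group labels and scores, might look like this:

    # Minimal sketch: mean score change by comparison group.
    # Group labels and scores are hypothetical, not project data.
    import pandas as pd

    students = pd.DataFrame({
        "group": ["summer_school_only", "summer_school_plus_read_up",
                  "matched_comparison", "summer_school_only",
                  "summer_school_plus_read_up", "matched_comparison"],
        "pre_score":  [190, 188, 192, 185, 191, 189],
        "post_score": [191, 197, 186, 187, 198, 184],
    })

    students["change"] = students["post_score"] - students["pre_score"]

    # A positive mean change suggests gains were sustained or extended;
    # a negative one is consistent with the summer slide.
    print(students.groupby("group")["change"].agg(["mean", "count"]))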

What all of these projects illustrate is that, sometimes, public library administrators, in-house researchers, or research consultants can gain access to “hard data” about an outcome, data with a greater claim to objectivity than patrons’ self-reports.

The other benefit of using such available data on library service outcomes is that it can at least partially eliminate the self-selection bias associated with patron surveys. It is reasonable to suspect that patrons who feel they have outcomes to report are more likely to respond to such surveys. When available data of this sort are used, all participants in a program (in most of these examples, summer reading programs) can be included, avoiding the bias introduced by each patron’s personal decision about whether or not to respond.

Admittedly, any library service whose ultimate outcome is a student’s success in school has an easy option to pursue. More and more frequently, we are hearing about studies such as these, in which school officials, particularly those who control access to testing data, are willing to cooperate. When school officials do the data-matching between library and test-score datasets themselves, student privacy can be protected easily while the library still gains access to this powerful kind of outcome data.
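In practice, that privacy-protecting arrangement can be as simple as a script the school’s data office runs on the library’s behalf, merging the two files internally and releasing only group-level summaries. The sketch below uses hypothetical file and column names:

    # Minimal sketch: school-side matching of a library program roster to
    # test scores. File and column names are hypothetical. The school runs
    # this internally and shares only the aggregate output, never records.
    import pandas as pd

    roster = pd.read_csv("summer_reading_roster.csv")  # student_id only
    scores = pd.read_csv("reading_test_scores.csv")    # student_id, spring, fall

    # Flag students who appear on the library's program roster
    scores["participant"] = scores["student_id"].isin(roster["student_id"])
    scores["change"] = scores["fall"] - scores["spring"]

    # Only these group-level summaries leave the school district.
    print(scores.groupby("participant")["change"].agg(["mean", "count"]))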

Before embarking on a self-report survey as a means of measuring patron outcomes, consider whether data about the outcome of interest might already be available. If not, a survey is a good alternative; but if more objective, and possibly more complete, data do exist, consider using them, or arranging for them to be used, in your evaluation research.



About Ray Lyons & Keith Curry Lance

Ray Lyons (raylyons@gmail.com) is an independent consultant and statistical programmer in Cleveland. His articles on library statistics and assessment have also appeared in Public Library Quarterly, Public Libraries, and Evidence Based Library and Information Practice. He blogs on library statistics and assessment at libperformance.com.
Keith Curry Lance (keithlance@comcast.net) is an independent consultant based in suburban Denver. He also consults with the Colorado-based RSL Research Group. In both capacities, he conducts research on libraries of all types for state library agencies, state library associations, and other library-related organizations. For more information, visit http://www.KeithCurryLance.com.
