May 23, 2017

LJ Index 2014: The Star Libraries

Outputs, outcomes, & other data

Despite the promotion of outcome-based evaluation by the Institute of Museum and Library Services (IMLS) almost from its inception in 1996, the difference between an output and an outcome is still unclear to many in the public library community. Indeed, the term input only adds to the confusion.

Following the model presented in IMLS’s 2006 online course Shaping Outcomes: Making a Difference in Libraries and Museums, inputs are the resources libraries use. These include tangible assets such as staff, physical collections, and facilities as well as less tangible assets like library websites, digital collections, and wireless networks. Outputs are the quantities of services produced by the library. The LJ Index focuses on four interrelated outputs: library visits, circulation, public access computer use, and program attendance.

Outcomes are an entirely different matter. They are changes experienced by library users—changes in knowledge, skills, attitude, behavior, status, or condition. Some might therefore find it helpful to substitute “library resources,” “library services,” and “user changes” for the sound-alike terms input, output, and outcome.

Understanding these data types is further complicated by at least two other types of data: customer satisfaction and return on investment (ROI).

For several years, IMLS has provided a template for reports on state subgrants funded by the Library Services and Technology Act (LSTA). The template calls on project managers to report both the outputs and the outcomes of their projects. As a reviewer of such reports, Keith Curry Lance (one of the authors of this article) has seen an alarming number of cases in which inputs were reported as outputs (e.g., the number of specific types of materials added to collections) and outputs were reported as outcomes (e.g., attendance at programs on a particular topic).

Almost from the beginning of the discussion of outcome measurement, most of the library community seems to have treated customer satisfaction as an outcome. However, since an outcome is defined as a change experienced by the user as a result of library services, this is, strictly speaking, questionable. Customer satisfaction is usually measured as a degree of satisfaction with various aspects of service delivery, not as how much one’s satisfaction has changed. In any event, the idea behind outcomes is to assess not an individual’s, a community’s, or a community segment’s satisfaction with the library but the library’s impact on the rest of their quality of life.

Another type of data that might muddle the concept of an outcome is ROI. Most public library ROI studies—of which there have been many—have sought to fill in the blank in the following statement: for every dollar invested in the public library, the average member of the community receives ——— dollars of value in return. Economic benefits associated with library operations are sometimes measured by applying a dollar value to outputs such as books borrowed, electronic articles accessed, or reference questions answered. Because these are still measures of library outputs relative to inputs, ROI approaches do not, in the strictest sense, measure outcomes, that is, user change.
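
To make that arithmetic concrete, here is a minimal sketch (in Python) of the kind of calculation such studies perform; every count, unit value, and budget figure below is invented for illustration and comes from no actual study.

```python
# Hypothetical ROI sketch: price each output at an assumed dollar value,
# sum the estimated benefits, and divide by the annual budget.
# All figures are invented for illustration only.

outputs = {
    "books borrowed":               (250_000, 15.00),  # (annual count, assumed value per use, $)
    "electronic articles accessed": (40_000,   8.00),
    "reference questions answered": (30_000,  12.00),
}

annual_budget = 2_500_000  # hypothetical operating budget, $

total_benefit = sum(count * value for count, value in outputs.values())
roi_per_dollar = total_benefit / annual_budget

print(f"Estimated annual benefit: ${total_benefit:,.0f}")
print(f"For every dollar invested, the community receives ${roi_per_dollar:.2f} in return.")
```

Note that the result is built entirely from output counts and assumed unit values, which is precisely why such figures, however persuasive, still do not measure user change.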

Relating outputs and outcomes

It is valuable for library administrators to have data on both outputs and outcomes, because they are different types of data. Output measures indicate how much service of different types libraries are providing. They can also indicate to whom libraries are providing services. Outcome measures indicate how the services libraries provide are changing the lives of library patrons. To manage a library well, one needs both types of data, as well as an understanding of how outputs are associated with outcomes. This is fundamental to public accountability. Stakeholders—public officials and taxpayers—are entitled to know what and how much they got for their money from the public library as well as what difference it made in patrons’ lives.

The many differences between output and outcome data, however, complicate any attempt to use them together.

Another critical difference between outputs and outcomes is their data sources: data on outputs (services provided) come from libraries, while data on outcomes (personal changes experienced) come from users. Barring extraordinary circumstances, libraries cannot know the changes users experience as a result of their services without consulting them either directly or indirectly. Also problematic is that library service outputs—at least as they are usually reported by libraries to their state library agencies and IMLS—tend to be generic: the total number of visits, circulation, program attendance, etc. Without knowing the purpose of each visit, the subject of the materials borrowed, or the specific questions asked, it is hard to match an output to its possible outcome.

Another key distinction: reporting one is required, while measuring the other is voluntary. All state library agencies survey public libraries annually, and all of those surveys include questions about a variety of outputs. As a result, output data are available for virtually all U.S. public libraries every year. The situation with outcomes could not be more different. Outcome measurement is, by its nature, entirely voluntary. There is no comparable working consensus about which outcome measures are important, so different libraries may choose to focus on different ones. Outcome measurement itself can also be a labor-intensive and costly enterprise, unless one uses some of the turnkey solutions now available to support it.

Qualitative vs. quantitative

As outcome assessment is conducted in most public libraries, outcomes are qualitative, while outputs are quantitative. Outcomes can only be quantified by reporting them in terms of the percentage of survey respondents who selected them from a list of possible outcomes. This percentage, and how it ranks among other outcomes, is strongly influenced by how the outcome is defined and by the relative frequency of the change or activity experienced by the respondent. Sometimes the outcome question is posed in such a way that it may be answered on the basis of one’s entire experience with libraries. Other times, it is posed in terms of what one has experienced in the past year. In the case of a specific event, outcome questions may be asked immediately afterward. Rarely are respondents to such surveys asked to report anything quantitative about an outcome, which makes some outcome survey results misleading, since one has no idea how often the claimed outcome occurred.

For instance, in its sample education outcome results for the mythical Emerald City Public Library, the Impact Survey project reports that 46% of users learned about a degree or certificate program, 33% took an online class or workshop, and 27% applied for financial aid. These percentages make sense in the context of the “past 12 months” question: over the course of a year, respondents are more likely to investigate an academic program than to take a class and more likely to take a class than to apply for aid. The same percentages, however, are almost certainly misleading about the relative frequency of such outcomes: someone who applies for financial aid usually must do so at regular intervals, and someone who decides to pursue an academic program is likely to take multiple classes. Thus, the usual reporting of outcomes as percentages of users may tell us less about the frequency of an outcome than about the breadth of the outcome statement offered.
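
A small sketch makes the distinction concrete. The percentages below are the Emerald City sample figures quoted above; the repetition rates (how many times per year a respondent might repeat each activity) are assumptions invented purely for illustration.

```python
# Contrast the share of respondents reporting an outcome with an estimate of
# how many times that outcome actually occurs over a year.
# Shares are the Emerald City sample figures; repetition rates are invented.

respondents = 1_000  # hypothetical survey sample size

outcomes = {
    "learned about a degree or certificate program": (0.46, 1.0),  # (share reporting, assumed times/year)
    "took an online class or workshop":              (0.33, 4.0),
    "applied for financial aid":                     (0.27, 1.0),
}

for name, (share, times_per_year) in outcomes.items():
    people = share * respondents
    occurrences = people * times_per_year
    print(f"{name}: {share:.0%} of respondents, ~{occurrences:,.0f} occurrences per year")
```

Under these assumptions, the outcome reported by the second-largest share of respondents accounts for by far the most activity, which is the sense in which percentage-of-respondents reporting can mislead about frequency.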

It’s all in the timing

Another distinction between outputs and outcomes concerns the time frame. Service outputs are counted either continuously (e.g., circulation) or as recurring events (e.g., reference questions). Services are dispensed on an ongoing, usually day-by-day, hour-by-hour, minute-by-minute basis. The time frame of outcomes is far less knowable. Whether a survey captures that someone obtained a job after finding an open position in a database or attending a job-seeking program depends entirely on how long after the library service the outcome is experienced. At the time they are surveyed, many patrons may not yet have had sufficient time to experience an outcome to report. Hence, it is safe to assume that practically all outcome surveys underestimate the benefits people receive from using their libraries.

The most dramatic differences between output and outcome data have to do with the methodologies by which they are created. Output data are usually collected on an ongoing basis that is routinized and almost always entirely unobtrusive for patrons. Outcome data, by contrast, are usually collected episodically—rarely more often than annually, if that—requiring far more challenging methodologies and sampling procedures and almost always requiring intervention in the user’s library experience in some way. The only major exception to this is event-related outcomes, which some libraries collect routinely for specific types of programming (e.g., early childhood literacy).

A major change in outcome measurement in public library settings, and one that could have tremendous implications, would be the development of more unobtrusive methodologies for collecting such data. Presumably, this has not happened because the most obvious alternative to surveying users is observing them as unobtrusively as possible, which would either place excessive demands on staff time or require installing and maintaining monitoring systems. In either case, it would violate user privacy in a heavy-handed, Big Brother way and be cost-prohibitive. Surveys can be an annoyance, but they offer an efficiency and cost-effectiveness that few other approaches can match.

Ultimately, the most important difference between output and outcome data is that they answer different questions: outputs tell us how much service of different types libraries are providing, while outcomes tell us what difference that service made in patrons’ lives. Sometimes library advocates work with only one type of data; sometimes they use both. Usually, though, in an advocacy context there is no imperative to connect the two. Public library administrators and decision-making stakeholders do not have that luxury. They must be able to make connections among inputs, outputs, and outcomes.

Projects to watch

A variety of projects and at least one upcoming event offer substantial support to public library administrators grappling with output and outcome measurement issues.

Two very prominent and related projects supported by the Bill & Melinda Gates Foundation are the Edge Initiative and the Impact Survey. Of the 37 Star Libraries whose directors or other representatives commented to us, 16 reported using an existing assessment framework such as the Edge Initiative, and six indicated using a turnkey approach to outcome measurement like the Impact Survey.

The Edge Initiative—led by the Urban Libraries Council (ULC)—provides a self-assessment and benchmarking tool and other resources to assist public libraries in evaluating and improving their digital services. As of 2014, it is open to all public libraries nationwide at no charge. Because the Edge Initiative is designed as an assessment tool, what it offers to those interested in output and outcome measurement is less the tools needed for such measurement than a mandate that it occur. For the measurement tools themselves, the Edge Initiative partners with an allied, Gates-funded project, the Impact Survey.

The Impact Survey is a project of the University of Washington Information School, cosponsored by the Gates Foundation and IMLS. It is a successor project to the U.S. IMPACT Study, which culminated with the June 2011 publication Opportunity for All: How Library Policies and Practices Impact Public Internet Access. The Impact Survey was developed as part of that large-scale national study and now continues to be available for use by local public libraries. Its aim is to inform public library administrators and stakeholders who want to understand how and for what purposes library patrons use digital resources and services. A substantial part of it addresses outcomes by asking responding patrons to identify outcome-related tasks in which they had engaged during the prior 12 months. Participation was free to all public libraries through October 2014 and continues to be available for a nominal fee scaled to a library’s funding.

In July 2013, PLA established a Performance Measures Task Force to develop standardized measures of “effectiveness” for “widely offered public library programs” as well as to promote training for implementation and use of such measures. What distinguishes this effort most notably from the Edge Initiative and the Impact Survey is its scope—reaching beyond digital services.

As part of the 2014 Public Library Data Service (PLDS) survey, the task force surveyed public libraries about outcomes and other types of data (e.g., inputs, outputs) in 12 service areas. In the task force’s report to the PLDS Advisory Committee at the American Library Association (ALA) annual conference in Las Vegas in June, five and possibly six service areas were prioritized. Based on early responses to the PLDS survey, early childhood literacy, digital access and learning, and civic engagement were identified as most important to respondents. The task force chose to add encouraging reading and economic and workforce development.

When we interviewed available representatives of 37 of this year’s Star Libraries from the better-funded peer groups and asked in which areas they are currently collecting outcome data, 21 mentioned encouraging reading; 20, early childhood literacy; 16, digital access and learning; nine, economic and workforce development; and eight, civic engagement. Unfortunately, however, most were unwilling or unable to share reports of these efforts, either because the efforts are still under way or only recently completed or because the results are, for various reasons, unavailable. The few reports that were shared with us focused instead on goals such as needs assessment and/or customer satisfaction. This self-selected Star Library group’s ranking of the service areas was somewhat similar to the PLDS survey’s results for libraries serving populations of 100,000 or more—in rank order: encouraging reading, digital access and learning, early childhood literacy, civic engagement, and economic and workforce development.

Like the Edge Initiative and the Impact Survey, PLA’s forthcoming performance measures will be implemented by a self-selected subset of the nation’s public libraries. The resulting outcome data should be very useful to the individual libraries that participate, even if the collective results cannot claim to represent all libraries nationwide. By definition, any realistic approach to outcome measurement needs to be service-specific to be useful. This precludes the universal collection of outcome data by all public libraries.

Though contributing to the evolution of outcome measures on which data are collected by IMLS is not an avowed purpose of this project, we are hopeful that more generic versions of some of its new measures may prove viable nationwide. We also hope that it will make clearer and more direct connections among inputs, outputs, and outcomes. That seems to be the direction in which the project is headed, and it would be a major contribution to public library assessment.

Research Institute for Public Libraries

Of the 37 Star Libraries whose directors or other representatives we heard from, 22 reported conducting some kind of locally designed survey. While ready-made tools like the Edge Initiative and the Impact Survey seem to be gaining momentum, this sampling indicates that, regardless of the merits of such national efforts, administrators of a substantial proportion of public libraries are opting to go it alone. Perhaps these leaders regard those tools as too complex, too generic, or too expensive (in fees charged or staff hours required); or, despite the tools’ high profiles, they may simply be unaware of them. To the extent that this is true, something needs to happen either to equip them better for “going it alone” or to help them understand how to get the most out of participating in one of the national projects.

This is where the upcoming July 2015 Research Institute for Public Libraries (RIPL) comes in. Next summer public library leaders will gather in Colorado Springs for a three-day, hands-on workshop to build practical skills in assessment, evaluation, and benchmarking. The institute is designed to help librarians learn how to develop and implement a comprehensive evaluation plan that includes outcomes, as well as input and output measures, to inform decision-making and strategic planning and to demonstrate the impact of their libraries.



About Ray Lyons & Keith Curry Lance

Ray Lyons (raylyons@gmail.com) is an independent consultant and statistical programmer in Cleveland. His articles on library statistics and assessment have also appeared in Public Library Quarterly, Public Libraries, and Evidence Based Library and Information Practice. He blogs on library statistics and assessment at libperformance.com.
Keith Curry Lance (keithlance@comcast.net) is an independent consultant based in suburban Denver. He also consults with the Colorado-based RSL Research Group. In both capacities, he conducts research on libraries of all types for state library agencies, state library associations, and other library-related organizations. For more information, visit http://www.KeithCurryLance.com.
