May 22, 2016

America’s Star Libraries, 2014: Top-Rated Libraries



We are very pleased to announce the results of the seventh edition of the Library Journal Index of Public Library Service, sponsored by Baker & Taylor’s Bibliostat. The LJ Index is a measurement tool that compares U.S. public libraries with their spending peers on four per capita output measures of library use. For this year’s Star Libraries, please click on “The Star Libraries” above; for more on what’s next for the index, see “What’s Next for the LJ Index.”

When the LJ Index and its Star Library ratings were introduced in 2008, our hope was that whether libraries were awarded stars or not, they would examine these statistics more closely—both for their own library and for their peers—and make fuller use of these and other types of data for local planning and evaluation purposes.

In the meantime, however, another type of data has come to the fore—outcomes. The conventional wisdom in the public library community today is that output data alone is insufficient to assess the performance of public libraries. The new big question is: What difference do libraries make in the lives of their users and communities? Yet, for many, the distinction between an output and an outcome has remained elusive and often confusing. Fortunately, over the past year or two, several major projects have begun or reached a level of maturity that provide public library administrators and stakeholders with some carefully crafted and broadly tested tools for making sense of output and outcome data.

Here we will explore what some of this year’s Star Libraries are doing with outcome measures, chiefly through their involvement with up-and-running projects such as the Edge Initiative and the Impact Survey, as well as developing ­efforts—for example, the work of the Public Library Association (PLA) Performance Measures Task Force. Comments were solicited from directors and other representatives of Star Libraries about how their experiences with outcome measurement affect their views about where public libraries need to go with output measurement.


This article was published in Library Journal.

Ray Lyons & Keith Curry Lance

Ray Lyons (raylyons@gmail.com) is an independent consultant and statistical programmer in Cleveland. His articles on library statistics and assessment have also appeared in Public Library Quarterly, Public Libraries, and Evidence Based Library and Information Practice. He blogs on library statistics and assessment at libperformance.com.
Keith Curry Lance (keithlance@comcast.net) is an independent consultant based in suburban Denver. He also consults with the Colorado-based RSL Research Group. In both capacities, he conducts research on libraries of all types for state library agencies, state library associations, and other library-related organizations. For more information, visit http://www.KeithCurryLance.com.


Comments

  1. I am the director of Spirit Lake Public Library which you have incorrectly listed as having a population of over 10,000. My city population is 4840. Any idea why the population is wrong?

    • Ray Lyons says:

      Hello Cynthia,

      The 10,290 population figure you see in the spreadsheet posted with this article is from the IMLS 2012 data. You can confirm this using the IMLS Public Library Survey link:

      https://harvester.census.gov/imls/search/index.asp

Enter identifying information for your library. Once you locate your library’s record, the last item under the Library Details heading is Legal Service Area Population, where the 10,290 appears. (To see other library statistics for that year, click “Show All” at the right.)

      Contact your state library authority with any questions you have about the data shown in the IMLS search tool.

      Best

      Ray L.

  2. Mary Jo Finch says:

    Thank you for continuing to encourage librarians to look at measurement for what we can learn about ourselves by studying comparable libraries. Unfortunately, because you group libraries by TOTAL expenditure and then rank them by PER CAPITA measures, you set up a statistical mismatch that awards libraries with under-reported populations and encourages false comparisons for the rest of us.

    By setting up comparisons as you have, Library Journal is choosing to believe that Library ABC, which gets 5 stars every year, actually spends in excess of $1600 per capita and has a visitation rate in excess of 74. This means we believe that the city where this library resides believes in spending over $2m annually on library services for a very small population and that every single man, woman and child in that community comes to the library 1.42 times per week. Obviously this is not the case. They have a sizable budget because they are actually serving a larger than reported population, but their measurements are only being divided by the legal population rather than the actual population served.

    In each grouping the 5-star libraries have per capita expenditures well in excess of the per capita expenditures for the group as a whole. In the $1m-$5m group it is at its worst, with 5-star libraries having per capita expenditures 568% higher than the group as a whole ($317.90 versus $55.70). Wherever you see a huge per capita expenditure, you know you have an under-reported population, which means the other per capita measurements are false numbers.

    I would encourage library directors who wish to use the statistics for comparison to sort the libraries in their grouping by expenditures per capita (you will have to add a column for this), and then to look at the libraries who spend similarly to you. Besides providing reasonable benchmarks for performance, it will help you to explain to your boards why you continue to be star-less.
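The sorting step suggested above can be sketched in a few lines of Python. The field names and figures here are invented for illustration; the actual IMLS data files use different field codes.

```python
# Hypothetical sample records; the real IMLS file uses different field names.
libraries = [
    {"name": "Lib A", "population": 5_000, "total_expenditures": 250_000},
    {"name": "Lib B", "population": 20_000, "total_expenditures": 800_000},
    {"name": "Lib C", "population": 1_000, "total_expenditures": 1_600_000},
]

# Add the expenditure-per-capita column the comment suggests.
for lib in libraries:
    lib["exp_per_capita"] = lib["total_expenditures"] / lib["population"]

# Sort so each library can find its spending peers nearby in the list.
libraries.sort(key=lambda lib: lib["exp_per_capita"])
```

A library would then benchmark itself against the entries adjacent to it in the sorted list, rather than against the whole expenditure group.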

    • James Pierce says:

I am very familiar with the libraries listed in Ohio, having used many of them for five decades. I believe the rankings are correct; however, they reflect a combination of characteristics unique to places such as Ohio. First, Ohio has historically supported its libraries at a much higher funding level than other states, owing to specific tax policies. Second, Ohio’s population has traditionally stressed literacy, and thus high book and library use. Third, Ohio’s weather and geography favor library use. Fourth, Ohio has a large number of colleges and universities, which coincide with concentrations of library users. Notably, the small suburban libraries in Ohio, and in some other states noted for high scores, serve the bedroom communities of colleges and universities.
      I suggest that areas with low scores have some combination of historically low tax support for government services such as libraries; governmental structures that do not make libraries independent financial institutions directly responsible to taxpayers, so that library budgets compete with other governmental services; and low use because their populations focus on recreational and other activities that do not involve libraries.

  3. Ray Lyons & Keith Curry Lance says:

    Thank you for your thoughtful comments, Mary Jo. It is really heartening to see librarians taking quantitative data seriously and putting the time in, as you have, to think through the issues. And please accept our apologies for this delayed response. We didn’t see your comment right away. And our schedules were a bit of an impediment to preparing this response.

We can’t really disagree with the main points you make, but we do want to offer our perspective. We’ve written before about the strengths and weaknesses of national rating systems of any sort. These are report-card measurement systems that, by definition, are simplistic, broad-brush reflections of institutional data. Recognizing the limited nature of national ratings, this year our article focused entirely on a more robust and fruitful form of performance measurement: library outcome evaluation.

    Yes, in the LJ Index ratings libraries with lower populations will benefit if they also serve patrons who are not local residents (see the LJ Index FAQ item #12). At the same time, we do not alter or create an alternate set of national public library data (FAQ item #20). Attempting to correct or omit libraries with high per capita values is a slippery slope. How do you decide which values are so extreme that they should be considered invalid? Which values close to apparently extreme values should also be omitted? And which values close to them? Eliminating one outlier creates another.

    If you decide to screen the data for these particular problems, what about other problems where reporting practices cause other comparisons to be potentially unfair? These become subjective decisions, while our design aim is to keep the ratings as impartial as possible.

    In report-card systems measurement decisions are usually trade-offs. Your suggestion to use total expenditures per capita in local peer group comparisons is another alternative. However, it can introduce problems similar to those that bother you about the LJ Index. Libraries sharing a given expenditure per capita level can easily vary in population from 10,000 to more than 100,000—a 1000%+ difference (higher than the 500% discrepancy you note).

The ideal method for forming library comparison groups is combining expenditures and population, as recommended in a statistical brief published by the National Center for Education Statistics (NCES) in the late ’90s (see http://nces.ed.gov/pubs98/98310.pdf). Yet even this system would not avoid the problem you are concerned with. In a national rating system, the number of “winners” is arbitrary (i.e., set by ratings rules). Libraries with low populations and high expenditures would still be guaranteed winning spots, since they would excel even within their own category. Given a set number of total winners, these same libraries would still claim spots that would otherwise go to libraries with different expenditure ratios. When we created the LJ Index in 2008, we considered the NCES model but decided it was too complicated: it produces 45-50 peer groups, which would be difficult to report on.
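The NCES-style grouping amounts to crossing population bins with expenditure bins and assigning each library to a cell. A minimal sketch, with hypothetical bin edges (the brief’s actual cut points differ, and its finer bins are what yield 45-50 peer groups):

```python
import bisect

# Hypothetical bin edges for illustration only; the NCES brief uses
# different (and finer) cut points.
POP_BINS = [0, 5_000, 10_000, 25_000, 100_000, 500_000]
EXP_BINS = [0, 50_000, 200_000, 1_000_000, 5_000_000]

def peer_group(population, expenditures):
    """Assign a library to a (population bin, expenditure bin) pair."""
    p = bisect.bisect_right(POP_BINS, population) - 1
    e = bisect.bisect_right(EXP_BINS, expenditures) - 1
    return (p, e)
```

Each library would then be rated only against others in the same (population, expenditure) cell, which is why the number of reporting groups multiplies so quickly.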

We stand by per capita library output measures as legitimate for purposes of comparing libraries. Per capita measures have been traditional library statistics since the 19th century. (This is not to say they don’t have limitations that need to be taken into account.)

    More importantly, per capita measures used in the LJ Index are not “false data.” They are replicated exactly from the IMLS data files. Comparisons made may possibly be unsatisfactory—perhaps this is what you’re referring to as “false.” In any case, the data we use are trustworthy to the extent that they’ve been reported accurately to IMLS.

    Finally, the fact that a limited group of libraries repeatedly earn LJ Index Stars is not a sign of a defective measuring system. It’s the opposite, in fact. Ratings of all kinds (cities, hospitals, universities, state governments, etc.) have contestants that earn high scores year after year. This is a basic tenet of good measurement design: Repeated measurements should not vary erratically over time.

    Since the inception of the LJ Index we have encouraged libraries to pursue local comparisons for a richer evaluation of their performance. So we applaud your recommendations (other than characterizing IMLS data as “false”). We hope libraries will explore the data more thoroughly by conducting their own customized comparisons.

Ray Lyons and Keith Curry Lance

    • Mary Jo Finch says:

      Thank you for your reply. My love of math and my passion for libraries won’t allow me to give up on this just yet, so I hope you will indulge a further exchange on this subject. Star Libraries should be representative of their group. With the way libraries are currently grouped, if a library is in the top 7% of expenditures per capita, it has a 1 in 2 chance of being a Star Library. 59 of the 258 Star Libraries have expenditure per capita more than 3 standard deviations above the mean. 135 of the Star Libraries (52%) have a population less than 5000 (only 33% of libraries are this small). This model is rewarding outliers, specifically libraries with under-reported populations, and it is doing so almost a quarter of the time.

      The fact that a limited number of libraries repeatedly earn Stars may validate the consistency of the model, but it does not validate the model itself. And the fact that outliers are so often awarded Stars results in winners that the rest of the group cannot usefully compare themselves to. I would argue that while the output numbers may be true and the legal population figure may be true, when we divide one by the other to create per capita measures, we are setting up a ratio that assumes that the two numbers are related. In the case of under-reported populations, they are not related, and hence the ratio might be considered false (you may disagree).

      I would like to propose a new model. If libraries are grouped by size and scored on services per $1000 spent, the result rewards libraries that are getting the most for the money they spend. This model still considers both population and expenditures, but it mitigates the issue of under-reported populations by not using population as a divisor, and thus there is no need to eliminate outliers.

      I have done this provisionally, using ALA definitions: rural <5K (2511 libraries), very small <10K (1404), small <25K (1639), medium <100K (1485), large <500K (468), and very large 500K+ (79). When the services per $1000 spent are scored using the same z-score algorithm that you have been using for scoring, we get a crop of winners that includes some past winners as well as other libraries who are getting amazing results with the funds they have available. Congratulations to the Swink Library, the Combined Community Library of Ordway, the Dimmitt County Library, the AH Meadows Library, the Hill City Community Library, and Wake County Libraries for good stewardship of public funds! I would be happy to share my spreadsheet if you are interested.
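The z-score scoring described above can be sketched briefly in Python. The library names and figures below are made up for illustration; the real calculation would run over every library in a size group and over all four output measures.

```python
from statistics import mean, pstdev

# Illustrative figures only: visits per $1000 spent for four libraries
# in one size group (not real IMLS values).
visits_per_1000 = {"Lib A": 42.0, "Lib B": 38.0, "Lib C": 12.0, "Lib D": 20.0}

# z-score each library against its own group: how many standard
# deviations it sits above or below the group mean.
mu = mean(visits_per_1000.values())
sigma = pstdev(visits_per_1000.values())
z_scores = {name: (v - mu) / sigma for name, v in visits_per_1000.items()}
```

Summing each library’s z-scores across the output measures, and ranking within the size group, would then pick out the proposed crop of winners.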

      Of course, no model is perfect. Ideally every library would be compared with a unique set of libraries that are within a set percentage of it in both size and expenditure and are of similar structure, but I am not sure Excel and I are clever enough to figure that out!


  5. Some people have better uses for libraries than actually reading a book.