December 18, 2014

The LJ Index: Frequently Asked Questions (FAQ)

1. What is the LJ Index of Public Library Service?
2. How was the LJ Index designed?
3. What data are LJ Index scores based on?
4. What statistical measures does the LJ Index use?
5. How does a library qualify to be included in LJ Index ratings?
6. How are the LJ Index scores calculated?
7. What does the LJ Index measure?
8. How are LJ Index “Star Libraries” identified?
9. Does the LJ Index measure the quality, excellence, effectiveness, value, or appropriateness of library services?
10. How should we interpret and publicize the LJ Index score our library received?
11. Where are the details about each library’s LJ Index scoring available?
12. Why does the LJ Index use per capita measures?
13. Why doesn’t the LJ Index include reference transactions as a service output?
14. Why aren’t registered borrowers data used in the LJ Index ratings?
15. Why doesn’t the LJ Index include library input statistics like library expenditures, staffing, volumes held, etc?
16. What about an index of library efficiency?
17. What are the disadvantages of a rating system that combines inputs and outputs together?
18. Other rating systems give different emphasis (“weights”) to one or more statistical items they use. Why doesn’t the LJ Index do this?
19. Why do some states have no star libraries?
20. Why does the LJ Index not revise or eliminate very high statistical data (“outliers”) prior to calculating its rating scores?
21. Why does the LJ Index convert each library’s statistics into “standard scores” rather than percentiles when calculating LJ Index scores?
22. Why does the LJ Index, in certain cases, allow one per capita service output to outweigh the other three when determining LJ Index scores and Star ratings?
23. How is the design of the Library Journal (LJ) Index of Public Library Service holding up over time?
24. Are there graphics or images my library can use to display its inclusion on the list of LJ Index star libraries?

1. What is the LJ Index of Public Library Service?
The LJ Index is a national rating system designed to recognize and promote America’s public libraries, to help improve the pool of nationally collected library statistics, and to encourage library self-evaluation.

2. How was the LJ Index designed?
Simplicity, transparency, and comprehensibility were the main objectives in designing the rating system. We used statistical correlation analysis to identify a concise and straightforward set of indicators of library service provision. We also wanted to enable individual libraries to examine first-hand the data upon which their ratings are based.

One of the foundational ideas of this system is the acknowledgment of the strengths and weaknesses of library ratings systems of any kind. Ratings are best understood within the larger context of library assessment and evaluation. Comprehensive local evaluation of library operations by libraries and their constituents is the most productive method for assessing library performance.

3. What data are LJ Index scores based on?
LJ Index scores and star ratings (see FAQ item #8) are based on data reported annually by public libraries to their state library agencies and compiled nationally by the Institute of Museum and Library Services (IMLS). Each edition of the LJ Index specifies the statistical year the ratings pertain to. The November 2012 ratings are based on IMLS public libraries data for 2010; the November 2011 ratings are based on IMLS public libraries data for 2009; the October 2010 ratings are based on IMLS public libraries data for 2008; the November 2009 LJ Index ratings are based on IMLS data for 2007; and the February 2009 ratings are based on IMLS data for 2006.

Any issues with data accuracy or completeness should be directed to local libraries and/or their state library agencies.

4. What statistical measures does the LJ Index use?
LJ Index scores are based on four per capita service output statistics:

  • library visits
  • circulation
  • program attendance
  • public Internet computer use

These four measures were found to be closely related statistically. Other service output statistics available nationally (patron registration counts, reference transactions, and interlibrary lending) are not sufficiently related to these core four measures to justify their inclusion in the same index.

5. How does a library qualify to be included in LJ Index ratings?
To receive an LJ Index rating a library must satisfy these criteria:

  • Meet the IMLS definition of a public library
  • Have a service area with a population of at least 1,000
  • Have total operating expenditures of at least $10,000
  • Report the four service output statistics listed in FAQ item #4

6. How are the LJ Index scores calculated?
A complete specification of the calculation algorithm used appears here.

7. What does the LJ Index measure?
The index measures how quantities of selected services provided by a library compare with those of libraries within its peer group. For each library, each of the four output statistics is measured against the average for the library’s peer group.
A very high value on one or more statistics can compensate for lower values on other statistics. This “sensitivity” of the index is intended to encourage both the identification of excellence in specific services and thoughtful review of the validity and reliability of local data reports. (See also FAQ item #18 regarding weighting.)
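The FAQ describes converting each per capita output into a standard score relative to the peer-group average (see item #21). As a rough, hypothetical sketch of that idea only (the numbers below are invented; the actual LJ Index algorithm is the one specified in the document referenced in item #6):

```python
import statistics

# Hypothetical per capita "library visits" for seven libraries in one
# expenditure peer group (invented numbers).
peer_visits = [4.2, 5.0, 3.8, 6.1, 4.9, 12.3, 5.5]

mean = statistics.mean(peer_visits)
stdev = statistics.stdev(peer_visits)

# A standard score (z-score) expresses each library's value as the
# number of standard deviations above or below the peer-group mean.
z_scores = [(v - mean) / stdev for v in peer_visits]

# An overall score would combine the standard scores of all four per
# capita outputs, which is how a very high value on one statistic
# (like the 12.3 above) can offset lower values on the others.
```

This illustrates only the compensating behavior: the library with the highest per capita visits receives a large positive standard score, which lifts its combined score even if its other measures are nearer the peer-group average.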

8. How are LJ Index “Star Libraries” identified?
Within each expenditure peer group we identify the top 30 scores. We give the top ten scores a 5-star rating, the next ten scores a 4-star rating, and the remaining ten scores a 3-star rating. However, for the $30 million and above expenditure group, we only identify the top 15 scores, and proceed to divide these into three groups of five.

The minimum number of star-rated libraries in each edition will be 255. However, when libraries tie for scores falling within the top scoring ranges, all tying libraries receive stars. For this reason, the total number of star-rated libraries can exceed 255 and may differ from edition to edition.
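As an illustration only, one possible reading of the banding and tie rules above can be sketched as follows (the function name and data are hypothetical, not the LJ Index’s actual procedure):

```python
def assign_stars(scores, band_size=10):
    # scores: mapping of library name -> LJ Index score within one
    # expenditure peer group. The top band_size scores earn 5 stars,
    # the next band_size earn 4, and the next band_size earn 3. A
    # library tying with a score inside a band receives that band's
    # stars, so more than 3 * band_size libraries can end up starred.
    ranked = sorted(scores.values(), reverse=True)
    cut5 = ranked[band_size - 1]        # lowest score in the 5-star band
    cut4 = ranked[2 * band_size - 1]    # lowest score in the 4-star band
    cut3 = ranked[3 * band_size - 1]    # lowest score in the 3-star band
    stars = {}
    for name, score in scores.items():
        if score >= cut5:
            stars[name] = 5
        elif score >= cut4:
            stars[name] = 4
        elif score >= cut3:
            stars[name] = 3
    return stars

# With band_size=1 and a tie at the top, both tying libraries receive
# 5 stars, so more libraries are starred than the nominal band count.
example = assign_stars({"A": 900, "B": 900, "C": 700, "D": 600}, band_size=1)
```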

9. Does the LJ Index measure the quality, excellence, effectiveness, value, or appropriateness of library services?
No, the index measures none of these. By definition, service outputs do not reflect quality, excellence, effectiveness, or value of services to the library’s community. National-level data required to measure these aspects of library performance, even in a limited fashion, do not exist. Similarly, the index does not indicate whether library service output levels are appropriate for the library’s community, nor the extent to which services sufficiently address community needs. We encourage libraries to analyze their own operational and community demographic data locally in order to address these two vital assessment issues.

10. How should we interpret and publicize the LJ Index score our library received?
Inconsistencies in data collection and related limitations reduce the precision of any library rating system, including the LJ Index. While the index scores are carefully calculated, they should still be considered to be approximate.

Changes in a library’s LJ Index score from one edition to the next do not necessarily mean the library’s actual performance has changed. Score changes can be due to changes in a library’s peer group, to libraries recently added to the library’s current peer group, or to alterations in data collection and reporting practices.

Libraries should publicize their LJ Index ratings carefully and responsibly by including clarifying information like this:
LJ Index scores measure the levels of library service delivery relative to peer libraries nationally. The scores do not indicate the quality, effectiveness, or value of library services, nor whether the quantities of services provided sufficiently address community needs. Our library examines these important issues locally by [describe your library's self-evaluation efforts here].

Libraries wishing to issue statements comparing current ratings with prior editions should use language like this:
Our library’s LJ Index score [or star-rating] increased [or decreased or stayed about the same] compared with the prior edition. Given the approximate nature of national library ratings systems, this does not necessarily mean that our actual performance has changed. Multiple factors, including changes in data collection methods and the selection of libraries rated, also cause score differences between editions.

Please avoid statements like “Our library improved by x points over our prior index score.” As explained above, scores should not be considered highly precise and definitive, and score changes can be due to a variety of factors besides actual changes in library performance.

11. Where are the details about each library’s LJ Index scoring available?
The LJ Index ratings and the data on which they are based are readily available for download in the Find a Library section. In addition, national and state rankings and national percentiles may be accessed and analyzed graphically and interactively by those with access to Bibliostat Connect, which offers the added benefit of linking LJ Index data to other data from IMLS, the Public Library Data Service, state library agencies, and the U.S. Census for customized analyses.

12. Why does the LJ Index use per capita measures?
Per capita measures reflect the relative prevalence of library services and library utilization compared with the population being served. These measures have been traditionally used in librarianship to compare libraries serving constituent communities of different sizes.

However, per capita measures do introduce certain irregularities which need to be kept in mind when interpreting comparative library statistics and rating systems like the LJ Index. Libraries delivering substantial amounts of services to non-resident users (for instance, libraries serving vacation communities) can show very high per capita service levels. These high levels exaggerate the relative levels of services these libraries deliver. Unfortunately, we do not have statistical tools for reliably identifying and adjusting for these cases in the IMLS data.

13. Why doesn’t the LJ Index include reference transactions as a service output?
Reference transactions do not correlate sufficiently with the four LJ Index output measures to justify inclusion. The decision to exclude reference transactions does not reflect on the value of reference services at all. It is due to questions about the quality of reference statistics, which appear to have both validity and reliability problems. It is not clear whether reference data count only the intellectual product of trained librarians or also a wider range of services, such as directional questions. Also, the data are not necessarily counted in the same way by all libraries.

While all public library statistics can be questioned on those grounds, concerns are greater for reference because it is very weakly related, consistently so over time, to the four LJ Index indicators.

14. Why aren’t registered borrowers data used in the LJ Index ratings?
Similar to reference transactions (item #13), registered borrower counts are not statistically correlated with the four LJ Index indicators. This is likely because registration data do not reflect annual activity. They represent a “running balance” of borrowers from year to year, rather than the annual transactions that the other service statistics capture. Yearly changes to registration rolls might be a better statistic to investigate for future use.

15. Why doesn’t the LJ Index include library input statistics like library expenditures, staffing, volumes held, etc.?
The LJ Index is a simple index of public library service output. Unlike other rating systems, it reflects what libraries deliver, not the resources used to provide services. The LJ Index uses expenditures only to group libraries into peer groups, not to grade libraries on the amount of resources they receive and utilize.

16. What about an index of library efficiency?
Efficiency, the ratio of library services delivered (outputs) to resources used (inputs), is a legitimate criterion on which to rate libraries. But it is a different criterion than service output, which is the focus of the LJ Index. It is probably impossible to design a statistically valid index that simultaneously measures both service output and efficiency.

17. What are the disadvantages of a rating system that combines inputs and outputs together?
Measurement of individual and organizational performance is typically based on results, not on preparation techniques, effort, etc. Consider what would happen if designers of college entrance tests, like the SAT or ACT, decided to rate students on both inputs and outputs. The exam scores would then be based on inputs (students’ demographics, family and socioeconomic characteristics, study habits, school staffing and funding, etc.) as well as on outputs (how the student answered the specific exam questions). This would be a preposterous practice, which is precisely why it does not happen. Inputs are invaluable for designing a probing analysis of why some output scores are higher or lower, but data about them should not play any role in determining an output score.

18. Other rating systems give different emphasis (“weights”) to one or more statistical items they use. Why doesn’t the LJ Index do this?
There are more disadvantages than advantages to weighting certain statistical items over others. The library profession currently has none of the consensus, standards, or empirical research on which to base such weights.

Given the changing role of public libraries and their unique roles in their communities, the LJ Index favors no single statistical indicator over others. As a result, it does not endorse particular programmatic objectives that the indicators may represent (e.g., “library as place” versus remote library use versus community outreach and engagement). Further, the social science and statistical literatures contain a growing body of skepticism toward weighting index variables.

19. Why do some states have no star libraries?
This reflects how the scores of libraries in those states compared with those of all libraries in their respective expenditure categories; it has nothing to do with which state they are in.

20. Why does the LJ Index not revise or eliminate very high statistical data (“outliers”) prior to calculating its rating scores?
The LJ Index is based solely on the official data released by IMLS. States and communities quarrel with U.S. Census figures every time they are released, and researchers question their accuracy; but none of them has the authority to change or ignore those data. The IMLS public library data work the same way. Responsibility for the accuracy of public library data is shared by local, state, and federal agencies.

In any case, unusually high data values (“outliers”) may well be valid. While recognizing that some data might be invalid, we purposely avoid any actions that could incorrectly eliminate or disqualify valid library data. Therefore, for purposes of these ratings we presume all IMLS data to be accurate, relying fully on IMLS edit checks to adjudicate the accuracy of the data. However obvious some oversight might become, and however much one may disagree with official determinations, we are not in a position to act unilaterally to correct apparent mistakes.

Moreover, it is an impractical task to second-guess various data items. Impartial review of the data would require the majority of the data to be re-examined, not merely the most obvious values, since any items could contain errors. The result would produce a variant set of national data and work at cross purposes to the IMLS system.

21. Why does the LJ Index convert each library’s statistics into “standard scores” rather than percentiles when calculating LJ Index scores?
Reducing library statistics to percentiles degrades the data and deprives libraries of full and accurate credit for their own data. Using percentiles also hides “outlier” values: it restates library data in a way that “homogenizes” them, and the original statistical values are lost in the process. While a side effect of this approach is to cap high data at the 99th percentile, it also yields ordinal data that are low-precision estimates of a library’s original values, and scores that are arithmetically invalid.
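A small, hypothetical example of the difference: a standard score keeps the magnitude of an unusually high value, while a percentile-style rank records only its position in the ordering (the numbers below are invented):

```python
import statistics

# Invented per capita values for one peer group, including one
# unusually high ("outlier") value.
values = [2.0, 3.0, 3.5, 4.0, 4.5, 5.0, 40.0]

mean = statistics.mean(values)
stdev = statistics.stdev(values)

# Standard scores preserve how far the outlier sits from the mean...
z = {v: (v - mean) / stdev for v in values}

# ...while ranks record only ordering: here 40.0 ends up no "farther"
# from 5.0 than 5.0 is from 4.5, and the original magnitudes are lost.
order = sorted(values)
rank = {v: order.index(v) / (len(values) - 1) for v in values}
```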

22. Why does the LJ Index, in certain cases, allow one per capita service output to outweigh the other three when determining LJ Index scores and Star ratings?
The LJ Index of Public Library Service is an index of a simple, clear concept: service output. It does not value one of the four types of use (visits, circulation, public Internet computer use, or program attendance) over the other three. It is, therefore, desirable for a library’s legitimate exceptional performance on one statistic to outweigh lower values on the other three. Anyone looking at an LJ Index score and Star rating can be certain of what it means: exceptional per capita service output of one or more types.

No other library rating system declares the concept it measures (quality? efficiency? productivity? something else?) or explains how its particular combination of statistics reflects that concept. Also, more convoluted library rating systems can have single indicators that carry an inordinate amount of weight compared to others. Good examples are systems that count circulation data in multiple ways, giving that indicator much more weight than the others.

23. How is the design of the Library Journal (LJ) Index of Public Library Service holding up over time?
As noted in the initial article that proposed the LJ Index, its statistical design will be re-analyzed and reviewed at regular intervals. The Trends in the LJ Index section of 2011’s fourth round of America’s Star Libraries articles examines this question in depth, and the 2012 article examines “calls for new output measures.”

24. Are there graphics or images my library can use to display its inclusion on the list of LJ Index star libraries?
Yes, we can supply graphics or image buttons that identify LJ Index star library selection. Send inquiries to Kevin Henegan, khenegan@mediasourceinc.com.
