During the few years that the Star Library ratings and LJ Index scores have existed, the changes in the public library landscape have been nothing short of seismic. Five years ago, only some libraries had embraced certain emerging trends, and many were just taking notice of them. Today, those trends are rapidly shaping commonplace expectations.
Via their own notebook computers or handheld devices, library visitors access library resources online without relying on public Internet computers. Ebooks and audiobooks available for streaming or download via library websites enable voracious readers and listeners to feed their book habits without visiting their libraries in person. Happily, such virtual transactions are now included in IMLS circulation data—at least for libraries that are up to speed on counting them.
Perhaps the LJ Index service measure least impacted by the streaming media/smartphone/tablet computer revolution is attendance at library programs. Though with time—and the advent of better platforms for “virtual events,” such as Google+ Hangouts—even that measure may require redefinition to reflect accurately what libraries are delivering to their communities.
In the face of fundamental change in the ways public libraries do business, it’s natural to consider whether the LJ Index design is holding up. To investigate that question, we conducted a confirmatory analysis of the index design, correlating each of the four per capita measures with the index score derived from them for all four editions of the LJ Index.
For all four LJ Index years, all four measures are very strongly correlated with the overall index score. Bivariate correlation coefficients with LJ Index scores range from .834 for visits per capita to .737 for program attendance per capita this year (2009 data). That compares very favorably to the first round of LJ Index scores, for which bivariate correlations range from .836 for visits per capita to .686 for program attendance per capita.
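The confirmatory analysis described above boils down to computing bivariate (Pearson) correlation coefficients between each per capita measure and the overall score. A minimal sketch, using invented per capita values and scores rather than actual LJ Index data:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Bivariate (Pearson) correlation between a per capita measure and index scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented data: visits per capita and index scores for five hypothetical libraries
visits = [3.9, 5.2, 7.1, 4.4, 6.0]
scores = [610, 745, 930, 655, 820]
r = pearson_r(visits, scores)  # values near 1 indicate a strong positive association
```

A coefficient near .8, like those reported above, means that libraries ranking high on a given measure tend strongly to rank high on the overall index score as well.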
It is impossible to know how new phenomena not measured by these data—for instance, Wi-Fi access in libraries—figure into the scenario.
What is clear, however, is that, despite such developments, the correlations between LJ Index scores and the newer measures—public Internet computer use and program attendance—have strengthened steadily, while the correlations between LJ Index scores and the older measures—visits and circulation—have diminished only slightly. Despite changing technology, presence of data outliers, and an ever-growing number of libraries reporting the data, the statistical validity of the LJ Index is stronger than ever.
Our hope is that IMLS and the state library agencies are moving as quickly as possible to incorporate into existing service measures, or to add new service measures for, visits to library websites, downloads of streaming media and other database content, chat- and email-based reference transactions, technology training sessions, and sign-ups or sessions for Wi-Fi access via the library. We understand that the IMLS Library Statistics Working Group has assigned a task force to examine such issues and look forward to its report and actions based on it.
Reviewing the trends for four service measures across the four years of LJ Index data so far, it occurs to us to remind our readers about the basic nature of the index scores. Simply put, a library’s score is based on how far above or below its expenditures group means (or averages) it falls on the four measures. Thus, one of the easiest things to forget about the nature of the LJ Index is that its scores are based on a “moving target” from year to year. Even if, hypothetically, a library had reported the same per capita values all four years, its LJ Index score for each year would differ, depending on the expenditures group the library belonged to each year, which other libraries did and did not report the required data each year, and the annual group averages on each statistic.
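To make the "moving target" concrete, here is a minimal sketch, with invented data and a simplified scheme (not the actual LJ Index formula), that scores each library by averaging its ratios to its expenditures group's means on the four measures:

```python
from statistics import mean

MEASURES = ["visits", "circulation", "internet_use", "program_attendance"]

def group_scores(libraries):
    """Score each library in one expenditures group against the group means.

    `libraries` maps a library name to its four per capita values; the score
    is the average of the library's ratios to its group's means.
    """
    group_means = {m: mean(lib[m] for lib in libraries.values()) for m in MEASURES}
    return {
        name: sum(lib[m] / group_means[m] for m in MEASURES) / len(MEASURES)
        for name, lib in libraries.items()
    }

# Invented peer group of three hypothetical libraries
group = {
    "A": {"visits": 7.0, "circulation": 12.0, "internet_use": 1.8, "program_attendance": 0.5},
    "B": {"visits": 4.0, "circulation": 6.0, "internet_use": 1.0, "program_attendance": 0.3},
    "C": {"visits": 5.5, "circulation": 9.0, "internet_use": 1.4, "program_attendance": 0.4},
}
scores = group_scores(group)
```

Because library C sits exactly at the group means in this example, it scores 1.0; add or remove a peer and the means shift, so every library's score shifts with them, even if its own per capita values stay the same.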
An examination of the changing means for each measure by expenditures category reveals two conspicuous patterns in the data for the public libraries scored.
1 While there is considerable movement within the four per capita measures, regardless of expenditures group, mean circulation—ranging from five to almost 12 items annually—remains larger than mean visits (almost four to more than seven annually), which in turn exceeds public Internet computer use (about one to almost two annually) and program attendance (.32 to .50 annually). Per capita use of public Internet computers has not even begun to close in on visits or circulation. (In the absence of data, we can only wonder how library website visits and database downloads might compare.) Likewise, despite some impressive gains in some expenditures categories, library programs still draw far smaller numbers of users than the other services.
2 Per capita means for circulation and visits tended to increase steadily from year to year for most expenditures categories, while means for public Internet computer use and program attendance tended to rise more modestly, or remain static, from year to year.
The extent to which the statistical means (averages) have changed during the relatively short history of the LJ Index is clear when we compare data for libraries that received index scores in the first and latest years, 2009.1 and 2011, which correspond to the 2006 and 2009 IMLS data files. For most expenditures categories on most statistics, the change across that interval was in the middle single digits or the low double digits. For detailed data on per capita means by expenditures category, see “Star Data by Peer Group.”
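The percentage changes reported below are simple relative changes in group means between the 2006 and 2009 data years. A minimal sketch with invented figures (not actual IMLS means):

```python
def percent_change(old_mean, new_mean):
    """Relative change, in percent, from the 2006 data-year mean to the 2009 mean."""
    return (new_mean - old_mean) / old_mean * 100

# Invented group means for illustration
print(round(percent_change(10.0, 12.0), 1))  # a rise prints 20.0
print(round(percent_change(10.0, 9.0), 1))   # a drop prints -10.0
```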
Circulation per Capita
Change in mean circulation per capita was greatest for the top expenditures category, $30 million–plus, at 28.7 percent, and least for the bottom expenditures category, $10,000–$49,999, at -9.5 percent. Notably, that lowest spending group of libraries is the only one for which this statistic dropped between the 2006 and 2009 data years.
Visits per Capita
Change in mean visits per capita was also greatest for the top expenditures category, $30 million–plus, at 18 percent, and least for the bottom expenditures category, $10,000–$49,999, at -2.5 percent. Again, the lowest spending group is the only one for which this statistic dropped from 2006 to 2009 data.
Public Internet Computer Use per Capita
By contrast, change in mean public Internet computer use per capita was greatest among the lower expenditures categories—ranging from 17.4 percent for the $50,000–$99,999 category to 11.1 percent for the $10,000–$49,999 category (notably, the only statistic that increased for this category across the four years). Among most of the higher expenditures categories, the change on this statistic was in single, and sometimes negative, digits—ranging from 6.2 percent for the $10 million–$29.9 million category to -1.6 percent for the $5 million–$9.9 million category.
Program Attendance per Capita
For most expenditures categories, mean program attendance per capita tended to change at the highest rate—reaching double digits for three categories ($10 million–$29.9 million, $1 million–$4.9 million, and $400,000–$999,999) and approaching it for another ($200,000–$399,999). The lowest spending range was the only one for which this statistic dropped between 2006 and 2009.