As technology has become central to American life, nearly every organization leverages data to improve its performance. Political campaigns analyze potential voters. Credit-reporting agencies devise algorithms to predict who can repay loans and credit cards. Businesses reach potential customers with highly targeted marketing and advertising messages.
Yet when it comes to analyzing their own data, many libraries have been unable to capitalize on it. Although they generate a trove of useful information about which titles and genres are most popular in which branches, they haven't been able to fully leverage that data to boost circulation through better collection development, ordering, and programming, or to use it to justify buying decisions, a necessity in the current financial climate.
With cloud-based services now available to house large amounts of data and scalable platforms to analyze the data efficiently, data analytics is only going to get bigger. That technology is available—even to libraries—and using it is quickly becoming the responsible way to manage existing collections and future collection buying.
As libraries adopt this technology, however, they need to take note of an emerging debate. At the heart of the debate is this question: who controls the data? Does a library’s usage data belong to the library, or does it belong to an outside vendor? Is the technology platform being offered an open application programming interface (API), one that’s compatible with other systems so that libraries have flexibility? Or is it a closed system that only the vendor can unlock, usually for an extra fee?
Despite the growing recognition of the importance of "Big Data Analytics" for libraries, some vendors are restricting libraries' access to their own data. These constraints prevent libraries from accessing the in-depth usage analytics that would help them make evidence-based purchasing and weeding decisions.
The restrictions also mean libraries cannot compare or share their usage trends with their peers, a useful tool for collection development. What's more, once libraries lose control of their own usage data through such limitations, the power to decide how that data is used shifts wrongly to the vendor. Such trends are particularly concerning at a time when generating usage data from either an ILS or a cloud-based platform is more important than ever.
For the vendors that offer these types of restricted platforms, the benefits are clear: only they can perform meaningful analysis, and they can charge for that service. For the libraries, though, the benefits of such an arrangement are meager. To succeed, libraries need access to all types of usage data and to easy-to-create reports that do not require significant staff time or dependence on an outside vendor to produce.
Of course, any company building products for the library community should protect itself through patents and trademarks. The intellectual property invested in building these server- and cloud-based systems is substantial and unique to each company, and it should be protected. But firms do not need to protect the library transaction data that flows through their systems. That information is not proprietary to the vendor. The truth is that firms offering "Big Data Analytics" are not in competition with ILS vendors; rather, they rely on access to this data from the libraries to provide evidence-based analysis for the libraries' benefit.
Libraries should be free to choose who analyzes the data and how, whether it is an outside firm or the library itself. The data, after all, is the library’s data. Having libraries’ data be widely accessible would result in competition among providers to produce the best-in-breed products for libraries to choose from, which would benefit both libraries and their patrons.
But until our industry reaches that point, libraries can protect themselves from these restrictions by seeking platforms with open APIs. These interfaces allow vendors and data analytics platforms to work together through the integration of applications. Open APIs not only ensure that libraries retain access to their usage data; they can also be used to automate data extraction, saving staff time. In short, they give libraries the freedom to analyze their usage data, all of it, on whatever platform they choose.
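To make the idea concrete, here is a minimal sketch of what automated extraction through an open API might look like. Everything in it is hypothetical: the JSON field names (`branch`, `title`, `checkouts`) and the sample payload stand in for whatever schema a real ILS vendor's documented API returns. The point is simply that once a library can fetch its own usage records programmatically, a few lines of code can aggregate them without vendor involvement.

```python
# Hypothetical sketch: aggregating circulation data pulled from an open API.
# The field names and sample payload below are illustrative, not any
# vendor's actual schema. A real integration would fetch the JSON over
# HTTP using the vendor's documented endpoint and authentication.
import json
from collections import Counter

# Stand-in for the JSON body an open usage-data API might return.
SAMPLE_RESPONSE = json.dumps([
    {"branch": "Main", "title": "Title A", "checkouts": 42},
    {"branch": "East", "title": "Title B", "checkouts": 7},
    {"branch": "Main", "title": "Title C", "checkouts": 10},
])

def checkouts_by_branch(payload: str) -> Counter:
    """Total the checkout counts per branch from an API response."""
    totals = Counter()
    for record in json.loads(payload):
        totals[record["branch"]] += record["checkouts"]
    return totals

print(checkouts_by_branch(SAMPLE_RESPONSE))
```

A script like this could run nightly, feeding a spreadsheet or dashboard, which is exactly the kind of routine report that a closed platform would otherwise charge the library to produce.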
For centuries, libraries have been repositories of knowledge. Going forward, let’s make sure that libraries are fully able to harness the knowledge about themselves and their patrons to improve the library experience to the greatest possible extent.
B. Scott Crawford is Vice President and General Manager of collectionHQ, an evidence-based collection development software tool owned by Baker & Taylor.