by Tania Fersenheim, Content & Applications Manager at Fenway Libraries Online
Are we spending money wisely? Librarians and administrators ask themselves this question in many ways, both big and small, every day. Sometimes it keeps us up at night. It’s inherent in the choices we make between different brands of dry-erase marker, between different resources covering the same subject area, and in decisions to implement or replace software that helps us do our jobs.
Libraries are asked more and more to justify the dollars we spend, and many of us are investing time and money in analyzing usage data, assessing the effectiveness of instruction programs, and otherwise attempting to quantify the return on our investment of the institution’s money.
A colleague used the word “brobdingnagian” in casual conversation the other day, and it struck me again how apt the term is in describing current ILS industry trends. I am not alone in noting that Alma and WMS, the top contenders among ILSes, are moving us back in the direction of gigantic one-stop-shopping library management systems, after a couple of decades of external solutions popping up to fill the emerging needs unmet by the ILS — eResource management, federated search, subject guides, data warehousing/reporting, and even discovery.
The ILS is a behemoth in many ways, and while it’s a must-have for us to run our libraries, we can ask ourselves if we are in fact spending an outsized amount of money for “commodity” functions, and for functions that could be or are already performed better elsewhere. In other words, we need to be asking the same ROI questions about the dollars we spend on our ILS that we ask about dollars spent on materials and programs. What we need is an ongoing discussion about the ILS, what it must do, what it does not need to do, and how we can best connect to the applications and services that we deem the very best for a multitude of functions.
I’ve cataloged a few assumptions about the current state of the work we do related to the ILS, and some questions we should be asking ourselves. My goal is not to answer these questions — that’s an exercise for the library community as a whole — but instead to crystallize some of the thoughts and questions running through my conversations with colleagues.
Assumption: The return to (or final realization of the “dream” of) a monolithic ILS means that activities we had started doing elsewhere are being pushed into (or back into) the ILS.
Question: What activities must we absolutely, positively do within the ILS? The answer to this may vary from library to library, but there are a few that I think would come up on everyone’s list:
- Management of metadata for physical resources
- Inventory control for physical resources – knowing what we have, where it lives, where it actually is, and how to get to it
- Patron management – at least some aspects of patron management, since we ultimately need to know who should be accessing our stuff and who has our stuff in their possession
- Storage of our inventory transaction data
Question: What activities are we doing outside the ILS that our ILS vendors might like us to do inside the ILS?
- eResource management and usage analysis – Many of us use external ERMs such as Gold Rush and CORAL, or even Excel spreadsheets, to track our licenses and to store and crunch usage statistics.
- Cataloging and MARC record manipulation – Who among us wouldn’t rather create MARC records in Connexion and make complex batch changes in MarcEdit or Perl than in our ILS, even when ILSes attempt to include those tools?
- Acquisitions – Vendor acquisition websites like GOBI and EBSCOnet streamline selection, ordering, and renewal processes at the supplier end, allowing us to buy materials, manage spending by fund, analyze our spending, and export data to the ILS. Many of us simultaneously need to manage, or at least record, funds and spending in an ERP like PeopleSoft. And then there’s all the work we do in Excel.
- Reporting – Even with vendor-provided services like Alma Analytics, we export a lot of data and analyze it using tools like Tableau, Cognos, homegrown data warehouses and even (again) Excel. We combine it in various ways with data from external sources like vendor COUNTER reports and proxy server logs to try to draw the big picture of activity in our libraries. At least one vendor (Infor) is moving in the direction of ILS-agnostic reporting tools with Open V-Insight, which allows combination of data from disparate sources like the ILS, proxy server, and ILL for centralized analysis.
- Database and eJournal management and discovery – One of the classic deficiencies of older ILSes, and one which ILSes like Alma and WMS attempt to address, is the management of electronic resources. Services like SFX, Metalib and even LibGuides grew up to fill that need and they do it well.
- OpenURL/Link resolvers – Herbert Van de Sompel started us down the OpenURL path with SFX and a multitude of other services have been deployed since, like EBSCO Full Text Finder.
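At heart, an OpenURL is just a context-sensitive query string pointed at whatever resolver a library has chosen. As a minimal sketch, here is how a Z39.88-2004 (OpenURL 1.0) journal-article request can be assembled with Python’s standard library; the resolver base URL is a placeholder, and the citation is purely illustrative:

```python
from urllib.parse import urlencode

def build_openurl(resolver_base, **citation):
    """Build an OpenURL 1.0 (Z39.88-2004) KEV query for a journal article.

    resolver_base is whatever link resolver the library has deployed
    (SFX, Full Text Finder, etc.); citation keys are prefixed with
    'rft.' per the standard journal metadata format.
    """
    params = {
        "url_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    }
    params.update({f"rft.{k}": v for k, v in citation.items()})
    return resolver_base + "?" + urlencode(params)

# Illustrative citation; the resolver hostname is made up.
url = build_openurl(
    "https://resolver.example.edu/openurl",
    jtitle="Library Trends", volume="64", issue="2",
    spage="165", date="2015",
)
print(url)
```

The resolver then uses its knowledgebase of the library’s holdings to turn that citation into an appropriate copy, which is exactly the function we may or may not want bundled into the ILS.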
Assumption: The activities we perform outside the ILS are being performed there for a reason.
Question: Why are we doing these activities outside the ILS and what are we missing out on if they are pushed into the monolithic ILS?
- Best-of-breed applications – The ability to choose the software that best fits our needs and complements the core ILS functions. I will return to the example of bibliographic metadata manipulation and ask again: Who among us wouldn’t rather create MARC records in Connexion and make complex batch changes in MarcEdit than in our ILS? Current discovery pairings like Alma/Primo and WMS/WorldCat are difficult or impossible to uncouple, and when my previous institution moved to Alma/Primo we keenly missed the functional richness of the SFX eJournals A-Z list.
- Access to data – Aleph Reporting Center, Alma Analytics, CLSI Director’s Workstation: the names change, but each represents an abstraction layer between us and our data, with decisions made by the vendor as to what we could or should have access to in a reporting tool. We return again and again to extracting raw data ourselves and using tools like Tableau and homegrown data warehouses to transform and analyze it based on our specific local needs. SaaS systems like Alma and WMS create further obstacles to reporting and to the export and re-use of our data for other purposes, by not permitting direct database access.
- Reducing repetition – Some of our work still needs to be done outside the ILS. Why should we repeat it in the ILS? We order and track spending in external services like GOBI and then need to simultaneously manage or at least record funds and spending in the ILS and an ERP like Peoplesoft. That’s a lot of data being fed or re-entered into multiple locations.
- eResource management – This is one of the areas where I think it makes sense for the functions to be folded into the ILS. Tracking licenses and vendor, support, and access information, etc. (though maybe not vendor usage statistics and analysis) seems like a good fit for the ILS, but the capabilities of Alma, at least, need a lot more development before libraries can give up systems like Gold Rush and CORAL.
- OpenURL/Link resolvers – This is another area that I think makes sense to include in the ILS, as long as we also have the option of using a different service if that’s what we prefer, which brings us back to the ability to choose best-of-breed software.
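The cross-source analysis we keep returning to is often just a join between exports that no single vendor tool holds at once. A minimal stdlib sketch of the idea, combining a COUNTER-style usage report with a tally derived from proxy server logs; the column names and figures are invented for illustration, not real report formats:

```python
import csv
import io

# Toy stand-ins for two exports we often reconcile by hand: a vendor
# COUNTER-style usage report and session counts pulled from proxy logs.
counter_report = """platform,total_item_requests
JSTOR,1200
ScienceDirect,3400
"""

proxy_log_hits = """platform,sessions
JSTOR,310
ScienceDirect,890
"""

def load(csv_text, value_col):
    """Read a platform -> count mapping from a CSV export."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return {r["platform"]: int(r[value_col]) for r in rows}

usage = load(counter_report, "total_item_requests")
sessions = load(proxy_log_hits, "sessions")

# Join the two sources on platform name to sketch the "big picture":
# e.g. item requests per proxy session as a rough engagement ratio.
combined = {
    p: {
        "requests": usage[p],
        "sessions": sessions.get(p, 0),
        "requests_per_session": round(usage[p] / max(sessions.get(p, 1), 1), 1),
    }
    for p in usage
}
print(combined)
```

Nothing here is sophisticated; the point is that the join key, the ratio, and the question being asked are all local decisions, which is precisely what a vendor-defined reporting layer tends to foreclose.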
Assumption: It is, or someday will be, possible to do our work in the setting that best suits our local needs and work styles, and that fully supports interconnectedness between the must-have ILS functions and all the other functions we need or prefer to perform outside the ILS. The big challenge with performing functions outside the ILS has always been connecting the ‘must have’ ILS to those functions.
Question: What do we need to reach this state of interconnectedness?
- Connections – Data must flow. It sounds unnecessary to say we need connections to accomplish interconnectedness, but we need a rich and bountiful suite of APIs with solid connections at each end so that data can flow easily and securely between systems.
- A platform with a flexible data store – One of the big challenges to interconnectedness is the need to transform the data as it moves from system to system. At each transformation (from GOBI to Alma to PeopleSoft, for example) some data must be discarded or changed to fit its destination. Much of this transformation is neutral, some is beneficial, but each represents a potential loss, and keeping data up-to-date across multiple systems with disparate data formats can be a full-time job. What opportunities are there for storing data in one place and interacting with it in its native location, or for reducing the need to change data when it moves into a central data store?
- Choice – A marketplace that enables us to choose amongst multiple options, either at the platform level or the app level, to build our own set of best-of-breed products to meet both run of the mill and idiosyncratic local needs. A marketplace of two or three major ILSes is not exactly an embarrassment of riches.
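The lossiness of each hop can be made concrete with a toy mapping. Every field name and schema below is invented for illustration, not an actual GOBI, Alma, or PeopleSoft format; the point is only that each destination keeps what it understands and silently drops the rest:

```python
# A toy order record, in the shape a vendor export might take.
# All field names are invented, not real GOBI/Alma/PeopleSoft schemas.
vendor_order = {
    "isbn": "9780000000000",
    "title": "Example Title",
    "fund_code": "HUM-MONO",
    "list_price": 95.00,
    "discount_pct": 12,
    "selector_note": "Requested by faculty",  # has no home downstream
}

# Each hop keeps only the fields its destination understands.
ILS_FIELDS = {"isbn", "title", "fund_code", "list_price"}
ERP_FIELDS = {"fund_code", "list_price"}

def transform(record, keep):
    """Project a record onto the destination's schema; anything else
    is dropped -- the 'potential loss' at each transformation."""
    return {k: v for k, v in record.items() if k in keep}

ils_record = transform(vendor_order, ILS_FIELDS)
erp_record = transform(ils_record, ERP_FIELDS)

dropped = set(vendor_order) - set(ils_record)
print(sorted(dropped))  # fields lost on the first hop alone
```

A shared platform with a flexible data store would, in principle, let the selector’s note and the discount live alongside the fund data instead of falling out at the first hop.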
Final Assumption and Question:
The ILS is many libraries’ most expensive technology investment. If there are only a few things that absolutely, positively must be done within the ILS, and we have so many things we are doing or would rather be doing outside the ILS, are we paying top dollar just to be able to check out a book?
Tania Fersenheim has worked in library automation for over 20 years, on both sides of the library/vendor equation. After spending several years as a support analyst and trainer at Geac Computers, Tania has gone on to administer a diverse array of systems in academic libraries, including major vendor systems and an increasing variety of open source systems and applications. Tania holds an A.B. in English from Cornell University and an M.S. in Library and Information Science from Simmons College.