April 24, 2018

Library Assessment as Check Mark | Not Dead Yet

Here’s an issue I’ve been hearing about from colleagues quite a lot lately—that of libraries undertaking and carrying out assessment projects and then ignoring or “trumping” the findings by doing what they wanted to do in the first place, while putting a check mark next to “assessment” in their mental (or literal) to-do lists, as if to say, “yep, did that!” My thought in such cases is: well, no, you didn’t do that!

I don’t have to tell anyone that “assessment” is a huge buzzword in libraries and other educational institutions today—you see it everywhere you look in the literature, and even if you’re not looking, it crosses your path. It’s the ubiquitous thing to do to demonstrate value, codify rational practices, increase budgets, and justify just about anything libraries do, or want to do. But assessment is, in fact, much more than just a buzzword. Done well and in good faith, assessment projects form a major pathway to creating a truly user-centered library. Listening to, and acting upon, user needs can create the kinds of relevant and vibrant libraries we all aspire to build.

Realistically, there is seldom a set of circumstances in which a library can do everything users ask for, whether for monetary, practical, or ethical reasons. Which means, of course, that every assessment project needs a disclaimer to participants: while the library is asking for their feedback, it cannot guarantee that all the advice given will be implemented or acted upon. But what colleagues are reporting to me, sadly, is that they are being asked essentially to go through the motions of doing surveys, focus groups, and other modes of assessment, and that the results seem barely to be read by the folks doing the asking. Instead, once the assessment is done and results are reported back, decision makers put a tick mark next to “assessment” in the project plan, noting that feedback and input from users was actively sought and gathered in putting together a project or service. Then they go ahead and do the project or create the service as originally planned, heeding little or none of the feedback. In some cases, someone may have recently gone to a conference and seen a nifty new piece of equipment demonstrated there, and decided that installing it in the library (rather than buying or doing whatever the users asked for) would make them look really cutting edge and educationally cool.

Is this being done with malice aforethought? Definitely not. It is likely being done because the decision makers involved truly believe they are doing users a favor by exposing them to the newest technological bell and whistle, or at least to a technological bell and whistle that is new to their institution. What is maddening to my colleagues who report this to me is that quite large sums of money get spent in this way on so-called “technological solutions” that solve something that was not a problem or need in the first place. And even more inexplicably, this pile of money gets spent on something about which no pre-assessment was done with either staff or users to see if it can even be useful in the setting in which it is installed. As an even more bizarre component of this scenario, colleagues report that the subsequent use of said equipment is often neither monitored nor assessed. Given the ongoing costs of some whiz-bang stuff, this seeming disjuncture between assessment theory and practice is puzzling.

Unfortunately, this kind of practice does more than milk the budget for big bucks—it quickly vitiates the credibility of those concerned within the institution and demoralizes those involved in gathering user feedback. If it occurs repeatedly, it creates mistrust and skepticism among both users and staff, since it indicates that assessment is not being done in good faith, but disingenuously, as an ultimately ineffectual means to a predetermined end. Considering how much work actually doing assessment entails, it just doesn’t make sense to go through the motions if it’s really just a futile exercise. So if you’re not actually going to use any of the results of assessment, PLEASE don’t bother to do it in the first place. It raises users’ expectations and is a sure-fire way to demoralize staff. And bad faith assessment is, in my opinion, worse than no assessment at all.

As noted here, I’ve heard from quite a number of colleagues on this subject. I would very much like to know if this kind of scenario has played out at other readers’ libraries, too, or if your experiences have been very different from those described to me. You can post comments here or email me at: claguard@fas.harvard.edu.


About Cheryl LaGuardia

Cheryl LaGuardia always wanted to be a librarian, and has been one for more years than she's going to admit. She cracked open her first CPU to install a CD-ROM card in the mid-1980s, pioneered e-resource reviewing for Library Journal in the early '90s (picture calico bonnets and prairie schooners on the web...), won the Louis Shores / Oryx Press Award for Professional Reviewing, and has been working for truth, justice, and better electronic library resources ever since. Reach her at claguard@fas.harvard.edu, where she's a Research Librarian at Harvard University.



  1. Susanne Stranc says:

I’ve encountered this recently, when the motions were gone through to pick a new ILS, but those going through them knew in advance which company would be picked, because a recent hire in tech services had already worked with the software. The previous time, we all knew which system the system administrator favored, but their presentation was so bad we had to go with another system [which we are now replacing due to unfilled promises].

• Oh dear, Susanne, that does sound like a classic case of assessment as check mark. Most of the folks with whom I’ve talked about this practice note that they’d vastly prefer the decision just be made at the outset, rather than having their time and effort wasted on an empty exercise. I agree with that — if a decision has already been made, why put folks through a charade? Answer: for the check mark! Sorry to hear, too, about that bad presentation and the unfilled promises.
      Good luck with future efforts, and thanks for writing,