Here’s an issue I’ve been hearing about from colleagues quite a lot lately: libraries undertaking and carrying out assessment projects and then ignoring or “trumping” the findings by doing what they wanted to do in the first place, while putting a check mark next to “assessment” on their mental (or literal) to-do lists, indicating, “yep, did that!” My thought in such cases is: well, no, you didn’t do that!
I don’t have to tell anyone that “assessment” is a huge buzzword in libraries and other educational institutions today—you see it everywhere you look in the literature, and even if you’re not looking, it crosses your horizon. It’s the ubiquitous thing to do to find value, codify rational practices, increase budgets, and justify just about anything libraries do, or want to do. But assessment is, in fact, much more than just a buzzword. Done well and in good faith, assessment projects form a major pathway to creating a truly user-centered library. Listening to, and acting upon, user needs can create the kinds of relevant and vibrant libraries we all aspire to build.
Realistically, there is seldom a set of circumstances in which a library can do everything users ask for, whether for monetary, practical, or ethical reasons. That means, of course, that every assessment project needs a disclaimer to participants: while the library is asking for their feedback, it cannot guarantee that all the advice given will be implemented or acted upon. But what colleagues are reporting to me, sadly, is that they are being asked essentially to go through the motions of doing surveys, focus groups, and other modes of assessment, and that the results seem barely to be read by the folks doing the asking. Instead, once the assessment is done and the results are reported back, decision makers put a tick mark next to “assessment” in the project plan, noting that feedback and input from users were actively sought and gathered in putting together a project or service. Then they go ahead and do the project or create the service as originally planned, heeding little or none of the feedback. In some cases, someone may have recently seen a nifty new piece of equipment demonstrated at a conference and decided that installing it in the library (rather than buying or doing whatever the users asked for) would make the library look really cutting edge and educationally cool.
Is this being done with malice aforethought? Definitely not. It is likely being done because the decision makers involved truly believe they are doing users a favor by exposing them to the newest technological bell and whistle, or at least to one that is new to their institution. What is maddening to my colleagues who report this to me is that quite large sums of money get spent in this way on so-called “technological solutions” to something that was not a problem or need in the first place. Even more inexplicably, this pile of money gets spent on something about which no pre-assessment was done with either staff or users to see whether it would even be useful in the setting in which it is installed. As an even more bizarre component of this scenario, colleagues report that the subsequent use of said equipment is often neither monitored nor assessed. Given the ongoing costs of some whiz-bang stuff, this disjuncture between assessment theory and practice is puzzling.
Unfortunately, this kind of practice does more than milk the budget for big bucks—it quickly vitiates the credibility of those concerned within the institution and demoralizes those involved in gathering user feedback. If it occurs repeatedly, it creates mistrust and skepticism among both users and staff, since it indicates that assessment is being done not in good faith but disingenuously, as an ultimately ineffectual means to a predetermined end. Considering how much work is entailed in actually doing assessment, it just doesn’t make sense to go through the motions if it’s really just a futile exercise. So if you’re not actually going to use any of the results of assessment, PLEASE don’t bother to do it in the first place. It raises users’ expectations and is a sure-fire way to demoralize staff. And bad faith assessment is, in my opinion, worse than no assessment at all.
As noted here, I’ve heard from quite a number of colleagues on this subject. I would very much like to know if this kind of scenario has played out at other readers’ libraries, too, or if your experiences have been very different from those described to me. You can post comments here or email me at: email@example.com.