Our library is due for an external review next year, so when I saw that the standards for academic libraries had been revised I wanted to see how they might inform our self-study process.
Since I started working in academic libraries, the standards have gone through three editions and been consolidated. When new standards for college libraries were published in 2000, they were radically different from the “how many books, how many seats, how much desk space per student” approach. (I am not making this up; I remember getting out a tape measure and thinking, “Dang, our carrels are too small to meet the standards.” They’re still too small. It turned out that nobody cared enough that we failed that standard to spend a lot of money on new carrels.)
The 2000 college library standards were more about what students got out of libraries than about how much went into them, and they dropped the notion that numerical measures were universally appropriate or useful. It was unnerving for some library directors, who thought we were getting a bit too relativistic when we could no longer point to firm guidelines to make a case for libraries. But for some of us, it finally put the emphasis where it belonged: on how college libraries contribute to student learning.
A few years later, the college library standards were reworked to apply to academic libraries of all kinds. And now we have these new ones which focus on “expectations for library contributions to institutional effectiveness.”
In many ways these standards don’t seem philosophically that different to me—they are, like the previous ones, focused on outcomes—but they are different in tone and in scope. The tone seems defensive. We have to prove our value or… what? The institution will shut us down? Really? I know there’s a vogue for bookless libraries, but I haven’t heard too many institutions brag about being libraryless universities. Besides, I haven’t found that demonstrating great outcomes in spite of limited inputs improves the input side of the equation.
As for the scope: well, we know we are understaffed in comparison to the schools we compare ourselves to when we’re not feeling defensive. But my first impression was that implementing these standards as written would take at least one FTE to prove how our lack of staff affects our ability to contribute to institutional effectiveness, and that just doesn’t seem like an effective use of resources.
There is something dizzyingly fractal about these standards. There are nine principles; each has three to nine performance indicators. For each performance indicator we’re supposed to develop multiple outcomes. For some reason, the words of Wanda Gág’s classic children’s book Millions of Cats come to mind. Outcomes here, outcomes there, evidence-based outcomes everywhere. Hundreds of facts. Thousands of facts. Millions and billions and trillions of facts.
Okay, I’m exaggerating. It’s not that bad. We’re meant to adapt these standards to our institutional settings and collect the evidence that matters to us, and these new standards are in tune with the increasing burden of proof placed on higher education. But we do, as a profession, seem to have a habit of dividing everything that we think matters into minute measurable portions. When the standards for information literacy were first published, I showed them to a group of faculty, who instantly objected that research skills were divided into efficient performance of tasks, like a Tayloristic time and motion study; there was no acknowledgement that students who could perform lots of tasks well could actually put them together to do good research. They were also dismayed that words like creativity and originality were missing.
Beyond this fractal division of values into smaller and smaller parts, these standards reflect a general sense that higher education has failed and that the only way to survive is to gather lots of evidence that we’re worth it. In a weird way, this ends up with the focus being, like the pre-2000 standards, all about us and our survival.
This clicked together with something I bumped into when I should have been working on our self-study document. A review of several books bewailing the state of higher education appeared recently in The New York Review of Books, and blogger Historiann invited fellow bloggers to talk back. One of them, the author of Reassigned Time 2.0, wrote a great post titled “The Epic Fail, or Failure as the Ultimate Four-Letter Word.” She questions why we are so quick to assume “there’s a whole lot of failing going on.” It’s a brilliant post, and it clarified for me something that is frustrating about so much of our discourse on the future of libraries.
Indulging in failure
She questions the assumptions we make about higher education’s failings and why we take such pleasure in calling out our failures. She reminds us that, in many ways, the state of higher education has improved in recent decades. She made me think back 20 years to our small collection of books and journals (we have far more resources today), the seven to ten days it took for interlibrary loan requests to arrive (now they sometimes arrive the day they are requested), and our inability to support the kind of research our faculty routinely do today. (And yes, the expectation that we can provide what a research library provides sometimes makes me crazy, but they’re doing amazing work and they are taking their students along for the ride. It’s fabulous learning.)
It made me think, “You know? Sure, we have issues, but we should give ourselves a little credit.”
I wonder if by devoting such a lot of effort to assembling evidence that our libraries aren’t failures—when, let’s face it, nobody’s going to close the library anytime soon—we’re protesting too loudly that we are relevant, we do matter, we’re not extinct. I’m not against reasonable assessment. There’s a lot of value in probing what our students are learning and where they’re having difficulty and using those observations to do better. But that’s a climate of healthy curiosity, not self-justification.
Why do librarians have so little faith in libraries and their value? Faculty, students, and administrators aren’t nearly as convinced as librarians are that we’re teetering on the edge of obsolescence. The author of Reassigned Time 2.0 ends with some advice that is worth considering:
I’d much rather ask what we might do to help students to succeed, to help faculty to teach, to help administrators (yes, I even want to help them) to facilitate the work of the university. I’m not sure that “failure” is really the point here. I think rhetorics of failure, discourses of failure, might be a way to reinforce the status quo, the networks of power that oppress the vast majority of people who aren’t attending Ivies, elite liberal arts colleges, and even flagship state universities….
Maybe we need to think less about how to justify our existence, how to avert the “crisis” in which we find ourselves, and how to sidestep failure. Maybe we need to stop retreating into the easy pleasure of announcing our failures at every turn, or, perhaps more insidiously, in complicating our analyses of our failures. That’s not transgressive: it’s pathetic and counterproductive. Maybe less important than thinking about our mistakes and our missteps is thinking about what we already do well and what we might do better. Maybe our “epic fail” is our focus on our failings.