When I recently did my annual Best New Reference Databases list (it’ll be in the March 1 issue of LJ) I included one outstanding new re-release: the August 2012 update of JSTOR from ITHAKA. I reviewed the file in the 11/1/12 LJ and was impressed by the new main search page, as well as by the ease of searching, finding, and manipulating records. ITHAKA deserves credit for making the file so much better, rather than resting on its laurels and the ubiquity JSTOR enjoys among researchers.
As I was working on the Best list, it struck me that the state of e-resources is truly a mixed bag with respect to their discovery, access, and usability. Some of that is because substantive, continuing development of search interfaces is a variable thing—not all companies keep trying to improve their products, as ITHAKA did. Some of it’s because so many different products are being brought to market by so many different entities that it’s hard for librarians, let alone the individual researcher, to keep track of them all. But I think some of it comes down to two things: A) it’s not really possible to “standardize” e-products, because they need to be able to deliver different things differently, depending upon the searcher’s need, and B) it’s not really all that easy to search online successfully and effectively.
Google and Google Scholar are very popular for several reasons, not least of which is that they’re so simple to use: a single search box and immediate results. Possibly 23,000,000 results, but immediate, anyway. Their popularity is proof of just how much (mostly newbie) researchers don’t know about, dislike, or dread using many library-based databases. And I have to admit that back in the early days of electronic resources, I was trepidatious about so-called “end-user searching,” because it seemed obvious that if researchers were doing the online searching themselves, they were likely to experience a great deal of frustration, not getting what they wanted out of the databases. That, of course, was in the dark ages of the 1980s, when we were often searching using commands and tags, but my concerns continue to be borne out today. Undergraduate researchers now look at me like I’ve got two heads if I talk about subject headings or descriptors, unless I can get them to pay attention long enough to see what a difference using those antediluvian information appendages can make to the quality of their search results. I try to do this as fast as possible, since so many students can barely sit still long enough for me to sign into a database. Frankly, I don’t explain what I’m doing much of the time when I’m helping a student researcher, because they don’t want to hear it—they want to see the full text of the perfect article onscreen right now, and if I can’t do that, what good am I to them, anyway?
Then there are the wonderful students who want you to show them exactly what you did to get the results you got out of the database. All goes well until you get to the part that took you 20+ years to learn about how information works (and doesn’t work) and how you have to tease it out of a zillion online items. And trying to explain the bare facts of that would take so long the student would have graduated by the time you finished.
The notion that giving students the ability to search online themselves will make them good researchers is predicated on the flawed premise that they know what they’re doing in an online database, or that they can “pick it up” in a matter of minutes. This idea is a load of rubbish. Post-baby-boom researchers may know how to mark up a web page in HTML within seconds, but they’re not going to grasp the complete underpinnings that govern sophisticated search systems in a trice. It takes extensive online experimentation and education to coax what you really want out of that computer.
The part in the searching equation that hasn’t happened yet, because it is so hard to do, is getting online systems to the point of employing sufficient artificial intelligence to be able to bring into play what takes humans years to learn—a combination of knowledge and technique that encompasses a huge range of subjects and technologies. Discovery systems haven’t gotten us there yet, not by a long shot. And the more I see of current day online technology, the more heartened I am about job security for librarians.
In the meantime, we’re all struggling with how to get these online resources to our users. Given how much they cost, and how much of a chunk of library budgets they account for, it’s a shame (not to say scandal) that more library researchers continue not to know about what we actually have for them to use (or the related problem: that they use these wares a lot but don’t know that the library provides them, ergo, they think they don’t need the library anymore because “they can get it all online”—don’t get me going on this or my blood pressure will soar). As one means of helping to fix that problem, my colleague Marie Kennedy and I have recently finished the book, Marketing Your Library’s Electronic Resources: A How-To-Do-It Manual for Librarians. It’s due out in March 2013, and I hope it’s helpful for ameliorating at least one part of the e-resource problem facing us all.
Meanwhile I’m on the lookout for new databases and re-releases that might help to make “that miracle occur” for every researcher. I’d love to hear candidates if you have any.