First of all, would you pay for a Google (or any other brand) search?
I can’t say I would. If anything, I’m thinking Google should be the one paying for the wealth of data they coax out of me directly and through every interaction I have with their kudzu-like vine network of interrelated products — email, docs, browser habits via Chrome, maps, book search, bookstore, RSS feed reader/social network, etc. (same goes for most other search providers). As Jaron Lanier told librarians at the recent ACRL conference, “users are not the customers—they’re the product.”
But Kevin Kelly, on his blog The Technium, teases out an interesting — if unlikely — future scenario, in which users are asked to pay per search to get at the wealth of web data to which they’ve become accustomed. What would they pay? What’s the reasonable search price on “the royal wedding guest list”?
Kelly’s scenario may be unlikely, but his post is based on the results of some interesting research into relative search measures and merits, research that also points to some interesting questions about how to measure the value of library materials, discovery services, and even library services more broadly:
Last year three researchers at the University of Michigan performed a small experiment to see if they could ascertain how much ordinary people might pay for search. Their method was to ask students inside a well-stocked university library to answer questions asked on Google, but to find the answers only using the materials in the library. They measured how long it took the students to answer a question in the stacks. On average it took 22 minutes. That’s 15 minutes longer than the 7 minutes it took to answer the same question, on average, using Google. Figuring a national average wage of $22/hour, this works out to a savings of $1.37 per search. [Note: that last calculation is Kelly's, not the Michigan researchers'.]
I don’t buy that $1.37/search figure for a minute, the same way I don’t buy the results of most public library circulation value calculators — you’re putting a very precise number on what is at best the result of a series of loose approximations. To calculate anything further with those values just compounds the error.
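For what it’s worth, the back-of-envelope arithmetic is simple enough to sketch. This is a minimal sketch assuming the figure is meant to be time saved multiplied by hourly wage; the 22- and 7-minute timings and the $22/hour wage come from the quoted passage, while the formula itself is an assumption about how the number was derived. Notably, that naive formula yields $5.50, not $1.37, which is one more reason to treat such point estimates loosely:

```python
# Back-of-envelope value-per-search estimate, assuming the figure is
# derived as (time saved per search) x (hourly wage). Timings and wage
# are from the passage quoted above; the formula is an assumption.

def value_per_search(library_minutes, google_minutes, hourly_wage):
    """Dollar value of the time saved by one web search vs. a stacks search."""
    minutes_saved = library_minutes - google_minutes
    return hourly_wage * minutes_saved / 60.0

estimate = value_per_search(library_minutes=22, google_minutes=7, hourly_wage=22.0)
print(f"${estimate:.2f} saved per search")  # -> $5.50 saved per search
```

The point isn’t which number is right; it’s that the output carries false precision, since every input is itself a loose average.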
The study’s methods are, however, an interesting way to compare the efficacy of different processes. In the paper Kelly cites, the researchers also attempt to assign values to the reliability of resources consulted; combining the time and quality values may make for a useful aggregate value proxy that would allow comparison across other library resources and research avenues.
What I’d like to see — and ACRL folks, this could make for a great poster or paper — is a similar comparison of brute web search, search using library databases via the library homepage, and how those two methods stack up against the new class of aggregate discovery engines.
There’s a lot of anecdotal evidence that students are finding many more results through these services, and ideally more relevant results. But are those results leading to more efficient research, raising the mean number of relevant sources and articles consulted?
If any schools out there are already tackling this kind of comparison, I’d love to hear about it.