Nearly ten years ago, I argued that understanding where information comes from and what economic, social, and cultural forces shape the creation and transmission of information should be part of what we call information literacy. While we could debate whether this means grafting on a strain of media literacy to our pedagogy, or whether what we’re seeing emerge is transliteracy or a metaliteracy, my primary concern was that being able to evaluate sources depends in large part on understanding how various media choose stories to tell, how they validate claims, and how the ways audiences interact with these stories shape what becomes common knowledge.
Unfortunately, we tend to reduce the complexity of these issues to a few golden rules. Use scholarly sources. If you use Wikipedia, use it only as a jumping-off point. At a slightly more sophisticated level, we might use the CRAP test: evaluate each source you rely on for currency, reliability, authority, and point of view. Extra points if you take Marc Meola’s advice and invite students to compare and corroborate sources.
But we rarely talk about how journalists get stories and use sources, how that differs from the way trade publishers decide which books to acquire, and how trade publishers’ decisions differ from the way university presses operate. We describe sources by their external features and rank their quality by type. Even identifying sources by type can be confusing. Students may be told by a history professor that scholarly articles have few pictures and tend to be more than 20 pages long. An hour later, they’re in a biology class, where their teacher would be surprised at the students’ newly minted definition of what proper published research looks like.
We’re instructing students to do what comes easily. If you use books, you’re probably safe if the words “university press” appear on the title page. Limit your database search to scholarly articles. The assumption is that research that looks like scholarship is innately superior to other sorts of research, but we might as well be telling students that storks bring babies. I’d bet on the research that went into a New Yorker piece by John McPhee or Seymour Hersh against a large proportion of scholarly articles any day, and such pieces have the added benefit of being understandable. We aren’t really teaching students to think; we’re teaching them to judge books by their covers. This seems even more superficial now that formats are changing and traditional publishing models seem increasingly unsustainable. This week’s kerfuffle, in which the renowned science publisher Springer published, then seemed to withdraw, a book about intelligent design, illustrates why determining validity using brand names is tricky.
I’m thinking about this because I’m teaching a class that I designed around the idea that students need to know how information works, not just how to work information, like so many levers. As I mentioned last week, we just read and discussed Vannevar Bush’s 1945 essay, “As We May Think,” in which he sketches out a means of organizing and sharing information using “trails of association” rather than indexing. To dabble in the concept of “trails of association,” we used a tool called Nowcomment to share our reactions. Students found it interesting to discuss the work this way, and see one another’s insights as they were reading the essay. We also talked about other ways to create trails of association, by setting up RSS feeds and using social bookmarking tools and citation management software.
But that kind of ongoing interaction with ideas doesn’t really work for undergraduates. Mostly they are asked to find and synthesize information about a topic that’s largely unfamiliar to them. Our library tools are supposed to serve both the researcher who has spent a decade drilling ever deeper into a familiar vein and personally knows most of the living experts on the topic, and the undergraduate who has two weeks to learn enough about a topic they barely understand to make an argument about it. In the end, I’m not sure, even with the newest generation of discovery layers and their filtering options, that we’re really serving either population well.
Is it a UI problem?
A few days ago, Alan Jacobs, a Wheaton College English professor who blogs for The Atlantic, wrote a short piece: “Google-Trained Minds Can’t Deal With Terrible Research Database UI.” He makes the point that Google has some features that work well for him. He searches JSTOR through Google because it automatically corrects his spelling. He complains that a reference librarian told him, when a specific search for a known journal article in a database produced pages and pages of random results, that he had to include the journal’s ISSN; that seems so unlikely, I can only imagine that something was lost in translation. (It also illustrates that using a link resolver’s list of locally accessible journals to locate a known item is fuzzy, even to people whose research skills are assumed to be sophisticated.) “The obvious answer to this problem is to train people to do better searches,” he writes. “But the most obvious answer may not be the best one . . . there’s one vital issue [librarians are] neglecting: research databases have the worst user interfaces in the whole world.”
I don’t disagree with him. Our databases are deeply, tragically lame. But I don’t blame vendors. They have designed databases under our assumption that most information use involves locating sources by subject. Databases ask us to put into words what it is we need, when often we aren’t quite sure—which is why librarians are trained to conduct reference interviews. What they deliver is usually a large number of roughly sorted results, most of which are not actually useful.
The limits of information seeking
In spite of what Jacobs says, students are actually pretty good at finding five sources. The real problem they have is one we don’t address very well in the library or the classroom. What about this issue they are investigating raises intriguing questions? How have other people been approaching it? How do those approaches inform one’s own understanding? Why does it even matter?
Only a small percentage of information use starts with identifying an information need and seeking authoritative sources that will satisfy that need. Much information use involves creating paths for information to flow toward you (as experts do, building networks and following online conversations) or being able to make good judgments about information you encounter. We assume information is sought and that judgments can be made based on visible signals embedded in a source. As the information landscape changes, as definitions of authority and reputation change, as we move into a world where publishing will be fundamentally different, we need to rethink what we talk about when we talk about information literacy.