October 24, 2014

Teaching How Information Works, Not How to Work Information | Peer-to-Peer Review

Nearly ten years ago, I argued that understanding where information comes from and what economic, social, and cultural forces shape the creation and transmission of information should be part of what we call information literacy. While we could debate whether this means grafting on a strain of media literacy to our pedagogy, or whether what we’re seeing emerge is transliteracy or a metaliteracy, my primary concern was that being able to evaluate sources depends in large part on understanding how various media choose stories to tell, how they validate claims, and how the ways audiences interact with these stories shape what becomes common knowledge.

Unfortunately, we tend to reduce the complexity of these issues to a few golden rules. Use scholarly sources. If you use Wikipedia, use it only as a jumping-off point. At a slightly more sophisticated level, we might use the CRAP test: evaluate each source you rely on for currency, reliability, authority, and point of view. Extra points if you take Marc Meola’s advice and invite students to compare and corroborate sources.

But we rarely talk about how journalists get stories and use sources, how that’s different from the way trade publishers decide which books to acquire, and how trade publishers’ decisions differ from the way university presses operate. We describe sources by their external features and rank their quality by type. Even identifying sources by type can be confusing. Students may be told by a history professor that scholarly articles have few pictures and tend to be more than 20 pages long. An hour later, they’re in a biology class, where their teacher would be surprised at the students’ newly minted definition of what proper published research looks like.

Superficial selection
We’re instructing students to do what comes easily. If you use books, you’re probably safe if the words “university press” appear on the title page. Limit your database search to scholarly articles. The assumption is that research that looks like scholarship is innately superior to other sorts of research, but we might as well be telling students that storks bring babies. I’d bet on the research that went into a New Yorker piece by John McPhee or Seymour Hersh against a large proportion of scholarly articles any day, and such pieces have the added benefit of being understandable. We aren’t really teaching students to think; we’re teaching them to judge books by their covers. This seems even more superficial now that formats are changing and traditional publishing models seem increasingly unsustainable. This week’s kerfuffle, in which the renowned science publisher Springer published, then seemed to withdraw, a book about intelligent design, illustrates why determining validity by brand name is tricky.

I’m thinking about this because I’m teaching a class that I designed around the idea that students need to know how information works, not just how to work information, like so many levers. As I mentioned last week, we just read and discussed Vannevar Bush’s 1945 essay, “As We May Think,” in which he sketches out a means of organizing and sharing information using “trails of association” rather than indexing. To dabble in the concept of “trails of association,” we used a tool called Nowcomment to share our reactions. Students found it interesting to discuss the work this way, and see one another’s insights as they were reading the essay. We also talked about other ways to create trails of association, by setting up RSS feeds and using social bookmarking tools and citation management software.

But that kind of ongoing interaction with ideas doesn’t really work for undergraduates. Mostly they are asked to find and synthesize information about a topic that’s largely unfamiliar to them. Our library tools are supposed to serve both the researcher who has been drilling deeper into the same vein for a decade and personally knows most of the living experts on the topic, and the undergraduate who has two weeks to learn enough about a topic they barely understand to make an argument about it. In the end, I’m not sure – even with the newest generation of discovery layers and their filtering options – that we’re really serving either population well.

Is it a UI problem?
A few days ago, Alan Jacobs, a Wheaton College English professor who blogs for The Atlantic, wrote a short piece: “Google-Trained Minds Can’t Deal With Terrible Research Database UI.” He makes the point that Google has some features that work well for him. He searches JSTOR through Google because it automatically corrects his spelling. He complains that a reference librarian told him, when a specific search for a known journal article in a database produced pages and pages of random results, that he had to include the journal’s ISSN; that seems so unlikely, I can only imagine that something was lost in translation. (It also illustrates that using a link resolver’s list of locally accessible journals to locate a known item is fuzzy, even to people whose research skills are assumed to be sophisticated.) “The obvious answer to this problem is to train people to do better searches,” he writes. “But the most obvious answer may not be the best one . . . there’s one vital issue [librarians are] neglecting: research databases have the worst user interfaces in the whole world.”

I don’t disagree with him. Our databases are deeply, tragically lame. But I don’t blame vendors. They have designed databases under our assumption that most information use involves locating sources by subject. Databases ask us to put into words what it is we need, when often we aren’t quite sure—which is why librarians are trained to conduct reference interviews. What they deliver is usually a large number of roughly sorted results, most of which are not actually useful.

The limits of information seeking
In spite of what Jacobs says, students are actually pretty good at finding five sources. The real problem they have is one we don’t address very well in the library or the classroom. What about this issue they are investigating raises intriguing questions? How have other people been approaching it? How do those approaches inform one’s own understanding? Why does it even matter?

Only a small percentage of information use starts with identifying an information need and seeking authoritative sources that will satisfy that need. Much information use involves creating paths for information to flow toward you (as experts do, building networks and following online conversations) or being able to make good judgments about information you encounter. We assume information is sought and that judgments can be made based on visible signals embedded in a source. As the information landscape changes, as definitions of authority and reputation change, as we move into a world where publishing will be fundamentally different, we need to rethink what we talk about when we talk about information literacy.

 

About Barbara Fister

Barbara Fister is a librarian at Gustavus Adolphus College, St. Peter, MN, a contributor to ACRLog, and an author of crime fiction. Her latest mystery, Through the Cracks (see review), was published in 2010 by Minotaur Books.


Comments

  1. It is beginning to dismay me how much I am starting to agree with Barbara. On this issue, I have been advocating the same need to help students understand information itself for probably as long as she has (see the introductory chapter to my book, Research Strategies: Finding Your Way through the Information Fog). Most students I work with believe there are two kinds of information – the stuff they find daily through Google, Facebook, and Twitter, and the academic stuff that professors seem to value so much but which isn’t worth the effort to find unless the prof demands it. Most of my students have little idea where information comes from, what practical difference there is between a website and a peer-reviewed journal, and what it takes to evaluate the resources they are finding.

    Where I do disagree is on the so-called lameness of academic databases. OK, some of them are lame (I won’t name them), while others, like EBSCO, are remarkably easy to use with a little training. The fact is that finding information well demands a trade-off: it requires resources that are a step above throwing keywords into a box and hoping for the best. Academic databases have field searching, controlled vocabularies, and so on, which may be mystifying to students but can be taught.

    If you want to see sheer joy on an undergraduate’s face, show them how to use an academic database well and “magically” get a set of 25 articles that are pretty much bang on what they were searching for. We let our students down if we assume that, since all they really know or like is Google, we need to Googleize all our academic search tools. Let’s teach them instead.

  2. Excellent piece, Barbara – I think we are simplifying things because this is perhaps what many users want. For example, an undergrad probably really wants ‘just enough’ to pass their exam, not necessarily to become fully information literate (though in the long run this will obviously benefit them, most probably can’t see it right now – I know I couldn’t back then!), and perhaps we are under pressure (with falling budgets and staffing resources) to deliver on this goal first and foremost? Give the students what they want, not necessarily what they need? It is ‘wrong’ obviously and far from ideal, but how do we solve this, I wonder, in the face of the reality of increasing student numbers and decreasing resources? I wish I knew the answer :)