Last week, I interviewed OCLC’s Bruce Washburn about an OCLC Research project called oclcBot—a program that takes book records in the Internet Archive’s Open Library and matches their ISBNs with their corresponding OCLC numbers. I recently found out how another massive book site makes use of OCLC records—though in a more low-tech way.
Project Gutenberg is home to some 33,000 public-domain ebooks, and has become a go-to destination for new e-reader owners looking for free reading materials. Librarians have often directed Kindle owners to Project Gutenberg to soften the blow of having no OverDrive ebooks available for the popular device (although Kindle OverDrive ebooks will finally become available later this year). The Colorado Library Consortium (CLiC), in collaboration with other Colorado libraries, created a set of MARC records last December for popular Project Gutenberg content (with direct links to the downloadable ebooks and audiobooks) that libraries could easily put into their own catalogs.
But some librarians may not be aware of the human factor behind all that digitized text. Project Gutenberg texts are often scanned by volunteers and run through optical character recognition (OCR) software. Human proofreaders are an integral part of the process, making countless small corrections to a text before it is posted.
On occasion, just one missing page or ink-smudged passage can become a stumbling block to making a public-domain work available at all. Such problems are crowdsourced on the Distributed Proofreaders wiki, whose “Missing Pages” page—where proofreaders offer up their requests to the community—provides a fascinating glimpse of the huge amount of work that goes into Project Gutenberg’s corpus.
So how do these volunteers use OCLC records? The same way everyone else does: to find specific copies of books. The wiki provides WorldCat links to many “Missing Pages” books to help locate new copies to scan. There may even be a few at your library.