(This story has been updated to include a link to the ARL’s Code of Best Practices in Fair Use for Academic and Research Libraries mentioned in the story, which was released as scheduled.)
In the good old days, librarians were not quite so preoccupied with intellectual property laws, particularly when it came to unpublished research materials in special collections, which never left the library.
But the large-scale digitization of special collections (and books) creates complex and ambiguous copyright concerns many librarians are ill-equipped to deal with, and there is a corrosive fear that a previously unidentifiable copyright owner will one day emerge, reassert rights to their now widely disseminated “orphan work,” and sue for infringement.
As a result, librarians frequently fail to exercise their fair use rights under the Copyright Act in a robust manner, focusing instead on risk aversion, to the detriment of scholarship and their patrons.
“A lot of institutions are uncomfortable with that risk even though it’s almost minute,” said Dwayne Buttler, a professor with the university libraries at the University of Louisville.
Buttler was part of a panel on Saturday at ALA Midwinter in Dallas discussing orphan works and digitization.
“You really do just have to make judgment calls in fair use,” Buttler said, adding that the best approach is to acknowledge the strengths of a fair use argument, which is what the University of Southern Mississippi did when it created its Civil Rights in Mississippi Digital Archive. The university devised an intellectual property model, including a take-down policy, to deal with contemporary works that may still be under copyright protection.
“They [Mississippi] had to go through and digitize things whose owners were unidentified,” said David Hansen, a digital library fellow at the University of California, Berkeley, School of Law, who sat on the Saturday morning panel.
The university made the “reasonably diligent search,” to use the Copyright Office’s language, for the rights holder, Hansen said, but in the end it came down to risk analysis and accepting “the risk that there might be some lurking rights holder along the way.”
“The idea was that this was a low risk because these were the kind of materials that these authors want to be made available,” said Hansen, who in December published Orphan Works: Definitional Issues, the first white paper from the Berkeley Digital Library Copyright Project.
A lack of consensus about how to apply the fair use provision, and the timidity and confusion that result among academic and research librarians, have been a focus of the Association of Research Libraries (ARL), which will release a code of best practices in fair use on January 26 (a webinar is scheduled as well).
“Academic and research librarians operate in an environment of mild risk coupled with fantastic opportunity,” Brandon Butler, ARL’s director of Public Policy Initiatives, told LJ (Butler was not a part of the ALA panel). “The code will illuminate some of the opportunities made possible by fair use, and helps libraries to put the attendant risks in perspective,” he said.
[UPDATE: The report is available here.]
Assessing and reducing risk in an intelligent fashion was also the driving force behind the intellectual property rights strategy for digitizing modern manuscript collections that panelist Kevin Smith, the scholarly communications officer for Duke University, co-authored last year for the Triangle Research Libraries Network.
This document guided a large-scale digitization project of 40 manuscript collections called “Content, Context, and Capacity: A Collaborative Digitization Project on the Long Civil Rights Movement in North Carolina.”
“The strategy is specifically designed to avoid having to look at every individual work,” Smith said at the panel.
Some rights holders can be identified, curators help get a sense of how much material is in the public domain, and other material has transformative content added to strengthen a fair use argument, Smith said.
“But it’s designed to avoid having to make those granular decisions, because we know we don’t have the time, or the money, or the staff to do that,” he said.
The approach was endorsed on January 20 by the Association of Southeastern Research Libraries (ASERL).
“The TRLN strategy strikes a wise balance between providing better access to a larger number of rare materials and authors’ rights,” Sarah Michalak, the president of ASERL’s board and the associate provost and university librarian for the University of North Carolina at Chapel Hill, said in a statement sent to LJ. “There was a groundswell of support for this approach among ASERL libraries across the region. This endorsement passed unanimously. We felt that strongly about it.”
Best practices and strategies can help keep the fair use doctrine from atrophying, particularly as rhetoric increasingly gravitates toward a compulsory licensing framework for dealing with orphan works in a digital environment.
“I think we need to not be afraid of fair use,” said Mary Minow, a panelist and a library consultant who writes for librarylaw.com. “Fair use is somewhat determined by what is customary, so if people start holding back because they are so risk averse, what is customary becomes much narrower,” she said.
Minow and other panelists said there was a danger in legislating to enable the licensing of orphan works, such as Canada’s system or what has been proposed by the Hargreaves Review in the United Kingdom. Countries with these licensing schemes do not have a fair use provision.
“Once we move into a licensing scheme then there’s a market and that helps to defeat an argument for fair use, so it’s something we have to be very careful of,” Minow said.
The danger licensing schemes pose to fair use may be why the content communities support such an approach, Smith said.
“Not only do they see a potential source of income but they see a way to cut off fair use claims, and libraries and educational institutions depend on fair use claims,” he said. “If we have to sacrifice one or the other we have to sacrifice the licensing scheme rather than let it become a way to undermine fair use.”
Smith said the Authors Guild lawsuit brought against HathiTrust was a stage in a movement by the publishing and rights holder communities to get Congress to address orphan works through a licensing scheme.
As for creating a registry of orphan works, since the law doesn’t impose any affirmative obligations on rights holders (as the Google Books Settlement had envisioned), Smith said the Accessible Registries of Rights Information and Orphan Works towards Europeana (ARROW), used by European national libraries, was a good model.
“It makes a lot of sense, if you are going to go that route, to help build the tools that will help lower the transaction costs of trying to find potential owners,” he said.