#Citylis student, librarian, comics reader and otaku.
Currently studying for an MA in Library Science at City, University of London.
I have a background in academic libraries. I joined Croydon College Library as a Learning Resources Assistant in 2001 and held many different titles until I was made redundant in late 2014. After a brief stint as a Library Assistant at King's College London, I am now working as Information Resources Administrator for St Mary's University.
I have a BA in History and Classics, am ACLIP certified, am experienced in cataloguing and classification (MARC21 and DDC), and am also a qualified teacher in the FE sector (Cert Ed).
I have a background in working with eResources and library systems, and am interested in these areas as well as in the impact of technology on learning and libraries.
In my spare time I'm holding down the demanding job of father to two children, Emily and Ruby, but when I do get free time I like reading manga (Naruto, Bleach, Dragon Ball and One Punch Man, to name a few) and watching anime. I love Star Trek and Star Wars, although I decline to choose one over the other.
You can find me on Twitter: @tashtom
Despite having completed my #citylis Masters dissertation on the subject, it seems I haven't yet got rid of the copyright bug, and so I found myself booking onto CILIP's Copyright Conference, which took place on 27th April. The event was hosted in the rather cool-looking Cavendish Conference Centre, which featured a stylish glow-in-the-dark effect and provided guests with their own water bottle and bowl of sweets.
The conference featured a wide variety of talks focussing on many different aspects of copyright and licensing, and included several five-minute 'lightning talks' to help break the day up. The day was hosted and MC'd by Naomi Korn, whom I already knew through Twitter and as the author of a report into orphan works and risk management, but had never had the fortune to meet until now.
In welcoming us to the event, Naomi used a photo she had taken of herself with a friend at the recent unveiling of the Dame Millicent Fawcett statue by Gillian Wearing in Parliament Square to examine the many copyright issues surrounding rights ownership, ethics, licensing and the use of copyright exceptions. (You can read about it in Naomi's blog post.) It was a really good example, and amazing to think how many different issues can be contained in that one simple photo if you really follow it through.
The opening keynote, Open: Unlocking Value, was from Josie Fraser (@josiefraser) and looked at the value that open licensing, such as the use of Creative Commons licensing, can provide to organisations and society as a whole. In addition to talking about Creative Commons, she also discussed the growth of Wikipedia as a tool for open knowledge, and talked about open data.
An interesting question she posed was whether data is the new oil. She examined the many similarities, but noted that unlike oil, data is not a finite resource, and that the value of data is gained by combining it with other data to provide new solutions and new uses, whereas adding oil to oil just gives you more oil!
Is data the new oil? Slide from presentation by Josie Fraser
After sharing some great examples of what can be accomplished using open data and open licensing, such as the Citymapper app and the Hero Arm from Open Bionics, she ended by highlighting the benefits of openness and suggesting some questions that could be asked to help foster openness in organisations.
Sticking with the subject of openness, Chris Banks (@ChrisBanks) gave an overview of the development of the UK Scholarly Communications Licence (UKSCL). She explained that there is currently a myriad of often contradictory open access policies, set out by different stakeholders from funders to publishers and research institutions, which makes it difficult to determine which open access deposit option is optimal for an output.
The UKSCL has been developed in partnership with publishers and funders, and is based on a model developed by Harvard University. It is intended to simplify the many funder and publisher OA (open access) provisions by establishing a standard model license, to be adopted by institutions, that would allow depositors to retain re-use rights to their research (via a CC BY-NC license) whilst remaining compliant with policies for the REF.
It allows authors to retain the right to make their author accepted manuscript (the final draft of an article which has been accepted for publication by a journal, incorporating any suggested changes from peer review, but prior to copyediting, typesetting and proof correction by the publisher) publicly available for non-commercial use (CC BY-NC 4.0) from the point of publication.
This benefits authors who choose to (or whose only option is to) self-archive. It enables immediate compliance with funder OA requirements and helps to ensure outputs are eligible for the REF (the scheme used to assess the quality of research in UK higher education institutions). It also allows authors greater retention of the rights to their work, whilst still allowing them freedom of choice over where to publish (including in non-OA journals).
Banks explained that work on developing the UKSCL is continuing, and is currently focussed on working with funders such as UKRI to examine how best to utilise further funding for open access. There is also a focus on engaging with academics around self-archiving, self-deposit and open access, as well as continued discussion with publishers and stakeholders such as learned societies.
Corinna Reicher gave some top tips on clearing rights in moving images, which included studying the credits for details of copyright and making use of national film archives for help in tracing rights owners. Carrying on with the theme of rights clearance, Ben White, Head of IP at the British Library, gave another lightning talk on rights clearance in libraries and cultural heritage organisations. Drawing on the experience of the British Library's digitisation projects, he highlighted the difficulties that arise in clearing rights in collections such as the BL's archival sounds project, where clearing one item (out of 100,000) took an average of 4 hours' work.
#CILIPCopy18 5 minutes about rights clearance from Benjamin White, British Library. Average rights clearance takes 4 hours per item. BL project digitising sound archives, inc wildlife, music, oral history = 100,000 items online. Individual rights clearance required in most cases.
Following the end of the morning’s talks, there was a panel discussion with the speakers taking questions from the floor, followed by a break for lunch.
After lunch the presentations continued with an interesting talk by Ruth Morris from law firm Freshfields Bruckhaus Deringer, which gave us an overview of the key contract and license management issues in a law library, mentioning that even users of the law library are often unaware of the restrictions around reuse of material. She gave us an insight into the types of information requests that the library deals with, and spoke about the range of licenses (such as CLA and NLA) the library requires.
Contract and licensing management issues in a law library – key issues
She then talked about types of commercial licenses the library holds with vendors of online legal and business resources and spoke about the key contract clauses that were required. These can cover a range of issues from common clauses relating to interruptions in the services to specific terms governing what happens to the agreements in the event of a merger or access by outsourced staff.
Ruth also spoke about what she saw as the key trends in license agreements, saying that there was a shift away from single-user licenses for resources and that vendors were increasingly looking for new ways to limit the ability to share content, through what she called enhanced copyright policies. It was definitely an interesting insight into the various issues involved in licensing content, and some of it was familiar to me, having worked with eResource licenses in the past.
Other presentations of interest included Debbie McDonnell's talk on her efforts to increase awareness of copyright among staff at the British Council. She showed us some of the materials that were created to raise awareness, which led to 77% of staff stating they were confident in their understanding of copyright (up from 65% in 2012). Strategies included a Copyright Month.
Wrapping up the day's presentations was Alex Fenlon, Head of Copyright and Licensing, Library Services, University of Birmingham. His talk gave us an insight into the kinds of activities and duties that take up most of his time, as opposed to what he would ideally want to focus on as Head of Copyright and Licensing. In particular, he said that he currently has to spend a lot of time in discussions about the licensing of resources for students who will be based at their new Dubai campus. Other major demands included preparing for the (then) incoming GDPR legislation and discussions around the possible implementation of the UKSCL. He also provided some useful tips around advocacy, such as making it relevant using current examples of copyright cases, and had the curious honour of being the only person of the day to mention Naruto the monkey.
Once again it was another informative talk that rounded off the day nicely. Overall I really enjoyed the day; there were lots of great and informative talks, and even orphan works got a few mentions. I expect to attend next year's conference, and several others like it, in my new role as Assistant Librarian, Subscriptions and Licensing at University of the Arts London, which I started today!
It’s hard to believe that it’s been five months since I last wrote a blog post! When I last posted I was still in the thick of writing my dissertation, and my previous post spun out of that. But thankfully it’s all over and done with now, the deadline was early January and while there were days when I thought it would never be finished, I got it all done.
And was it worth all the stress and anxiety? Hell yes!
I'm pleased to say that, despite the times when I struggled with thoughts of self-doubt and "I don't know what the hell I'm doing", particularly around the data analysis, it paid off in the end: in March I got my overall grade for the Masters and, as those who follow me on Twitter will have seen, I am now a Master of Library Science with Distinction!
And if you want to read said dissertation, I've made it available online via the Citylis group on Humanities Commons, an open access repository run by the Office of Scholarly Communication at the Modern Language Association. [More about Humanities Commons] This means it's freely available to anyone who wishes to read it.
Where are all the orphans? How effective is current legislation in enabling cultural heritage institutions to make orphan works available online?
Be sure to check it out if you're interested, and be sure to look at the other dissertations and papers on there by other #citylis alumni and staff; it's full of interesting stuff. Here is the group link.
So after roughly two and a half years, my #Citylis journey is at an end. It's been a long journey; when I started #citylis I was just coming out of a really nasty episode of depression, which really knocked me for six. But while it was a horrid experience, it ultimately paved the way to #citylis and beyond, and while one door may be closing, others will no doubt be opening. And the thing is, #citylis is such a great place that you never really leave, just ask Ludi, Debbie, Shohana and Oz. A bit like Hotel California, as Lyn likes to say.
Now that's not to say I'm embarking on a PhD, not yet anyway, but you never know. However, I'll still do my best to come and support #citylis at events and so forth. (As I did when I recently attended the launch of the 2nd edition of David Haynes's book on metadata, which saw David giving a talk covering metadata, information management and privacy.)
I also returned to City recently for an evening called Databeers London, which, while not a #citylis event, could easily have been. The event featured six speakers giving short data-themed talks using the PechaKucha method (20 slides, with one slide every 20 seconds). It was a great event, with a turnout of around 300 people, some really great talks and free beer (hence the name Databeers):
The speakers were:
Harrison Pim (British Museum): Bookworm: Social Networks From Novels
Ellie King (Governmental Digital Services): Using convolutional neural networks to classify content on GOV.UK
Pablo Aragon (Pompeu Fabra University): Data science in the era of fake news
Merve Alanyali (Warwick University): I know what you ate last summer: What can we infer from Instagram pictures?
Bruce Pannaman (busuu – a social network for learning languages): Connecting our users across language barriers – the journey from user to friend.
Finally, my interest in copyright led me to sign up for the CILIP Copyright Conference, which took place yesterday in one of the most luxurious conference centres I've ever been in (everyone got their own bowl of sweets and the desks glow in the dark!). It was a really great day with lots of interesting speakers, and a few familiar faces from my time researching the dissertation, so look out for a brief write-up on here soon.
I started this post many months ago but never got round to finishing it. Inspired by a recent post on fair use by fellow #citylis student @olivianesbitt, I thought I'd dust off this post and finish it with a sprinkling of orphan works for good measure.
When the European Commission published its i2010 digital libraries initiative, announcing its intention of creating a single European digital library providing online access to European cultural heritage, it was clearly operating under a desire to avoid the legal wrangling faced by Google over its Google Print Library Project (Google Books). As Durantaye notes:
In trying to resolve the problems surrounding orphan works, the European Commission has closely followed the procedures that the United States has used to evaluate the extent of, and possible solutions for, the orphan works problem. (Durantaye 2010:169)
She later refers to a speech by former Commissioner for Information Society and Media Viviane Reding evoking Google Books "in order to induce member States to 'act swiftly' and to find 'pro-competitive European solutions on books digitisation'" (Durantaye 2010).
Conceived by Larry Page and Sergey Brin in 1999, the Google Books project was intended to use a "web crawler" to index and analyse the content of vast collections of digitised books. In 2004 Google announced the Google Print Library Project, a partnership with several large public and research libraries, with the aim of scanning and digitising 15 million books from their collections. The goal was to create a vast database from the collections of major research libraries that enabled users to search for words or phrases and read snippets of text.
This effort, however, fell foul of various rightsholder organisations, and in 2005 the Authors' Guild and the Association of American Publishers sued Google over the project, arguing it infringed their rights. Google argued that their use amounted to fair use, especially since they were only displaying snippets of each book, and that converting texts into a database amounted to transformative use (a key provision of the fair use argument). This argument was not wholly convincing to many, as this New Yorker article illustrates.
In 2008 a settlement was reached in which Google agreed to compensate rights holders whose works were scanned, in return the represented parties agreed not to sue Google.
The settlement would have represented a form of ECL (extended collective licensing) agreement, in that the represented rights holders agreed to Google's continued use of their works in return for compensation and Google agreeing to install terminals in libraries on which the works could be viewed. The settlement was not universally accepted, with critics arguing that it would give Google a monopoly on online out-of-print books, and it was rejected in 2011 by the district court.
After the case proceeded on its merits, Judge Denny Chin ruled that the project qualified as fair use under US copyright law, a decision upheld by the Second Circuit. A subsequent appeal by the Authors' Guild was rejected by the Supreme Court in 2016, finally laying the case to rest.
The crux of Google's defense centered on the notion of fair use, a "judicially developed limitation on the scope of copyright, now codified in 17 U.S.C. §107" (Zimmerman 2015). In considering whether Google's use could be considered fair use, the court considered the four factors set out in 17 U.S.C. §107:
The purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
The nature of the copyrighted work;
The amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
The effect of the use upon the potential market for or value of the copyrighted work.
Zimmerman (2015) describes each of the four factors as representing:
a continuum, tilting more or less strongly for or against a finding of fair use. Factor Four, concerning the economic impact of the use, and particularly whether it represents a substitute for the original work, is often said to be the most important consideration.
Fair use is, as Meyer (2015) says, a "distinctly American concept". In the UK a less flexible doctrine of fair dealing exists, which allows a user to "make use of a limited, moderate amount of someone else's work" (Intellectual Property Office 2014a) for the purposes of:
Research and study
Criticism or review
Reporting of current events
Parody, caricature and pastiche
Whereas fair use is codified in US copyright law, fair dealing has no statutory definition, meaning, as the UK Intellectual Property Office (2014b) says:
it will always be a matter of fact, degree and impression in each case. The question to be asked is: how would a fair-minded and honest person have dealt with the work?
Unlike fair dealing, the uses that can be considered fair use are far wider. In an article for the Harvard Law Review, Judge Leval (1990) sets out the uses of a work that could be considered transformative use, writing:
Transformative uses may include criticizing the quoted work, exposing the character of the original author, proving a fact, or summarizing an idea argued in the original in order to defend or rebut it. They also may include parody, symbolism, aesthetic declarations, and innumerable other uses.
In his deliberation on the Google Books case, Judge Leval found that Google's use was indeed fair and transformative, saying:
Google’s unauthorized digitizing of copyright-protected works, creation of a search functionality, and display of snippets from those works are non-infringing fair uses. The purpose of the copying is highly transformative, the public display of text is limited, and the revelations do not provide a significant market substitute for the protected aspects of the originals. Google’s commercial nature and profit motivation do not justify denial of fair use.
"If, instead of making digital copies, Google had employed six million elves to scour the text of the books and furnish information [to users] there would've been no question of infringement. The mere fact that Google supplies that information through a digital copy, as opposed to employing six million elves, does not convert the lawful provision of information into infringement." (Albanese 2017)
And while authors' groups may have been disappointed with the ruling, for librarians and other groups seeking to rely on fair use, the judgment provided a greater sense of how it operates.
Returning to the issue of Orphan Works, a report for the US Copyright Office (Hansen 2015) states that while the Office believes:
“fair use and orphan works liability limitation provisions should coexist in the statute. In practice, however, the use of most orphan works is one in which the would-be user believes it is necessary to seek permission or a license, to either ensure peace of mind, avoid unpredictability, or, more likely, to avoid exposure to liability”
It goes on to state that, while some groups like the Library Copyright Alliance had argued against the need for orphan works legislation, based upon the fair use outcomes in the Google Books and HathiTrust cases:
"several stakeholders from the library, archives, and museum communities prefer orphan works legislation to an exclusive reliance on fair use." (Hansen 2015:43)
When it comes to orphan works and fair use, Hansen alludes to the constantly evolving nature of fair use as a reason not to rely on it, saying:
[The] Office does not believe that reliance on judicial trends, which may turn at any point, is a sufficient basis to forgo a permanent legislative solution.
Ultimately, it seems there is a trade-off between the US and the UK in terms of copyright exceptions: the US has the flexibility of the fair use doctrine as a defense against copyright infringement, but no orphan works legislation, while the UK has the more limited fair dealing, but benefits from two, albeit flawed, orphan works solutions (the EU exception and the IPO Orphan Works Licensing Scheme).
The tool creates an archive of all tweets using a chosen keyword or #hashtag in Google Sheets, which you can then explore interactively using TAGSExplorer. This produces an interactive network-graph-style visualisation, allowing you to explore Twitter activity around your chosen hashtag or term.
You can also explore the #docperform hashtag using the TAGSExplorer
You can also explore the full archive via the TAGSArchive
In addition to this, you can also view the Storify of the event created by @lynrobinson, which I have linked to here.
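To give a rough sense of what TAGSExplorer is doing under the hood, here is a minimal sketch, using made-up tweets rather than real #docperform data, of turning an archived sheet of tweets into a weighted mention-network edge list, which is essentially the graph the visualisation draws:

```python
from collections import Counter

# Hypothetical sample of archived tweets: (author, text) pairs,
# standing in for rows pulled from the Google Sheets archive.
tweets = [
    ("alice", "Great talk at #docperform today @bob @carol"),
    ("bob",   "@alice agreed! #docperform"),
    ("carol", "Slides from my #docperform talk are up"),
]

def mention_edges(tweets):
    """Yield (author, mentioned_user) edges, one per @mention in a tweet."""
    for author, text in tweets:
        for word in text.split():
            if word.startswith("@"):
                yield author, word.lstrip("@").rstrip(".,!?:;").lower()

# Counting repeated edges gives the weights a graph tool would draw
# as thicker lines between frequently interacting accounts.
edges = Counter(mention_edges(tweets))
for (src, dst), weight in edges.items():
    print(f"{src} -> {dst} (x{weight})")
```

TAGSExplorer itself reads the sheet directly and adds replies and retweets as further edge types, but the core idea is the same: every @mention becomes a directed edge between two accounts.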
Documenting Performance (DocPerform) is a project organised and run by the Department of Library & Information Science (CityLIS) at City, University of London. The 2017 symposium was held on 6th November 2017 and aimed to bring together a multidisciplinary group of people with a shared interest in understanding and developing the ways in which performance, as part of our cultural heritage, can be created, recorded, preserved, re-experienced and reused. The symposium was intended to collate a representative body of work in this area, to foster new partnerships and collaborations, and to support the dissemination and implementation of ideas generated.
In celebration of the start of this year's #citylis, I thought it might be fun to write a very brief who's who guide to some of the main figures in library and information science. It is by no means definitive, so feel free to add your own thoughts in the comments below if you feel someone is missing.
Ashurbanipal, King of Assyria (668-around 630 BC)
The last great king of the Neo-Assyrian empire, at Nineveh near Mosul in Iraq. Ashurbanipal was responsible for assembling the Royal Library of Ashurbanipal, thought to be the oldest surviving royal library in the world. The library consisted of 30,000 cuneiform tablets and writing boards on a range of subjects, including historical inscriptions, letters, and administrative and legal texts, alongside thousands of divinatory, magical, medical, literary and lexical texts.
The fragmented remains were discovered in the 1850s and are now kept in the British Museum. In 2002 the Ashurbanipal Library Project was set up between the museum and the University of Mosul, in Iraq, with the aim of cataloguing and digitizing the library to make it available to new and future generations.
Sir Thomas Bodley (2 March 1545 – 28 January 1613)
Founder of Oxford's famous Bodleian Library. After a career as an Oxford academic, Member of Parliament and diplomat for Queen Elizabeth I, Bodley set about restoring the library known as Duke Humfrey's, which had fallen into disrepair. The restored library reopened in 1602 containing some 2,000 volumes, including works in Hebrew, Turkish, Arabic, Persian and Chinese. Today the Bodleian is one of Europe's oldest libraries and also functions as one of the six legal deposit libraries, alongside the British Library, the National Library of Scotland, the National Library of Wales, Cambridge University Library and the Library of Trinity College, Dublin.
Suzanne Briet (1 February 1894 – 13 February 1989)
Known as "Madame Documentation", Renée-Marie-Hélène-Suzanne Briet was born in the Ardennes but grew up in Paris. She began her career in librarianship at the Bibliothèque Nationale de France in 1924 and would go on to shape both the field of librarianship and that of documentation. At the BNF, Briet was responsible for establishing the Office of Documentation. Alongside the chemist Jean Gérard, she co-founded the Union Française des Organismes de Documentation (UFOD) in 1931, the French equivalent of ASLIB or the American Documentation Institute. Briet went on to influence the development of library education in her role as director of the Institut National des Techniques de la Documentation, one of France's oldest library schools.
In 1951 Briet published her treatise on documentation, Qu'est-ce que la documentation?, a text of great significance that considers documents not as material objects but as "evidence in support of a fact". Her expanded definition of documentation marked a departure from previous definitions, asking the question:
"Is a star a document? Is a pebble rolled by a torrent a document? Is a living animal a document? No. But the photographs and the catalogues of stars, the stones in a museum of mineralogy, and the animals that are cataloged and shown in a zoo, are documents."
In 1997, Michael Buckland’s What is a Document? revived interest in Briet’s concept of Documentation and led to a renewed interest in the study of Documentation, providing a foundation for modern debates about the nature of documents.
Richard de Bury (1287 – 1345)
Born at Bury St Edmunds, Richard de Bury was a Benedictine monk who studied at Oxford and became tutor to the Prince of Wales, the future Edward III. De Bury was a skilled diplomat and administrator, serving as keeper of the privy seal, chancellor and treasurer of the exchequer. One of the first English book collectors, he founded a library at Durham, searching far and wide for books and manuscripts. Prior to his death in 1345, de Bury wrote his Philobiblon, a collection of essays concerning the acquisition, preservation and organization of books, in which he describes 'his means and method' of collecting books.
Melvil Dewey (10 December 1851 – 26 December 1931)
Melville Louis Kossuth (Melvil) Dewey, called "the father of modern librarianship", invented the Dewey Decimal Classification (DDC) system and helped establish the American Library Association (ALA). At the age of 21, whilst working on the reclassification of the library of Amherst College, Dewey devised a system of decimal numbers on top of a knowledge structure originally outlined by Francis Bacon. The system, outlined in A Classification and Subject Index for Cataloguing and Arranging the Books and Pamphlets of a Library, became the Dewey Decimal Classification system, which he copyrighted in 1876.
Having helped establish the ALA that same year, he served as its secretary from 1876 to 1890 and then as president for the 1890/1891 and 1892/1893 terms. Alongside R.R. Bowker and Frederick Leypoldt, he became co-founder and editor of the Library Journal. Following his appointment as librarian of Columbia College in 1883, Dewey founded the first ever library school, the School of Library Economy, which opened in 1887 with a cohort of 20 students, mostly women at Dewey's insistence.
Following his move to the New York State Library in Albany, the school was re-established under his direction as the New York State Library School. As director of the New York State Library (1889 to 1906) and secretary of the University of the State of New York (until 1900), he reorganized the state library into one of the most efficient in the United States. He was also responsible for establishing a system of traveling libraries and picture collections. Dewey died of a stroke on 26th December 1931, at the age of 80.
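The elegance of Dewey's decimal notation is that the hierarchy is encoded in the number itself: each extra digit narrows the subject. A toy sketch in Python (the class captions below are abbreviated paraphrases for illustration, not the official schedules):

```python
# A tiny, illustrative slice of DDC class captions (paraphrased, not official).
ddc = {
    "000": "Computer science, information & general works",
    "020": "Library & information sciences",
    "025": "Operations of libraries & archives",
    "025.4": "Subject analysis & control",
    "025.43": "General classification systems",
}

def broader_classes(number):
    """List a DDC number and its broader classes, most specific first."""
    path = [number]
    n = number
    # Strip decimal digits one at a time: 025.43 -> 025.4 -> 025
    while "." in n:
        n = n[:-1].rstrip(".")
        path.append(n)
    # Zero out trailing digits of the three-digit class: 025 -> 020 -> 000
    for i in (2, 1):
        if n[i] != "0":
            n = n[:i] + "0" * (3 - i)
            path.append(n)
    return path

for cls in broader_classes("025.43"):
    print(cls, "-", ddc[cls])
```

Reading the path upwards, a shelf number like 025.43 locates a book under classification systems, within subject analysis, within library operations, within the library and information sciences main class.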
Luciano Floridi (born 1964)
Luciano Floridi is currently Professor of Philosophy and Ethics of Information at the University of Oxford and Director of the Digital Ethics Lab at the Oxford Internet Institute. Floridi's main areas of research are information and computer ethics (digital ethics), the philosophy of information, and the philosophy of technology. His current work includes the lifelong project Principia Philosophiae Informationis, the information tetralogy.
Floridi's work in the areas of the philosophy of information and digital ethics is extensive; he has published more than 150 papers on these subjects.
The central premise of Floridi's philosophy of information is that:
Semantic Information is well formed, meaningful and truthful data. Knowledge is relevant semantic information properly accounted for: humans are the only known semantic engines and conscious inforgs (informational organisms) in the universe who can develop a growing knowledge of reality and the totality of information.(note the crucial absence of semantic)
Floridi also argues that we are moving into a Fourth Revolution, following the Copernican, Darwinian and Freudian revolutions. In the Fourth Revolution, information becomes our environment, the 'infosphere'. Floridi argues that, following the Fourth Revolution, we are becoming interconnected inforgs amongst other inforgs; our online personalities and personas begin to bleed into our 'real lives', leading to a phenomenon known as onlife. Floridi's work confronts the philosophical, ethical and moral issues of this new reality in which we find ourselves, what Floridi deems the 'information ontology', including the ethics of information, onlife and, particularly, artificial intelligence.
Conrad Gesner (1516 – 1565)
Conrad Gesner was a Swiss physician and naturalist born in Zurich in 1516. As a child he demonstrated an aptitude for both Greek and Latin, and at school in Strasbourg he studied classical languages and theology. In 1533 he was given a scholarship to study medicine at the University of Bourges, France.
In 1537 he produced his first Greek–Latin dictionary, and in 1545 he published his Bibliotheca universalis, a bibliography of 1,800 authors listed alphabetically, accompanied by annotations and listings of each author's works. The work, the first of its kind, took him four years to complete and earned him the name "the father of bibliography".
Between 1551 and 1558, Gesner produced his greatest zoological work, the Historiae Animalium, a four-volume bibliography of writings on natural history combined with encyclopaedic descriptions of every known animal. A fifth volume, covering snakes and scorpions, was published after his death, in 1587. The book was illustrated with some 1,200 hand-drawn woodcuts. Gesner's unique method of arranging his notes involved cutting them into slips and arranging them as desired. Gesner's other works included studies of plants and his final book, De Omni Rerum Fossilium (A Book on Fossil Objects, Chiefly Stones and Gems, their Shapes and Appearances), in which he stressed the importance of the form of an object to its classification. He died of plague in 1565, having published 72 books and written 18 more unpublished manuscripts.
Johannes Gutenberg (born 14th century, Mainz; died probably 3 February 1468)
Johann Gensfleisch zur Laden zum Gutenberg, son of an upper-class merchant, was born in Mainz, Germany, and devised the printing press that precipitated the "Printing Revolution" in Europe. Specifically, it was Gutenberg's method of printing with movable type that would usher in the development of printed books in the West, influencing the Reformation, the Renaissance and libraries. Although little is known of his life, around 1428/1430 he is thought to have moved to Strassburg (modern Strasbourg, France) following a dispute between guilds. With Strasbourg at war, Gutenberg is thought to have returned to Mainz around 1448.
Between 1450 and 1453 he entered into business with Johann Fust, who helped him to purchase the tools and materials he needed. However, by 1452 Gutenberg was heavily in debt to Fust and unable to repay the loan. The two men entered into a new agreement which made Fust a partner in Gutenberg's business, but by 1455 Gutenberg was once again unable to pay.
Fust sued, successfully winning ownership of Gutenberg's business, including his press and his masterpiece, the "Forty-Two-Line" Bible, which Gutenberg had first managed to print at some point during the course of the trial. With his son-in-law, Peter Schoeffer, joining him in the newly acquired business, Fust went on to produce the first ever book to bear the name of its printers, the Psalter (Book of Psalms). The Mainz Psalter was printed with two-colour capitals, using a method of woodblocks and multiple inking likely pioneered by Gutenberg and put into practice by Fust and Schoeffer.
In 1462 Fust and Schoeffer's business was destroyed in the sack of Mainz. Gutenberg remained in the city and continued printing, although, since he did not put his name to his output, little is known about what he printed. He died in February 1468 and was buried in the church of the Franciscan convent in Eltville, Germany.
Gottfried Wilhelm Leibniz (1 July 1646 – 14 November 1716)
Born in Leipzig, Saxony, during the Thirty Years' War, Leibniz was a philosopher and polymath. Thanks to his father's extensive library of Greek and Latin texts he was able to read by the age of four, and by the age of eight he had taught himself Latin. By 1662 he had already completed a bachelor's degree in philosophy at the University of Leipzig. He served as librarian to the Dukes of Brunswick (the House of Guelph) at the Leineschloss palace, and in 1691 he was appointed librarian of the Herzog August Library at Wolfenbüttel, which contained some 100,000 volumes and which Leibniz helped to design.
Leibniz's first discussion of the ordering of books in a library appeared in his Nouveaux Essais sur l'Entendement humain, a rebuttal of John Locke's An Essay Concerning Human Understanding, written between 1703 and 1705 but not published until 1765.
Swedish botanist and the father of taxonomy, Carl Linnaeus was born in 1707 in Råshult, Sweden, the eldest of five children. At an early age he was taught the names of every plant by his father Nils, a keen gardener who took his son into the garden whenever he could. By 1728, having spent a year studying medicine at the University of Lund, Linnaeus transferred to Uppsala University. Whilst there he wrote a thesis, Praeludia Sponsaliorum Plantarum, on the classification of plants based on their sexual parts. The thesis caught the attention of Professor Olof Rudbeck and led him to ask Linnaeus to become a lecturer in botany.
Between 1732 and 1735 Linnaeus travelled throughout Sweden, including to Lapland, where he hoped to learn all he could about the country's flora, fauna and natural resources. During his travels he used his binomial system of nomenclature to describe his findings and discovered great quantities of the twinflower Campanula serpyllifolia, later known as Linnaea borealis. His Flora Lapponica described 534 species using his Linnaean classification and taxonomy. In 1735 he published his Systema Naturae, in which he first established the three kingdoms that are still used today, Animal, Vegetable and Mineral, or Regnum Animale, Regnum Vegetabile and Regnum Lapideum. Alongside his Species Plantarum, the book remains the basis used by scientists for naming animals and plants respectively.
Born in Paris in 1600, Naudé was well educated and an avid reader of both classical and modern authors. Having attended several colleges and received the title of Master of Arts, he enrolled in the University of Paris to study medicine. Despite his medical training Naudé would never practise medicine; instead he was offered the position of librarian to President Henri de Mesmes. Whilst working for de Mesmes, whose library contained some 8,000 printed books, Naudé wrote his famous Advis pour dresser une bibliothèque, considered the first modern treatise on librarianship. Addressed to his patron, the Advis consisted of nine chapters dealing with the selection, acquisition and arrangement of books under subject headings that included "Theologie, Physick, Iurisprudence, Mathematicks, and Humanity". Naudé used his Advis to advocate his vision of a universal library open to the public.

Following his time in the Bibliotheque Memmiana, Naudé returned to his medical studies before being asked to join Cardinal Bagni, the Vatican ambassador in Paris, when the Cardinal returned to Italy in 1629. Naudé returned to Paris in 1642, and the following year he entered the service of France's first minister, Cardinal Mazarin, once again in the role of librarian. In Mazarin's service Naudé sought to establish France's first public library and would spend the next ten years devoted to the creation and development of his universal library in the shape of the Bibliothèque Mazarine in Paris.
Belgian bibliographer, lawyer and entrepreneur Paul Marie Ghislain Otlet was another figure said to be the 'father of information science' and 'father of the internet'. Born in Brussels, Belgium, in 1868, he trained as a lawyer, completing his law degree at the Free University of Brussels in 1890. That same year, whilst working as an intern at the offices of Edmond Picard, he met fellow lawyer Henri La Fontaine, who shared Otlet's interest in bibliography.
Otlet and La Fontaine soon became good friends, and in 1892 they formed the International Institute of Social Bibliography and began a bibliographic survey of sociological literature that would last the next three years. In 1895 they established the Institut International de Bibliographie and turned their focus to the cataloguing of published information across all subjects. Together they created their Universal Bibliography, a card catalogue comprising over 400,000 entries recorded on index cards, each assigned a class number based initially on the Dewey Decimal Classification and later on their own UDC.
Otlet and La Fontaine initially decided to use a translated version of the Dewey Decimal Classification, with the agreement of Melvil Dewey; in the process they developed and adapted it to their needs, creating a classification scheme they named the Universal Decimal Classification. Like Dewey, UDC divided all knowledge into ten main categories, each of which could be further subdivided into any number of subcategories. Where the two diverged was in their notation: while Dewey used the decimal point from which it took its name, UDC used a range of symbols, such as the plus and equals signs, the colon and parentheses, to allow a much expanded range of relationships between concepts.
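To make the notational difference concrete, here is a minimal sketch of how UDC-style compound numbers are built. The connector symbols (colon for a relation, parentheses for a place facet) follow UDC's conventions, but the example class numbers below are illustrative assumptions rather than entries checked against the published tables:

```python
# Illustrative sketch of UDC-style compound notation (not an official tool).
# UDC links simple class numbers with connector symbols: the colon expresses
# a relation between two subjects, while parentheses attach a place facet.

def relate(a: str, b: str) -> str:
    """Join two class numbers with the UDC relation connector ':'."""
    return f"{a}:{b}"

def with_place(number: str, place_code: str) -> str:
    """Qualify a class number with a common auxiliary of place in parentheses."""
    return f"{number}({place_code})"

# A relation between two subjects (placeholder numbers):
print(relate("02", "004"))      # -> 02:004
# A subject qualified by a place code (placeholder numbers):
print(with_place("94", "410"))  # -> 94(410)
```

Nothing like this is possible with a plain Dewey number, which expresses only a single position in one hierarchy; the connectors are what let UDC record relationships between concepts.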
They published the first complete edition of the UDC in 1905 in the form of the Manuel du Répertoire Bibliographique Universel (Handbook of the Universal Bibliographic Repertory), a work of over 2,000 pages containing elaborate and extensive subject arrays illustrated by extended classification tables, auxiliary tables and a guide to the scheme's use in creating catalogues and indexes. The arrival of the First World War forced Otlet and La Fontaine into exile, the former travelling to the Netherlands, Switzerland and finally France, while La Fontaine journeyed to London and then the United States. Both were committed to peace, as reflected in their writings: Otlet penned his Traité de paix générale (Treatise on General Peace, 1914) and Les problèmes internationaux de la guerre (International Problems of War, 1916), whilst La Fontaine published The Great Solution: Magnissima Charta (1916) in the United States, where he was involved in the pacifist movement.
In 1910, having visited the universal exposition in Belgium, Otlet conceived the idea for the Palais Mondial or World Palace, which would act as an international centre for knowledge and peace. At its centre would be the Mundaneum, a universal network of all the world's knowledge, containing his Universal Bibliography. In what has been described as an 'analogue internet', Otlet envisioned a network of "electric telescopes", dubbed the 'réseau', connected to the Mundaneum, through which users could request documents from the great libraries, which would then be projected into a telegraph room. Following the end of the war, the Belgian government proved receptive to the idea and provided Otlet and La Fontaine with space in the left wing of the Palais du Cinquantenaire, a government building in Brussels, which opened in 1921. The following year it was briefly shut due to lack of support from the government, but reopened after lobbying from Otlet and La Fontaine. In 1924 Otlet renamed the Palais Mondial the Mundaneum, and the Universal Bibliographic Repertory continued to expand, taking in all forms of document including letters, reports, newspaper articles and images.
By 1934 the Belgian government had again lost interest in funding the Mundaneum and its offices were closed, despite the protests of Otlet. The collection remained in situ, but inaccessible, and Otlet returned to his writing, producing in 1934 his Traité de documentation, still considered a key text in the sphere of documentation. The following year he published Monde: Essai d'universalisme (1935), which described his vision for a worldwide information network that foreshadowed the internet. In 1940 Germany invaded Belgium and the Palais du Cinquantenaire was taken over to house a collection of Third Reich artwork, destroying much of the Mundaneum in the process. Otlet salvaged the remains and moved them to Parc Léopold, the dilapidated building in which the collection remained until its rediscovery by a young researcher named Boyd Rayward in 1968.
Otlet died in December 1944; however, the Mundaneum continues today as a private museum and archive centre, with a mission to conserve, preserve and showcase, within its space of temporary exhibitions, the archives and collections bequeathed by its founders: nearly 6 km of documents and 12 million index cards of the Universal Bibliographic Repertory!
In many of his ideas Otlet was ahead of his time: the semantic relationships that UDC allows have been compared by many to the RDF triples data model that underlies the semantic web, and his thoughts on a network of information centres and the transmission of documents anticipated the Internet several decades before Tim Berners-Lee first proposed his vision of hypertext.
Widely regarded as the 'Father of the Information Age', Claude Elwood Shannon was born in Petoskey, Michigan. After obtaining bachelor's degrees in mathematics and electrical engineering from the University of Michigan, Shannon began his graduate studies in electrical engineering at MIT in 1936. His familiarity with Boolean algebra allowed him to design electrical switching circuits based on Boolean logic. His master's thesis on the subject, A Symbolic Analysis of Relay and Switching Circuits, was described by Howard Gardner as "possibly the most important master's thesis ever written", whilst his later paper, A Mathematical Theory of Communication (1948), has been called "the Magna Carta of the information age."
Whilst at Bell Labs he worked closely with Alan Turing, who had been seconded to Washington in 1943 to aid the Allies' decryption efforts. In 1949 his previously classified wartime work on cryptography was published as "Communication Theory of Secrecy Systems" in the Bell System Technical Journal. Shannon's landmark theory stated that all communications could be thought of as the same regardless of the medium. Noise poses a risk to all messages regardless of the channel, and so Shannon declared that the key to ensuring accurate delivery of any message was the information contained in the message, rather than the meaning of the message itself.
Shannon stated that all communication systems can be broken down into the same essential components: information source, transmitter, channel, noise source, receiver and destination. From there he was able to determine that the encoding of the message by the transmitter was the key to ensuring the accuracy of the message and overcoming noise.
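The source–transmitter–channel–receiver pipeline can be sketched with a toy example (my own illustration, not a construction from Shannon's paper): a three-fold repetition code is one of the simplest transmitter-side encodings, letting the receiver vote out occasional noise in the channel:

```python
import random

def encode(bits):
    """Transmitter: repeat each bit three times (a simple repetition code)."""
    return [b for bit in bits for b in (bit, bit, bit)]

def noisy_channel(bits, flip_prob, rng):
    """Channel with a noise source: each bit flips with probability flip_prob."""
    return [b ^ 1 if rng.random() < flip_prob else b for b in bits]

def decode(bits):
    """Receiver: take a majority vote over each group of three bits."""
    return [1 if sum(bits[i:i + 3]) >= 2 else 0 for i in range(0, len(bits), 3)]

message = [1, 0, 1, 1, 0, 0, 1, 0]          # information source
received = decode(noisy_channel(encode(message), 0.05, random.Random(42)))
print(received == message)                   # the encoding has overcome the noise
```

The redundancy added by the transmitter is what lets the receiver reconstruct the source message despite the noise source, which is exactly the role Shannon assigned to encoding in his model.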
Building on the premise of information as a measure of "surprise, or the amount of uncertainty we can overcome", he used the example of a coin toss to illustrate his point, asserting that a fair coin toss, with an equal chance of landing heads or tails, contains one bit of information. Shannon argued that the messages we send are like weighted coin tosses: they aren't merely a random assemblage of characters but follow implicit rules that make them more predictable. Using this knowledge, exemplified by the rule that certain characters are usually followed by others (for example, the letter "Q" is most commonly followed by a "U" or an "E"), he was able to show that the information value of English characters, which he called H, could be less than one bit per character. He expressed this in his now-famous entropy equation.
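The equation in question is Shannon's entropy formula, H = -Σ pᵢ log₂ pᵢ, measured in bits. A quick check (my own illustration) confirms the coin-toss cases described above:

```python
import math

def entropy(probabilities):
    """Shannon entropy H = -sum(p * log2(p)), measured in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))   # fair coin: exactly 1.0 bit
print(entropy([0.9, 0.1]))   # weighted coin: less than 1 bit
```

The weighted coin, like English text, is more predictable than a fair coin, so each outcome carries less than one bit of information.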
In 1956 he joined MIT's Research Laboratory of Electronics, where he served on the faculty until 1978. Outside of work Shannon dabbled in robotics and computing: he invented a juggling robot, a flame-throwing trumpet, an electronic maze-solving mouse called Theseus, and a Roman-numeral arithmetic machine called THROBAC I (THrifty ROman-numeral BAckward-looking Computer). One of his most interesting devices was the "Ultimate Machine", a featureless box with a single switch on the front; when the switch was flipped, the lid of the box would open and a mechanical hand would reach out, flip off the switch, then retract back inside the box.
In 1973 the Institute of Electrical and Electronics Engineers bestowed on him the first ever Shannon Award. In later life Shannon was diagnosed with Alzheimer's and spent his last years in a nursing home. He died in 2001. His legacy lives on in his information theory, which formed the basis of modern computing, the internet and everything that followed.
Welcome to part 3 of my #citylis dissertation blog, discussing the impact of EU and UK legislation on the use of orphan works by UK cultural heritage institutions. Part 1 is here, part 2 here.
To recap, orphan works are works for which the rights holder (author, producer) is unknown or cannot be located. They pose a major barrier to cultural heritage institutions wishing to make their collections available online, because copyright law reserves the acts of reproduction (for example, scanning) and making available (such as through a website) as the exclusive rights of the author or creator. If cultural heritage institutions were to make such works available without permission they would be infringing copyright, with the potential for financial and reputational damage should a reappearing rights holder bring a claim.
To resolve this issue legislators around the world have adopted different solutions ranging from national licensing schemes, exceptions to copyright, to proposals for limiting the amount of damages paid in infringement cases.
Section 77 of the Canadian Copyright Act allows anyone seeking to use an in-copyright work whose rightsholder they are unable to locate to apply for a licence from the Copyright Board of Canada. The Board evaluates whether the efforts made to locate the rightsholder are sufficient and may then grant a licence. Licences are non-exclusive but permit certain uses, including reproduction, publication, performance and distribution.
Licensees are required to pay royalties to the collective societies, to be held as compensation for a reappearing rights owner. Collective societies were originally required to hold the royalties for up to five years after the expiry of the licence, after which they were entitled to 'dispose of the royalties as it sees fit for the general benefit of its members'. However, this practice was abandoned and the collective societies were able:
to use the unlocatable owners’ royalties as they saw fit from the outset, as long as the collective undertook to compensate the owner if necessary. (De Beer & Bouchard 2009)
The Canadian legislation appears to have had a limited effect: to date, fewer than 300 licences have been issued (Copyright Board of Canada). The Report of the Register of Copyrights (2015) notes that several studies have drawn attention to low usage and flaws in the Canadian system.
German legislation on orphan and out-of-commerce works was passed on 1 October 2013 and entered into force on 1 January 2014. The amendments to the Copyright Act represented Germany's implementation of the EU Orphan Works Directive, permitting the digitisation and making available to the public, under certain conditions, of qualifying works from the collections of publicly accessible libraries, educational institutions, museums and archives. (VGWort.de)
The legislation establishes a presumption that a collecting society administering the rights in such works is also entitled to do so for the works of non-members provided that the usage is non-commercial, the works in question are recorded in the Register of Out-of-Commerce Works maintained by the German Patent and Trademark Office and the rightsholder has not objected within six weeks of registration.
The Hungarian Copyright Act (HCA) tackles orphan works in three distinct sections. The HCA was amended in 2003 by Act CII to include a free-use provision that permits libraries, archives and other educational institutions to provide limited onsite access to works in their collections, including orphan works, via dedicated terminals for educational and scholarly research purposes.
Specific legislation dealing with orphan works came into effect on 1 February 2009. The orphan-works provisions of the HCA allow the Hungarian Intellectual Property Office (HIPO) to grant licences for both commercial and non-commercial uses of orphan works. Applicants must complete a documented diligent search and pay compensation for their use.
Article 67 of the Japanese Copyright Law allows users who have been unable to locate or identify the rightsholder of a work after due diligence to apply for a compulsory license. Applicants must deposit compensation for reappearing rightsholders, the sum of which must correspond to the normal royalty rate and is determined in conjunction with the Culture Council by the Agency of Cultural Affairs. Compulsory licensing is only available for works that have been:
made public or those for which it is clear that they have been offered to or made available to the public for a considerable period of time. (United States Copyright Office 2015)
Under Japanese legislation it is possible to obtain a compulsory licence for the works of a foreign author as long as the work will continue to be exploited within Japan. The terms and conditions for a diligent search for foreign works are the same as those that apply to domestic works. (Favale et al. 2013)
Per Article 50 of the Korean Copyright Act, users may apply to the Minister of Culture, Sports and Tourism for a compulsory licence to allow use of certain types of orphan works. Applicants must demonstrate that they have made "considerable efforts" to identify the rightsholder or the rightsholder's place of residence, and compensation must be paid at market rates, as determined by the Korea Copyright Commission.
The Swiss Copyright Act contains provisions on orphan works which are limited to sound and audio-visual recordings. Art. 22b URG/CopA authorises users to seek authorisation for the exploitation of works from the licensed collective management organisations if the rightsholder cannot be contacted, is unknown or cannot be located. (Suisa Blog)
The Netherlands implemented the Orphan Works Directive into their national Copyright Act ('Auteurswet') in 2014 with the law entitled Wet van 8 oktober 2014 tot wijziging van de Auteurswet en de Wet op de naburige rechten in verband met de implementatie van de Richtlijn nr. 2012/28/EU inzake bepaalde toegestane gebruikswijzen van verweesde werken (Act of 8 October 2014 amending the Copyright Act and the Related Rights Act in connection with the implementation of Directive 2012/28/EU on certain permitted uses of orphan works). This amended the Dutch Copyright Act and Neighbouring Rights Act (Favale et al 2016). Prior to this the Dutch had no specific orphan works legislation, relying instead on contractual agreements between heritage institutions and rightsholder organisations. (KEA 2011)
A 2011 study by the IViR (Institute for Information Law of the University of Amsterdam) proposed what it considered two viable solutions to improve rights clearance: 1) a compulsory collective licensing model, or 2) an extended collective licensing model. The report concluded that, in order to satisfy the need of rights holders to exercise their rights, certain restrictions would be required, such as limiting licences to cultural heritage institutions with a public mission. To ensure film producers do not suffer unfair competition from CHIs in the exploitation of their digital rights, it suggested the option of granting licences only for audio-visual heritage material older than ten years.
In the United States, orphan works legislation was first introduced in the Shawn Bentley Orphan Works Act of 2008, but the bill never made it into law before Congress adjourned. The bill:
would have limited remedies where the infringer had performed and documented a good faith reasonably diligent search before using the work; the infringing use of the work provided attribution to the copyright owner, if known; and the infringing user included an appropriate symbol or notice in association with any public distribution, display, or use of the work. (United States Copyright Office 2015)
The report by the Register of Copyrights proposed an ECL (Extended Collective Licensing) system as the “best answer to solving the mass licensing that is inherent to mass digitization.” The report cites the voluntary agreement between parties in the Google books settlement as evidence that with government support such a system could be made to work:
we believe that with government support and oversight to ensure that any legislation is developed transparently and in a way to benefit a wide array of stakeholders equally, ECL can be successful here. (United States Copyright Office 2015)
The report examines the application of fair-use as an alternative to legislation, noting that representatives of libraries and other groups had argued that legislation on mass digitization was unnecessary since the courts were capable of using fair use doctrine to evaluate projects on a case-by-case basis. However, the report argues that reliance on fair use:
can only go so far in enabling the development of mass digitization [and therefore] should Congress wish to encourage or facilitate mass digitization projects providing substantial access to the expressive contents of copyrighted works, it would need to look beyond fair use to a licensing model, either voluntary or statutory. (United States Copyright Office 2015)
In their comparative study of orphan works legislation, Favale et al (2013) analysed the proposed US legislation and claimed that the US approach focused on limiting liability for users of orphan works "in order to maximise the public access to these works and to foster the diffusion of public digital libraries." They argue that this reflects the market-driven approach to copyright in America, stating that for this reason:
collective management of rights (either “extended” or not) do not find a viable place among the proposed solutions to the orphan works problem in the US.
However, as we have seen above, the most recent approach of the US Copyright Office reconsiders a collective rights management approach. In its comment on the proposals for an ECL system, the Internet Archive criticised the US Copyright Office for basing its approach too heavily on Google Books, arguing that such a project was a unique occurrence and would most likely not be repeated. It argued that an ECL system as proposed would be unsuited to the current decentralised approach to digitisation in the United States, and instead proposed strengthening the notice-and-takedown systems already in use by many digitisation projects, including its own.
What is ECL?
ECL (Extended Collective Licensing) is a form of collective rights management widely used in the Nordic countries of Denmark, Sweden, Finland and Norway. Under a collective licence, an agreement is negotiated between a user and a representative organisation for the use of works in a particular category. This agreement is given extended effect by also covering works in the same category belonging to non-members, known as "outsiders". The CMO is responsible for locating rightsholders and distributing remuneration.
France's 2012 law on out-of-commerce books made it possible for books published in France prior to 1 January 2001 to be digitised if they were no longer commercially available from the publisher and not available in printed or digital form. This was achieved by granting the collecting society SOFIA the right to authorise the digital reproduction or representation of such books six months after their entry into the ReLire database of out-of-commerce works, maintained by the Bibliothèque nationale de France. As Franck Macrez notes, orphan works would be included, since they would meet the definition of an "unavailable book". Under the Act the collecting society would be required to take appropriate measures to locate rights holders and distribute the royalties received, and to take steps to preserve the interests of rights holders not party to the publishing contract.
The act raised concerns for turning copyright on its head by requiring authors to opt out, but it would have improved the accessibility of unavailable works in cultural heritage institutions, particularly as the collecting society would have had the right to grant publicly accessible libraries remuneration-free permission for non-commercial use of an unavailable work if no rightsholder was traced within ten years of the date of the first authorisation to exploit it. Reappearing rights holders would be able to put an end to such exploitation at any time if they believed that the digital reproduction caused harm to their reputation or honour.
In 2013 the decree implementing the law was challenged by the writers Sara Doke and Marc Soulier (also known as the French science fiction writer Yal Ayerdhal), who sought to have it annulled by the Conseil d'État on the grounds that the amendments in the decree represented a limitation of their exclusive right of reproduction provided by Article 2(a) of Directive 2001/29 (the InfoSoc Directive), a limitation not included in the list of exceptions to this right in Article 5 of the same Directive. The case was referred to the Conseil Constitutionnel, which was asked to rule on the law's constitutionality. The Council found that the law did not prevent the exploitation of works in forms other than digital, and that the plaintiffs' right to private property, granted under Article 17 of the French Declaration of the Rights of Man and of the Citizen (1789), had not been breached.
Following this, the Conseil d'État sought a ruling from the CJEU on the question of whether the decree was incompatible with Articles 2 and 5 of the InfoSoc Directive. In his preliminary opinion, Advocate General Wathelet found that the French law was not compatible with the InfoSoc Directive.
The AG found that the French legislation was incompatible on the following grounds:
The decree does not allow the author to provide prior express consent for the reproduction and communication to the public of their works, described by the AG as the "cornerstone of the author's exercise of his exclusive right" (Case C‑301/15 n61). Tacit or implied consent deprives the author of their essential intellectual property rights.
This is not altered by the fact that the works are distributed non-commercially, or that the author has the option to oppose the use, withdraw the exercise of his rights from the CMO and receive remuneration.
On the question of the influence of the 2011 Memorandum of Understanding on the digitisation and making available of out-of-commerce works (signed by three European library associations, rightsholder organisations, journalists, publishers, authors and artists), the AG argued that the agreement could not alter the scope of the exclusive rights set out in Articles 2(a) and 3(1) of the InfoSoc Directive, since the MoU is not a legally binding instrument but a means of ensuring the legal certainty of voluntary agreements negotiated between users, rightsholders and collective rights management organisations.
While the decision of the CJEU and the AG did not specifically address the issue of Orphan Works it has implications for the digitisation and making available of orphan works as Andree (2016) argues:
Does that mean that orphan works can never be placed in such a database, since it is impossible to secure the consent of their unknown right holder, or, to the contrary, should Soulier be interpreted as meaning that only orphan books can be placed into the database, since it is not possible to secure the prior consent of their authors, and they are thus out of the scope of the Soulier decision?
Much of the Soulier and Doke case hinged on the issue of the author's prior consent to the use of their work. As Valérie-Laure Benabou (2017) writes in a blog post (in French), the judges stated that:
…the author be “effectively informed of the future use of his work by a third party and the means at his disposal with a view to prohibiting it if he so wishes”. The absence of such effective information deprives the author of any possibility of “taking a stand” on future use, which makes consent purely hypothetical
Sganga (2017) argues that the decision could have potentially far-reaching consequences, since its insistence that the author provide express consent for any exercise of exclusive rights would mean:
outlawing the very basic principle on which ECLs operate, that is the possibility to extend the collective licence between CMOs and users to all the works belonging to a definite category, regardless of whether or not the author is a member of the organization.
She notes that CJEU recognised the societal value of exploiting out-of-print books but does not indicate to Member States a solution that would be compliant with EU law. She proceeds to argue that the decision creates more ambiguity and uncertainty around the functioning of Collective Licensing Agreements:
In censoring a commercially oriented law to protect authors without engaging in a more detailed analysis, the CJEU has created further uncertainties, and opened the gate for a potential flow of complaints against national collective management schemes.
However, Benabou (2017) suggests that there is no obstacle in principle to a ‘collective management mechanism’, whereby non-response from an author is considered consent, despite the fact that it appears to be in conflict with the right to prior authorisation. She writes that:
in order not to undermine the very principle of prior authorisation, it is necessary for the author to be individually informed about the terms of the intended use, so that his silence can be regarded as a genuine implicit endorsement of that use. Thus, there is no obstacle in principle to a compulsory collective management mechanism of an exclusive right “by default”, provided that each author has been able to understand the implications of his inaction after being individually notified, which makes the consent purely hypothetical.
Territoriality and the challenge of cross-border dissemination
One of the main challenges for CHIs in disseminating their digital collections is the principle of territoriality, a principle of public international law that limits the extent of protections and the exercise of rights to the borders of a sovereign state. Despite the EU's efforts to harmonise aspects of copyright through Directive 2001/29/EC (the InfoSoc Directive), copyright in European countries is still based upon the principle of territoriality.
Accordingly, there are two legal rules that can be applied to determine the applicable law for the cross-border dissemination of works in an online environment: the principle of lex loci protectionis and the principle of country of reception. The first, derived from Article 5 of the Berne Convention, stipulates that the law of the state where the work is made available applies. The principle of the country of reception applies the legislation of the state or country where the work is accessed. In practice, this means that in order to avoid infringement, CHIs wishing to disseminate their works must obtain a licence from rights holders for each territory. (Axhamn & Guibault 2011; Anderstotter 2017)
The Orphan Works License scheme is limited by the principle of territoriality, in that the licence granted is only valid for use in the UK, as the IPO’s guidance states:
An orphan works licence will cover the lawful use of the work in the UK only. It is the responsibility of the organisation or person using the orphan work to ensure that they comply with the law of any other jurisdictions where they may wish to use the work.
The Orphan Works Directive overcomes this by ensuring mutual recognition of a work’s orphan status in all member states.
The main drawback of an exception such as the one implemented by the EU Orphan Works Directive is the need to clear works on an individual level, by means of a diligent search. This proves time- and resource-intensive for organisations; however, once diligence has been completed, users have a greater degree of certainty that the work is orphaned. The alternative is an ECL (Extended Collective Licensing) solution, which forgoes diligence in favour of a payment to a Collective Management Organisation (CMO) that takes on responsibility for tracing rightsholders and redistributing fees. As Guibault (2015) writes, the cumbersome and limited nature of the Orphan Works Directive has led many member states to consider ECL:
Compared to standard collective rights management, the “extension” of agreements to non-members of a CMO significantly facilitates the licensing process to the benefit of rights owners and users alike: even if not all rights owners are identified, license agreements can still be concluded and remuneration paid, allowing the use to take place under specific conditions.
The main drawback to ECL as an approach is that many of the works licensed under these schemes are restricted geographically to their country of origin, meaning only users residing within the national territory can access them. For example, access to the Norwegian Digital Library (bokhylla.no) is restricted to users based in Norway (although it states that users “without Norwegian IP addresses can apply for access for specific purposes, primarily research, education and professional translation” for an initial period of six months; see here for details).
Ringnalda (2011) lists this amongst other issues which he argues need to be resolved before ECL can be an effective solution to mass rights clearance. On this point he argues that if all countries implemented or adopted ECL, it would be possible to set up organisations to manage multi-territorial licensing, with responsibility for arranging and administering extended collective licences in all Member States (so-called ‘one-stop shops’). Axhamn & Guibault (2011) recommend the promotion of multi-territory licensing modelled on the IFPI Simulcasting Agreement.
Compatibility of ECL with International Law
Ringnalda’s third condition is that ECL should conform to international and European copyright law. On this point, he notes that in principle this should not be an issue, given that ECL is recognised within Directive 2001/29/EC. On the question of whether the requirements for opting out would conflict with the prohibition of formalities in the Berne Convention, he argues that it may be possible to distinguish between formalities related to the automatic grant of copyright and a formality around the enjoyment of the exclusive nature of copyright. This, he argues, would depend on the form taken by an opt-out mechanism, citing the difference between a simple notification via email and the need to provide more detailed information.
Axhamn and Guibault (2011a) examine ECL in the context of the Berne Convention’s three-step test, which requires that all limitations and exceptions introduced by member states should be confined to “certain special cases” (i) that “do not conflict with a normal exploitation of a work” (ii) and “do not unreasonably prejudice the legitimate interests of the author” (iii). (Knights 2001) Since the extended effect of an ECL agreement acts as a limitation on the exclusive rights of an author, there is a risk of it coming into conflict with the three-step test. Their opinion, however, is that the characteristics of the ECL statutory provisions, namely freely negotiated agreements between the parties, the principles of equal treatment and the right to (separate) remuneration, and the option for rightsholders to opt out, decrease the likelihood of ECL agreements conflicting with the three-step test.
Since most ‘outsiders’, usually foreign authors, are guaranteed remuneration at the same rate as members of the CMO, and in most cases also have the choice to opt out of any collective management agreement in which their works are included, ECL is considered to comply with the principle of equal treatment. (Axhamn and Guibault, 2011a)
Compatibility with EC/EU Law
Article 5 of Directive 2001/29/EC contains an ‘exhaustive enumeration of exceptions and limitations to the reproduction right and the right of communication to the public’ (Council Directive 2001/29/EC Recital 32). With the exception of one, the limitations listed in Article 5 are optional, allowing member states to choose which ones to implement in their national legislation. All limitations implemented must comply with the text of the Directive and are applied in accordance with the three-step test.
However, Recital 18 of Directive 2001/29/EC states that:
This Directive is without prejudice to the arrangements in the Member States concerning the management of rights such as extended collective licences.
This implies that ECL agreements are not regarded as exceptions or limitations within the meaning of the Directive and are thus not enumerated under Article 5 (Rosén 2011).
Drawbacks of ECL
As several authors, including Ringnalda (2011) and Axhamn & Guibault (2011a), note, one of the main drawbacks of ECL as a solution is the cost of licensing. Ringnalda cites the cost per book negotiated for Norwegian digitisation projects as averaging €13, a rate that is clearly not scalable or feasible for entire collections. He explains the reasoning for this as being due to market forces:
The essence of this issue is that extended collective management is not a policy instrument. It is not intended to be used by governments to control copyright and achieve certain policy goals. It is merely a facility to support the market, to lower transaction costs, and to resolve market failure.
Ringnalda (2011) also cites the high cost of administering a system of extended collective licensing as an issue, noting that this applies in particular to the costs incurred by CMOs in locating rightsholders and paying them the licensing fees they are due. Essentially, the advantage of ECL over the Orphan Works Directive is that it transfers the burden of locating rightsholders to the CMO in exchange for a fixed-rate licensing fee. In some instances this may be preferable.
Anderstotter (2017) carried out comparative research into the legislative frameworks for clearing orphan works in the UK and Sweden. As part of her research she conducted interviews with Victoria Stobo and Nick Poole, and both were asked to give their thoughts on ECL as a means of outsourcing rights clearance. Both commented on the cost, with Poole noting he had:
’yet to see a collective licensing scheme that is cheaper than it is to simply deal with the risk.’
He continues by further highlighting pricing as a major barrier to the success of collective licensing, maintaining that:
’the collective licensing always falls down on price. (…) It costs human and organisational resources even if the licence cost is very low. (…) If [a work] has become orphaned it is highly unlikely the cost of collective licensing scheme is going to be proportionate.’
Per Anderstotter (2017), Stobo describes ECL as a ‘bureaucratic nightmare’, claiming:
’It’s not well-used, it’s expensive in terms of the time it takes to go through the applications process, and you can’t use it for mass-digitisation. It could perhaps be used for high-risk orphans.’
Stobo also raises the fact that the UK does not have a strong tradition of collective management in most sectors, a point echoed by Ringnalda (2011), who notes that ECL is popular in the Nordic countries, where the benefits of such a system are widely understood. He believes that in countries where ECL is less established, there may not be CMOs sufficiently representative to issue licences.
On this basis it appears that ECL is not a viable alternative to the problem of mass digitisation for cultural heritage, although it may have a limited use for certain categories of Orphan Works. There is nothing in the Orphan Works Directive to prevent the use of the two solutions side by side, since the directive states it is without prejudice to collective management agreements. However, this still leaves both CMOs and CHIs facing high costs for rights clearance. A solution therefore remains elusive.
During August I launched a survey of UK GLAM institutions asking them about their experiences of the UK licensing scheme and the EU exception. From this I hope to get an understanding of how institutions are using the licence scheme / exception and, if not, what their reasons for not doing so are. I am also planning on conducting some brief interviews with some of the respondents. The survey has been created in Google Forms and distributed through Twitter, email and various Jisc Mail lists. The questions can be read here and the actual survey can be accessed via this link, just in case you’re reading and are able to contribute 😉
I will post an update once I’ve analysed the responses in detail, but in the meantime here’s a sneak preview.
Graphs showing the responses to questions 2 & 4 of my orphan works survey
Anderstotter, K. (2016) A Single-Minded Market for Digital Assets? [Master’s Thesis] King’s College London
Knights, Roger (2001) Limitations and Exceptions under the “Three-Step-Test” and in National Legislation–Differences between the Analog and Digital Environments. Regional Workshop on Copyright and Related Rights In The Information Age organized by the World Intellectual Property Organization (WIPO) in cooperation with the Russian Agency for Patents and Trademarks (Rospatent) Moscow, May 22 to 24, 2001 http://www.wipo.int/edocs/mdocs/copyright/en/wipo_cr_mow_01/wipo_cr_mow_01_2.pdf
Opinion of Advocate General Melchior Wathelet in Marc Soulier and Sara Doke v Ministre de la Culture et de la Communication and Premier Minister, C-301/15, EU:C:2016:536.
Ringnalda, A. (2011) Orphan Works, Mass Rights Clearance, and Online Libraries: The Flaws of the Draft Orphan Works Directive and Extended Collective Licensing as a Solution. Medien und Recht International , Vol. 8 pp. 3-10. Available at: https://ssrn.com/abstract=2369974
In my last post I wrote a brief introduction to my #citylis dissertation, a study of the effectiveness of current Orphan Works legislation. To recap, Orphan Works are copyrighted material for which the copyright holder is unknown or unable to be located. The collections of cultural heritage institutions such as libraries and archives hold a significant amount of such material. As they are unable to contact the rights holder to gain permission for using this material, institutions have chosen not to digitize these works. This has led to what is known as the 20th century black hole.
My dissertation seeks to determine how effective the legislative solutions to Orphan Works have been in enabling cultural heritage institutions to digitize and make available online the parts of their collections that contain Orphan Works.
1) Why is it important?
Without an effective solution, much of our global cultural heritage will remain hidden and inaccessible to anyone unable to visit the institutions containing so many of these valuable works. A Collections Trust report (2009) claimed that:
The huge scale and significant impact of Orphan Works, conservatively estimated to be some 25 million items across public sector organisations, has led to a ‘locking up’ of content with little or no prospect of these items ever making a meaningful contribution to a knowledge economy without potentially complex and costly ‘due diligence’ processes.
Baker (2016) claims that the inability to clear the copyright of works whose authors could not be located, has hindered various mass digitization projects including Europeana and Google Books. This is echoed by Van Gompel and Hugenholtz (2010) who state that:
By impeding the clearance of copyright and related rights, the orphan works problem may frustrate entire reutilization projects and prevent culturally- or scientifically-valuable content being used as building blocks for new works
David R Hansen (2017) believes that it is a problem that is worth solving claiming that:
As long as it remains unsolved, a significant fraction of our culture will be hidden or suppressed. The problem is not that we can’t put our hands on the works themselves. Most are sitting on library shelves, just as, conversely, most works sitting on library shelves may be orphans. The problem is that we are not legally allowed to make them more accessible and usable, even when the rightsholders would welcome us to do so.
2) We don’t really know how big a problem it is, but it’s big:
In a 2010 study on orphan works and the costs of rights clearance Vuopala argues that:
The first and fundamental challenge when dealing with the issue of orphan works is to quantify the extent of the problem, i.e. to establish reliable figures on the amount of orphan works within collections of cultural institutions in Europe.
Various studies give estimates of the proportion of Orphan Works in the collections of cultural heritage institutions. A 2011 rights clearance exercise for the British Library found that 43% of a sample of 140 books published between 1874 and 2010 were Orphan Works. A 2009 Collections Trust study, meanwhile, reported that ‘the average proportion of Orphan Works in a collection overall was measured at 5% to 10%’, whilst certain sectors, such as libraries and archives, were likely to have much higher proportions. Based upon these estimates they calculated that the UK museums sector was likely to include holdings of over 25 million Orphan Works. Furthermore, 26 of the 503 respondents to the survey reported that their collections contained over 1 million Orphan Works.
When these vast collections are taken into account, analysis to calculate the volume of Orphan Works represented in this survey alone starts to become mind-boggling. Individual estimates suggest that there are single organisations in the survey sample that hold in excess of 7.5 million Orphan Works. (Jisc 2009)
Overall they estimated that 503 UK institutions could hold in excess of 50 million orphan works. Vuopala (2010) argues that:
It is hard to establish reliable figures on the amount of orphan works, because at the moment there is no easy way to establish that a work is orphan. Hence, very little systematic research has been done and hardly any empirical data has been available about problems related to orphan works.
3) Why do works become Orphaned?
There are several reasons why works become orphaned including:
The work has insufficient information to identify the creator or copyright holder.
The original copyright holder can no longer be located at the original address and there are no records of a new address
Copyright has been assigned to a new owner
The copyright holder has died or the business has ceased to exist
The copyright owner does not realize that they benefit from copyright
The long duration of copyright in unpublished text-based works in the UK
A major reason why there are so many Orphan Works is that there is no register of copyrights requiring creators to register their works. This is because copyright is automatic: the Berne Convention states that there should be no formalities in the granting of copyright. Article 5(2) of the Berne Convention states that:
The enjoyment and the exercise of these rights shall not be subject to any formality; such enjoyment and such exercise shall be independent of the existence of protection in the country of origin of the work.
Greenburg (2012) claims the rise in the number of Orphan Works has resulted from the:
shift away from formalities coupled with the explosion of instant authorship and the expansion in scope and duration of copyright,
As Baker (2017) notes, the Orphan Works problem has led to calls for a system of copyright registration to be reintroduced (until 1923, copyrighted works had to be registered with the Stationers’ Company); however, such a suggestion would be impractical in the current age of digital and social media. Any return to a system of copyright registration would create a situation whereby creators such as bloggers and photographers would need to apply for copyright protection for their posts on a daily basis. As Greenburg (2012) argues, given the costs involved in formal registration with the Copyright Office, a creator such as a blogger would be unable to determine at the point of creation which of their works were likely to be commercially successful. Consequently, he argues, they would likely not register any works.
Furthermore, in order to be of value to potential users of Orphan Works, any such register would need to be updated to record any transfer of intellectual property rights. A voluntary system of registration exists in the United States whereby creators can submit their work for registration with the Copyright office, however, this is optional and does not grant them any additional rights. It does, however, provide a public record of the copyright claim, and is necessary prior to any infringement claims (U.S. Copyright Office 2012).
EDIT: someone pointed out to me that under US copyright law, a rights holder can only sue and collect damages for infringement if they have registered the work for copyright. The right to sue (for damages) is important, especially as a deterrent to use of the work without permission or licensing.
One of the recommendations of the 2009 report on Orphan Works, ‘In from the Cold: An assessment of the scope of “Orphan Works” and its impact on the delivery of services to the public’ (Jisc), was the creation of an ‘official national database’. Such a database, they suggested, should be:
…on an ‘opt-in’ basis, so that copyright holders would be responsible for making sure that they put their works into the database if they want to benefit and that, otherwise, organisations could use works as they see fit.
The Gowers Review of Intellectual Property (Gov.uk 2006) made a similar recommendation for the establishment of a voluntary register of copyright, possibly in partnership with existing rights holder databases. However, there is a difference between a voluntary register and a formal register that would require rights holders to opt out if they didn’t wish their work to be used. The latter would likely be in breach of the Berne Convention, as it would represent a formality.
N.B. it could be that this is referring to a form of Extended Collective Licensing, but it’s not clear.
4) The length of Copyright is damaging our cultural heritage
As William Patry (2012) argues in How to Fix Copyright one of the main causes of Orphan Works is the length of Copyright term, as he says:
If you can’t track down who owns rights in the work, you can’t use it no matter how socially beneficial your use may be and no matter how likely it is that the copyright owner has lost all interest in exploiting the work. This was not a problem when the term of protection was short: those who argued for a longer term of protection selfishly thought only of themselves, not thinking or caring that the longer the term of protection, the greater the loss to society from the inability to create new works based on old works, and an inability to preserve old works. The long term of protection is seriously damaging to our cultural heritage.
When the first copyright act, the Statute of Anne, was enacted in 1709, it granted authors protection of 14 years (21 years for books already in print). At the end of the 14 years, if the author was still alive, the copyright was renewed for another 14 years. Over time, authors and authors’ rights societies have sought to lengthen the term of protection afforded by copyright.
Today the term of copyright stands at life + 70 (70 years after the death of the author/creator). This means the copyright in most works published today and within the last century won’t expire and enter the public domain for a considerable length of time, so for CHIs to digitise them, they will need to track down the creator or current copyright holder. The further from the date of creation, the greater the chance that the work will have become orphaned.
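As a rough illustration of the life + 70 arithmetic, here is a minimal sketch in Python. It assumes the simple default case, where the term runs to the end of the calendar year of the author’s death plus 70 years (the usual UK/EU convention); real term calculations involve many exceptions (unpublished works, Crown copyright, works of unknown authorship and so on), so this is not a general-purpose term calculator.

```python
def public_domain_year(death_year: int, term: int = 70) -> int:
    """Year a work enters the public domain under a simple
    life + `term` rule, where protection expires at the end of
    the calendar year that falls `term` years after the author's
    death. NB: ignores the many exceptions to the default term."""
    return death_year + term + 1

# An author who died in 1950: protection runs to the end of 2020,
# so the work enters the public domain on 1 January 2021.
print(public_domain_year(1950))  # → 2021
```

The `+ 1` reflects the end-of-calendar-year convention: a work by an author who died at any point in 1950 remains protected through 31 December 2020.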
Patry (2012) proceeds to claim that there is no Orphan Works problem, rather a problem with the length of Copyright term, arguing that the longer copyright term makes it harder to use works:
Short of contacting each author about each work, there is no longer a way to determine which works the author desires to protect and which works he, she, or it (in the case of companies) doesn’t wish to protect: All works must be treated as under protection, requiring permission before use.
COMMUNIA, an international association advocating for policies that expand the public domain and increase access to and reuse of culture and knowledge, have called for a reduction in the length of term of Copyright.
They argue that the excessively long duration of copyright protection, in combination with a lack of formalities, has a detrimental effect on the accessibility of our shared culture and knowledge, stating that:
There is no evidence that copyright protection that extends decades beyond the life of the author encourages the production of copyright protected works. Instead the requirement to obtain permission for works by authors that have long died are one of the biggest obstacles for providing universal access to our shared culture and knowledge.
5) It’s all about risk
Perception of risk and tolerance of risk are important factors in the selection of material for digitisation: the greater the tolerance of risk, the more material is likely to be included. The risk may be as much to do with concern for reputational damage as with concerns regarding financial penalties arising from infringement. Well-known and prestigious CHIs may be less willing to risk their reputation by digitising material without permission from rights holders. Favale, Schroff, and Bertoni (2017) point to differing approaches to risk among institutions dealing with Orphan Works, stating that:
Most risk-averse institutions do not digitize or do not publish orphan works whereas others take the risk to use the works without clearance. Others try their best to locate the rightholders of these works, to a different extent.
In his report on legal strategies for digitising Orphan Works in the United States, Hansen (2017) notes that risk and uncertainty are two of the main reasons why so few of the estimated millions of orphan works in libraries and archives are made available online, saying that:
Librarians, archivists, and others want to digitize orphan works and make them available for free online, but often don’t because of risks associated with legal action and, specifically, copyright infringement actions. If the rightsholder of a work that the digitizer thought was orphaned were to later come forward with a copyright infringement claim, the result could be devastating.
Reputational damage, arising from an act of infringement could impact on the ability to secure participation from rights holders for future digitization projects, as Deazley & Stobo (2013) noted when discussing the Codebreakers digitisation project at the Wellcome library:
Damaging your reputation as a trusted and reliable repository is indeed a serious risk, if you consider that the reputation of the Wellcome Trust and by extension the Library, was important in securing the participation of rights holders in Codebreakers in the first place.
Where risk/copyright is a factor that leads to the exclusion of material from digitisation projects, this creates the possibility of a skewed public digital record. In an age where the primary access to the public record and cultural heritage is online, the exclusion of a category or categories of material could lead to a distortion towards material that requires little or no rights clearance.
As Stobo et al (2017) note, this has several impacts. First, because digital is now the primary means of access for CHI users, they may be unaware of what is missing from digitised collections. Second, research using online materials is skewed on the basis of what is available; this is especially significant for disciplines such as the Digital Humanities that rely on large data-sets. Lastly, if material is not accessible due to copyright, this has implications for access and preservation, since preservation may only be prompted when there is a request to use the material. In other words, preservation may also be hindered by the inability to facilitate rights clearance for online access.
Rosati (2013) notes the impact of copyright:
there have been significant instances in which the copyright status of the various works that could be potentially included in digitization projects, along with difficulties arisen in clearing the relevant rights, have either impeded or significantly altered the scope of digitization projects.
7) Part of a larger problem
The orphan works issue can be seen as a symptom or manifestation of the larger issue of how cultural heritage institutions go about clearing rights for digitization projects. Orphan works may comprise only a small percentage of a collection that is being digitized, or it may not be apparent before the start of a project whether works are orphaned or not. Ringnalda (2011) says as much in claiming that the orphan works issue is:
not the main hurdle on the way to a successful Europeana. Instead, the orphan works problem is only a symptom of a much larger issue: the inability to clear copyrights for the mass digitization and online dissemination of entire library collections.
Meanwhile, Vuopala (2010) notes that the cost of clearing rights can amount to far more than the cost of digitizing the material, especially for smaller institutions. While the cost of digitizing is reduced as projects are scaled up (the more items digitized, the cheaper the cost per item), the cost of clearing rights does not scale in the same way. Furthermore, the process of rights clearance for large-scale digitization projects can be extremely time-consuming and laborious, as Ringnalda (2011) argues:
Licensing on an individual scale seems to be too much to ask for. If we have to wait until the copyright owners for all the works in library collections have been found or looked for, we will not see a digital library in the near future, with or without orphan works. Waiting for all those works to pass into the public domain would be more efficient, and probably quicker, too.
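Vuopala’s point about scale can be illustrated with a toy cost model. The figures below are invented purely for illustration, not drawn from her study: digitisation has a fixed setup cost that is spread over more items as a project grows, while rights clearance remains a roughly constant per-item cost, so clearance comes to dominate the budget at scale.

```python
def project_costs(items: int, setup: float, scan_per_item: float,
                  clearance_per_item: float) -> tuple[float, float]:
    """Return (total digitisation cost, total clearance cost).

    Digitisation = fixed setup cost + a small per-item scanning cost;
    rights clearance is modelled as a flat per-item cost with no
    economies of scale. All figures are hypothetical.
    """
    digitisation = setup + scan_per_item * items
    clearance = clearance_per_item * items
    return digitisation, clearance

# Assumed figures: €10,000 setup, €0.50 per scan, €5 per clearance.
for n in (1_000, 100_000):
    d, c = project_costs(n, 10_000, 0.50, 5.0)
    print(f"{n:>7} items: digitisation €{d / n:.2f}/item, "
          f"clearance €{c / n:.2f}/item")
```

Running this shows the per-item digitisation cost falling from €10.50 to €0.60 as the project scales, while clearance stays at €5.00 per item, which is why individual licensing becomes the bottleneck for mass digitisation.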
While it may be more efficient and quicker to wait for works to pass into the public domain, this does nothing to resolve the issue of the 20th century black hole. The question facing legislators has been how to resolve the issue.
8) The i2010 Digital Library initiative and the need for a legislative solution
In 2005 the European Commission’s Information Society and Media Directorate published its i2010 Digital Libraries initiative, stating the intention of creating a single European digital library providing online access to European cultural heritage. Launched in 2008, Europeana acts as an aggregator for Europe’s digitised cultural heritage, offering direct access to digitised books, audio and film material, photos, paintings, maps, manuscripts, newspapers and archival documents.
The EU has been considering the issue of orphan works since 2006 when it established a High Level Expert Group (HLEG) on Digital Libraries. An interim report “Report on Digital Preservation, Orphan Works and Out-of-Print Works” adopted by the group in 2007 stated that a solution to orphan works was desirable for at least literary and audio-visual works. It proposed that non-legislative solutions to orphan works should include the creation of dedicated databases concerning information on orphan works, improvements to rights holder metadata in digital material, and enhancements to contractual practices, particularly for audiovisual works. The Subgroup also recommended that Member states give appropriate support to contractual arrangements that take into account the role of cultural institutions.
In the UK, the Gowers Review of Intellectual Property (2006) considered the problem of orphan works and made various recommendations on how to resolve the issue, including proposing an ‘orphan works’ provision to the European Commission that would enable creative artists to reuse orphaned material. Anticipating future legislation, it also recommended that clear guidance be issued by the Patent Office regarding
the parameters of a ‘reasonable search’ for orphan works, in consultation with rights holders, collecting societies, rights owners and archives,when an orphan works exception comes into being.
Lamentably, many of the recommendations made by the Gowers Review had failed to be implemented by the time of Hargreaves’ Review of Intellectual Property and Growth (2011), which noted that only 25 of the 54 recommendations made by Gowers had been implemented. Hargreaves implies that the reason for this was the effectiveness of groups acting on behalf of rights holders, whose lobbying “has been more persuasive to Ministers than economic impact assessments.”
Hargreaves’ report also attempted to tackle the issue of orphan works, suggesting the creation of a Digital Copyright Exchange, which would facilitate the sale of licences by rights owners, claiming that automation would speed up the process and reduce its cost. This, he argued, would result in:
…a UK market in digital copyright which is better informed and more readily capable of resolving disputes without costly litigation.
The need for a legislative solution to the issue of orphan works is illustrated by Ringnalda(2011) who argues that given that infringement is a criminal offence in many European countries, allowing users to simply start using the works after an unsuccessful attempt to locate the rights holder would not be appropriate. He says:
Inducing public libraries to wilfully violate criminal law by having them use orphaned works without permission would therefore clearly violate public order and policy. Self-regulation cannot suffice. A legal solution is required.
While the i2010 strategy and the Europeana platform it gave birth to form the background to the adoption of the Orphan Works Directive in 2012, Janssens & Tryggvadóttir (2016) note that this particular attention to the preservation and making available of European cultural heritage was also driven by Google’s book project, arguing that this was the spur behind the European Digital Library initiative. Rosati (2013) agrees that the Google Books settlement encouraged the development of orphan works legislation and wider reforms relating to the digitisation of works, citing a 2009 speech by Viviane Reding, then Commissioner for Information Society and Media, in which it was claimed that:
lacking a reform of EU rules on orphan works, digitization of works and the development of attractive content offers (including the Google Book Library Project) would not have taken place in Europe.
A 2008 report of the Copyright Subgroup of the High Level Expert Group on Digital Libraries (HLEG) identified four conditions that should be met by potential users of orphan works:
A user wishes to make good faith use of a work with an unclear copyright status;
Due diligence has been performed in trying to identify the rightholders and/or locate them;
The user wishes to use the work in a clearly defined manner;
The user has a duty to seek authority before exploiting the orphan work…
By 2011, when the EU began once again to consider a legislative solution to enable the use of orphan works, it weighed various options falling into two categories: the first based on extended collective licensing (ECL), the second modelled on a non-exclusive licence. Ringnalda (2011) claims that, of the five options considered by the European Commission, four would:
Prescribe modalities of either an exception or a limitation, of a statutory licensing scheme
The fifth and preferred solution was one that allowed member states to devise their own legal technique to enable libraries to make their orphaned works available online, as long as prior permission was obtained for each work. Any work recognised as orphaned in one member state must be recognised as an orphan in all, in order to ensure availability throughout Europe, a principle known as mutual recognition. This is necessary so that a user need not comply with the orphan works rules of 27 separate member states.
Rosati (2013) outlines possible options considered by the Commission, including the adoption:
of a legally binding standalone instrument on the clearance and mutual recognition of orphan works, a specific exception to be added to Directive 2001/29 (the ‘InfoSoc Directive’), or guidance on cross-border mutual recognition of orphan works.
The favoured solution allowed member states to devise their own legal technique to enable libraries to make their orphaned works available online, with mutual recognition ensuring that a work recognised as orphaned in one member state, following a diligent search, is recognised as an orphan in all. As Rosati (2013) argues:
On this basis, it would have been possible to make orphan works available online for cultural and educational purposes without prior authorization, unless (or until) the relevant right holder put an end to the orphan work status.
Besides the options outlined above, legislative solutions to the orphan works problem can include reducing liability for infringement resulting from the use of such works, and limiting the damages that can be claimed by rightsholders. These approaches were included in a proposed Orphan Works Bill, which failed to make it through Congress in 2008. (At the Orphan Works Symposium in Bournemouth, Prof. Peter Jaszi regaled us with a tragic tale about US attempts to introduce orphan works legislation.)
The eventual approach taken by the European Parliament was the adoption of an Orphan Works Directive on 25th October 2012, establishing a new exception to copyright’s exclusive rights for certain orphan works. The final text incorporated minor amendments to the initial draft, including Article 3, which states that a diligent search should be carried out in good faith, Article 5(1A), which states that the search should be carried out prior to the use of the work, and provisions for the right to fair compensation for reappearing rights holders.
The purpose of the Directive was to establish a legal framework ensuring lawful cross-border online access to orphan works contained within the collections of institutions such as libraries, museums, archives, educational establishments, film heritage institutions and public service broadcasters, as part of their public interest missions. A directive was necessary to ensure cross-border access, reduce transaction costs and facilitate the identification of rightsholders; in doing so it would advance the wider aim of building the knowledge economy. (Rosati 2013)
The eventual directive also restricted the overall scope of coverage, by excluding standalone artworks.
A flawed directive?
Despite the efforts of the EU Parliament to create a directive enabling the widespread digitisation of orphan works, several flaws have limited its impact. Below I briefly outline the main issues.
A structural problem?
As outlined above, the issue of clearing orphan works is part of the wider problem of clearing rights in in-copyright works for mass digitisation. All digitisation projects requiring individual rights clearance face potentially high transaction costs, resulting both from seeking out rights holders and from negotiating usage. In many cases rightsholders may request a fee for the use of their works.
Conducting the diligent search can be time consuming and costly, not least because, as many studies have highlighted (Borghi, Favale and Erickson, 2016; Favale, Schroff and Bertoni, 2015; Favale, Homberg, Kretschmer, Mendis and Secchi, 2013; Ringnalda), the requirements are either too vague or too onerous, leaving institutions to spend excessive time trying to meet them in order to establish certainty.
Part of the problem is that, unlike under the UK Licensing Scheme, there is no independent body to certify a search as diligent; organisations are left to judge for themselves what constitutes a sufficiently diligent search. Consequently, as Stobo, Patterson and Erickson (2017) highlight, many institutions did not view a completed diligent search as sufficient to enable digitisation, relying instead on additional risk assessments. This is because, as Baker (2016) says:
Though the regulations require that a diligent search be conducted to determine the orphan status of a work, and set out relevant sources to be consulted for each category of relevant work, they fail to specify whether consulting each of the specified sources will automatically guarantee operation of the exception.
Selection of material
Cost of exhibition development (calendar time, scheduling, space)
Knowledge costs related to identifying and handling IP
PR / reputation costs arising from embarking on infringing activity
Subscription fee to database required for DS (Favale et al. 2016)
Labour cost of examining works (Dickson, 2010)
Labour cost of searching for rightsholders / DS (Dickson, 2010)
Labour cost of corresponding with rightsholders (Covey, 2005; Stobo et al., 2016)
Material cost of communicating w/ rightsholders (Covey, 2005)
Alterations to project design incurred by rightsholder requests
Fees paid to rightsholders located by DS
Fees paid to license orphan works in UK scheme or ECL
Alterations to display of work at request of rightsholder
Takedown of work on rightsholder re-emergence (Schofield & Urban, 2015)
Compensation paid on rightsholder re-emergence
TABLE 1: Characterising costs of rights clearance in three phases (Borghi, Erickson and Favale, 2016)
A further issue with the Directive, highlighted by Baker (2016), concerns provisions relating to the reappearance of rightsholders. The regulations state that a reappearing rightsholder may put an end to a work’s orphan status by providing evidence of ownership, but fail to make clear what standard of evidence is required or what form it should take. Furthermore, whilst mutual recognition of a work’s orphan status is guaranteed in the Directive, it is not made clear that there is mutual recognition of the termination of that status.
According to Ringnalda (2013) the proposed Directive intended to:
“ameliorate the burden of diligent search efforts by referring to a number of sources that should be consulted before a search is considered to have been sufficiently diligent.”
However, such an approach, he argues, fits the problem of orphan works but fails to resolve the issue of mass rights clearance.
“It may provide a legal technique that allows orphan works, but if libraries will be required to clear copyrights for each individual work we shall not have to worry about the orphan before their copyrights have already expired”
Stobo, Erickson and Patterson (2017) reported that whilst many institutions, such as the BFI and the British Library, have made efforts to engage with the Orphan Works Directive, several others, including the Wellcome Library, National Portrait Gallery, National Library of Wales, and National Library of Scotland, have not made use of either the Orphan Works Directive or the UK IPO Orphan Works Licensing Scheme, relying instead on their own internal risk management policies. (See also Stobo et al., 2013)
In my next post I will look at the proposed diligent search crowdsourcing platform (EnDOW) as a solution to the issues around diligent search. I also aim to report on the survey I am going to conduct of UK cultural heritage institutions, to establish how they are approaching the management and use of orphan works in their collections. This research will contribute to an understanding of the orphan works legislation and help to determine whether it has been effective in enabling mass digitisation as intended.
Baker, K. E. (2016) ‘It’s a hard knock life: a critique of the legislative response to the orphan works problem in the UK’, UCL Journal of Law and Jurisprudence. doi: 10.14324/111.2052-1871.061
Borghi, M., Erickson, K. and Favale, M. (2016) ‘With Enough Eyeballs All Searches Are Diligent: Mobilizing the Crowd in Copyright Clearance for Mass Digitization’, Chicago-Kent Journal of Intellectual Property, 16(1). Available at: http://scholarship.kentlaw.iit.edu/ckjip
Janssens, M. and Tryggvadóttir, R. (2016) ‘Orphan works and out-of-commerce works to make the European cultural heritage available: are we there yet?’, in Stamatoudi, I. (ed.) The Future of Copyright: A European Union and International Perspective. Wolters Kluwer, pp. 189-209.
Ringnalda, A. (2011) ‘Orphan Works, Mass Rights Clearance, and Online Libraries: The Flaws of the Draft Orphan Works Directive and Extended Collective Licensing as a Solution’, Medien und Recht International, 8, pp. 3-10. Available at: https://ssrn.com/abstract=2369974
Rosati, E. (2013) ‘The Orphan Works Directive, or throwing a stone and hiding the hand’, Journal of Intellectual Property Law & Practice, 8(4), pp. 303-310.
Stobo, V., Deazley, R. and Anderson, I. G. (2013) ‘Copyright & Risk: Scoping the Wellcome Digital Library Project’, 10 (December). doi: 10.5281/zenodo.8380
Stobo, V., Patterson, K. and Erickson, K. (2017) ‘“I should like you to see them some time”: an empirical study of copyright clearance costs in the digitisation of Edwin Morgan’s scrapbooks’, Journal of Documentation (forthcoming).
van Gompel, S. and Hugenholtz, P. B. (2010) ‘The Orphan Works Problem: The Copyright Conundrum of Digitizing Large-Scale Audiovisual Archives, and How to Solve It’, Popular Communication, 8(1), pp. 61-71. doi: 10.1080/15405700903502361