When the EU started planning its new Copyright Directive (the "Copyright in the Digital Single Market Directive"), a group of powerful entertainment industry lobbyists pushed a terrible idea: a mandate that all online platforms would have to create crowdsourced databases of "copyrighted materials" and then block users from posting anything that matched the contents of those databases.

At the time, we, along with academics and technologists, explained why this would undermine the Internet even as it proved unworkable. The filters would be incredibly expensive to create, would erroneously block whole libraries' worth of legitimate materials, would let whole libraries' worth more of infringing materials slip through, and would be incapable of sorting out "fair dealing" uses of copyrighted works from infringing ones.

The Commission nonetheless included it in their original draft. Over the next two years, the European Parliament went back and forth on whether to keep the loosely described filters, with German MEP Axel Voss finally squeezing out a narrow victory in his own committee and then in an emergency vote of the whole Parliament. Now, after a lot of politicking and lobbying, Article 13 is potentially only a few weeks away from officially becoming an EU directive, controlling the Internet access of more than 500,000,000 Europeans.

The proponents of Article 13 have a problem, though: filters don't work, they cost a lot, they underblock, they overblock, they are ripe for abuse (basically, all the objections the Commission's experts raised the first time around). So to keep Article 13 alive, they've spun, distorted and obfuscated its intention, and now they can be found in the halls of power, proclaiming to the politicians who'll get the final vote that "Article 13 does not mean copyright filters."

But it does.

Here's a list of Frequently Obfuscated Questions and our answers. We think that after you've read them, you'll agree: Article 13 is about filters, can only be about filters, and will result in filters.

  1. Article 13 is about filtering, not “just” liability

    Today, most of the world (including the EU) handles copyright infringement with some sort of takedown process. If you provide the public with a place to publish their thoughts, photos, videos, songs, code, and other copyrightable works, you don't have to review everything they post (for example, no lawyer has to watch the 300 hours of video uploaded to YouTube every minute before it goes live). Instead, you allow rightsholders to notify you when they believe their copyrights have been violated, and then you are expected to speedily remove the infringing material. If you don't, you might still not be liable for your users' infringement, but you lose access to the quick and easy ‘safe harbor’ provided by law in the event that you are named in a copyright lawsuit (and since the average internet company has a lot more money than the average internet user, chances are you will be named in that suit). What you're not expected to be is the copyright police.

    In fact, the EU has a specific Europe-wide law that stops member states from forcing Internet services to play this role: the same rule that defines the limits of their liability, the E-Commerce Directive, prohibits, in the very next article, a “general obligation to monitor.” That's to stop countries from saying, "You should know that your users are going to break some law, some time, so you should actively be checking on them all the time, and if you don't, you're an accomplice to their crimes."

    The original version of Article 13 tried to break this deal by rewriting that second part. Instead of a prohibition on monitoring, it required monitoring, in the form of a mandatory filter.

    When the European Parliament rebelled against that language, it was because millions of Europeans had warned them of the dangers of copyright filters. To bypass this outrage, Axel Voss proposed an amendment that removed the explicit mention of filters but rewrote the other part of the E-Commerce Directive instead. By claiming this “removed the filters”, he got his amendment passed, in part by winning votes from MEPs who thought they were striking down Article 13. Voss’s rewrite says that sharing sites are liable unless they take steps to stop infringing content before it goes online.

    So yes, this is about liability, but it's also about filtering. What happens if you strip liability protections from the Internet? It means that services are now legally responsible for everything on their site. Consider a photo-sharing site where millions of photos are posted every hour. There are not enough lawyers alive today (let alone copyright lawyers, let alone copyright lawyers who specialise in photography) to review all those photos before they are permitted to appear online.

    Add to that all the specialists who'd have to review every tweet, every video, every Facebook post, every blog post, every game mod and livestream. It takes a fraction of a second to take a photograph, but it might take hours or even days to ensure that everything the photo captures is either in the public domain, properly licensed, or fair dealing. Every photo represents as little as an instant's work, but making it comply with Article 13 represents as much as several weeks' work. There is no way that Article 13's purpose can be satisfied with human labour.
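    To make the scale problem concrete, here's a hedged back-of-the-envelope sketch in Python. The upload rate is YouTube's oft-cited figure of 300 hours of video per minute; the review speed and working-year figures are our own illustrative assumptions:

    ```python
    # Back-of-the-envelope: how many humans would "review before
    # publication" require? All inputs are illustrative assumptions.

    UPLOAD_HOURS_PER_MINUTE = 300   # YouTube's oft-cited upload rate
    WORK_HOURS_PER_YEAR = 1_700     # rough full-time working year
    REVIEW_SPEED = 1.0              # assume reviewing at 1x playback speed

    # Hours of video arriving per year.
    upload_hours_per_year = UPLOAD_HOURS_PER_MINUTE * 60 * 24 * 365

    # Full-time staff needed just to watch it all, before any legal
    # analysis of licences, public domain status, or fair dealing.
    reviewers_needed = upload_hours_per_year / (WORK_HOURS_PER_YEAR * REVIEW_SPEED)

    print(f"{upload_hours_per_year:,.0f} upload-hours per year")
    print(f"{reviewers_needed:,.0f} full-time reviewers, minimum")
    # -> 157,680,000 upload-hours/year and roughly 92,753 reviewers:
    #    one platform, video only, and that's just watching the footage,
    #    not clearing the rights to anything that appears in it.
    ```

    Even under these generous assumptions, a single video platform would need a six-figure review staff just to keep pace; multiply that across every photo, tweet, and blog post, and the impossibility is plain.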

    It's strictly true that Axel Voss’s version of Article 13 doesn't mandate filters -- but it does create a liability system that can only be satisfied with filters.

    But there’s more: Voss’s stripping of liability protections has Big Tech companies like YouTube scared, because if the filters aren’t perfect, they will be potentially liable for any infringement that gets past them, and given their billions, that means anyone and everyone might want to get a piece of them. So now YouTube has started lobbying for the original text, copyright filters and all. That text is still on the table, because the trilogue uses both Voss’s text (liability to get filters) and the member states’ proposal (all filters, all the time) as the basis for negotiation.

  1. Most online platforms cannot have lawyers review all the content they make available

    The only online services that can have lawyers review their content are services for delivering relatively small libraries of entertainment content, not the general-purpose speech platforms that make the Internet unique. The Internet isn't primarily used for entertainment (though if you're in the entertainment industry, it might seem that way): it is a digital nervous system that stitches together the whole world of 21st Century human endeavor. As the UK Champion for Digital Inclusion discovered when she commissioned a study of the impact of Internet access on personal life, people use the Internet to do everything, and people with Internet access experience positive changes across their lives: in education, political and civic engagement, health, connections with family, employment, etc.

    The job we ask, say, iTunes and Netflix to do is much smaller than the one we ask general-purpose online platforms to do. Users of online platforms do sometimes post and seek out entertainment experiences on them, but as a subset of doing everything else: falling in love, getting and keeping a job, attaining an education, treating chronic illnesses, staying in touch with their families, and more. iTunes and Netflix can pay lawyers to check all the entertainment products they make available because that catalogue is a fraction of a slice of a crumb of all the material that passes through the online platforms. That system would collapse the instant you tried to scale it up to manage all the things that the world's Internet users say to each other in public.

  1. It’s impractical for users to indemnify the platforms

    Some Article 13 proponents say that online companies could substitute click-through agreements for filters, getting users to pay them back for any damages the platform has to pay out in lawsuits. They're wrong. Here's why.

    Imagine that every time you sent a tweet, you had to click a box that said, "I promise that this doesn't infringe copyright and I will pay Twitter back if they get sued for this." First of all, this assumes a legal regime that lets ordinary Internet users take on serious liability in a click-through agreement, which would be very dangerous given that people do not have enough hours in the day to read all of the supposed ‘agreements’ we are subjected to by our technology.

    Some of us might take these agreements seriously and double- and triple-check everything we posted to Twitter, but millions more wouldn't, and they would generate billions of tweets, and every one of those tweets would represent a potential lawsuit.

    For Twitter to survive those lawsuits, it would have to ensure that it knew the true identity of every Twitter user (and how to reach that person) so that it could sue them to recover the copyright damages they'd agreed to pay. Twitter would then have to sue those users to get its money back. Assuming that the user had enough money to cover Twitter's legal fees and the damages it had already paid out, Twitter might be made whole... eventually. But for this to work, Twitter would have to hire every contract lawyer alive today to chase its users and collect from them. This is no more sustainable than hiring every copyright lawyer alive today to check every tweet before it is published.

  1. Small tech companies would be harmed even more than large ones

    It's true that the Directive exempts "Microenterprises and small-sized enterprises" from Article 13, but that doesn't mean that they're safe. The instant a company crosses the threshold from "small" to "not-small" (which is still a lot smaller than Google or Facebook), it has to implement Article 13's filters. That's a multi-hundred-million-dollar tax on growth, all but ensuring that the small Made-in-the-EU competitors to American Big Tech firms will never grow to challenge them. Plus, those exceptions are controversial in the trilogue, and may disappear after yet more rightsholder lobbying.

  1. Existing filter technologies are a disaster for speech and innovation

    ContentID is YouTube's proprietary copyright filter. It works by allowing a small, trusted cadre of rightsholders to claim works as their own copyright, and it limits users' ability to post those works according to the rightsholders' wishes, which are more restrictive than what the law’s user protections would allow. ContentID then compares the soundtrack (but not the video component) of every user upload against that database to see whether they match.

    Everyone hates ContentID. Universal and the other big rightsholders complain loudly and frequently that ContentID is too easy for infringers to bypass. YouTube users point out that ContentID blocks all kinds of legitimate material, including silence, birdsong, and music uploaded by the actual artist for distribution on YouTube. In many cases, this isn’t a ‘mistake’ at all: Google has agreed to let the big rightsholders block or monetize videos that do not infringe any copyright, but instead make a fair use of copyrighted material.

    ContentID does a small job, poorly: filtering the soundtracks of videos to check for matches with a database populated by a small, trusted group. No one (who understands technology) seriously believes that it will scale up to blocking everything that anyone claims as a copyrighted work (without having to show any proof of that claim or even identify themselves!), including videos, stills, text, and more.
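    For readers who want a feel for what "filtering against a claimed-works database" means in practice, here's a minimal, hypothetical sketch. It is not Google's actual ContentID code: real systems use fuzzy acoustic fingerprints rather than exact hashes, and the chunking, threshold, and function names below are invented purely for illustration:

    ```python
    # Toy sketch of database-driven upload filtering (NOT ContentID
    # itself; real filters use robust acoustic fingerprints, not hashes).
    import hashlib

    def fingerprint(audio_chunks: list[bytes]) -> set[str]:
        """Reduce fixed-size chunks of audio to a set of hash 'fingerprints'."""
        return {hashlib.sha256(chunk).hexdigest() for chunk in audio_chunks}

    # The claims database, populated by a small trusted group of
    # rightsholders. Claims are taken on faith; ownership is never proven.
    claims: dict[str, set[str]] = {}

    def register_claim(rightsholder: str, audio_chunks: list[bytes]) -> None:
        claims[rightsholder] = fingerprint(audio_chunks)

    def check_upload(audio_chunks: list[bytes], threshold: float = 0.3) -> str:
        """Block an upload if enough of it matches any claimed work.

        Note what's absent: no fair dealing analysis, no human judgment,
        and no penalty for bogus claims, just a similarity score.
        """
        upload = fingerprint(audio_chunks)
        for holder, claimed in claims.items():
            overlap = len(upload & claimed) / max(len(upload), 1)
            if overlap >= threshold:
                return f"BLOCKED: matched claim by {holder}"
        return "ALLOWED"

    # A rightsholder claims a song; an upload sharing one chunk is blocked.
    register_claim("BigLabel", [b"intro", b"chorus", b"outro"])
    print(check_upload([b"chorus", b"my own narration", b"birdsong"]))
    ```

    Even this toy shows the dynamic Article 13 magnifies: the upload above is blocked because one chunk matched, and nothing in the pipeline ever asks whether the use was lawful.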

  1. Online platforms aren’t in the entertainment business

    The online companies most impacted by Article 13 are platforms for general-purpose communications in every realm of human endeavor, and if we try to regulate them like a cable operator or a music store, that's what they will become.

  1. The Directive does not adequately protect fair dealing and due process

    Some drafts of the Directive do say that EU nations should have "effective and expeditious complaints and redress mechanisms that are available to users" for "unjustified removals of their content. Any complaint filed under such mechanisms shall be processed without undue delay and be subject to human review. Right holders shall reasonably justify their decisions to avoid arbitrary dismissal of complaints."

    What's more, "Member States shall also ensure that users have access to an independent body for the resolution of disputes as well as to a court or another relevant judicial authority to assert the use of an exception or limitation to copyright rules."

    On their face, these provisions look like very good news! But again, it's hard (impossible) to see how they could work at Internet scale. One of EFF’s clients had to spend ten years in court when a major record label insisted (after human review, albeit a cursory one) that the few seconds' worth of tinny background music in a video of her toddler dancing in her kitchen infringed copyright. But with Article 13's filters, there are no humans in the loop: the filters will result in millions of takedowns, and each one of these will have to receive an "expeditious" review. Once again, we're back to hiring all the lawyers now alive (or possibly all the lawyers that have ever lived and ever will live) to check the judgments of an unaccountable black box descended from a system that thinks birdsong and silence are copyright infringements.

    It's pretty clear the Directive's authors are not thinking this stuff through. For example, some proposals include privacy rules: "the cooperation shall not lead to any identification of individual users nor the processing of their personal data." Which is great, but how are you supposed to prove that you created the copyrighted work you just posted without disclosing your identity? This could not be more nonsensical if it said, "All tables should weigh at least five tonnes and also be easy to lift with one hand."

  1. The speech of ordinary Internet users matters

    Eventually, arguments about Article 13 end up here: "Article 13 means filters, sure. Yeah, I guess the checks and balances won't scale. OK, I guess filters will catch a lot of legit material. But so what? Why should I have to tolerate copyright infringement just because you can't do the impossible? Why are the world's cat videos more important than my creative labour?"

    One thing about this argument: at least it's honest. Article 13 pits the free speech rights of every Internet user against a speculative theory of income maximisation for creators and the entertainment companies they ally themselves with: that filters will create revenue for them.

    It's a pretty speculative bet. If we really want Google and the rest to send more money to creators, we should create a Directive that fixes a higher price through collective licensing.

    But let's take a moment to reflect on what "cat videos" really stands in for here. The personal conversations of 500 million Europeans and 2 billion global Internet users matter: they are the social, familial, political and educational discourse of a planet and a species. They have worth, and thankfully it's not a matter of choosing between the entertainment industry and all of that: both can peacefully co-exist. But it's not a good look for arts groups to advocate that everyone else shut up and passively consume entertainment product as a way of maximising their profits.
