There’s a lot of talk these days about “content moderation.” Policymakers, some public interest groups, and even some users are clamoring for intermediaries to do “more” to make the Internet more “civil,” though there are wildly divergent views on what that “more” should be. Others vigorously oppose such moderation, arguing that encouraging the large platforms to assert an ever-greater role as Internet speech police will cause all kinds of collateral damage, particularly to already marginalized communities.

Notably missing from most of these discussions is a sense of context. Fact is, there’s another arena where intermediaries have been policing online speech for decades: copyright. Since at least 1998, online intermediaries in the US and abroad have taken down or filtered out billions of websites, posts, and links, often based on nothing more than an allegation of infringement. Part of this is due to Section 512 of the Digital Millennium Copyright Act (DMCA), which protects service providers from monetary liability for the allegedly infringing activities of third parties if they “expeditiously” remove content that a rightsholder has identified as infringing. But the DMCA’s hair-trigger process did not satisfy many rightsholders, so large platforms, particularly Google, also adopted filtering mechanisms and other automated processes to take down content automatically, or to prevent it from being uploaded in the first place.

As the content moderation debates proceed, we at EFF are paying attention to what we've learned from two decades of practical experience with this closely analogous form of “moderation.” Here are a few lessons that should inform any discussion of private censorship, whatever form it takes.

1. Mistakes will be made—lots of them

The DMCA’s takedown system offers service providers huge incentives to take down content when they receive a notice of infringement. Given those incentives, service providers usually respond to a DMCA takedown notice by quickly removing the challenged content. Thus, by simply sending an email or filling out a web form, a copyright owner (or, for that matter, anyone who wishes to remove speech for whatever reason) can take content offline.

Many takedowns target clearly infringing content. But there is ample evidence that rightsholders and others abuse this power on a regular basis—either deliberately or because they have not bothered to learn enough about copyright law to determine whether the content they object to is actually unlawful. At EFF, we’ve been documenting improper takedowns for many years, and we highlight particularly egregious ones in our Takedown Hall of Shame.

As we have already seen, content moderation practices are also rife with errors. This is unlikely to change, in part because:

2. Robots aren’t the answer

Rightsholders and platforms looking to police infringement at scale often place their hopes in automated processes. Unfortunately, such processes regularly backfire.

For example, YouTube’s Content ID system works by having copyright holders upload reference copies of their content into a database maintained by YouTube. New uploads are compared against that database, and when the algorithm detects a match, the copyright holder can make a claim and force the video to be taken down, or simply opt to make money from ads placed on it.
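For readers who want to see the mechanics, here is a minimal sketch of that kind of reference matching, assuming a toy fingerprint() helper, chunk size, match threshold, and policy labels of our own invention; it is not YouTube’s actual Content ID code, which relies on far more sophisticated audio and video fingerprinting.

```python
# A minimal, illustrative sketch of reference matching. The fingerprint()
# helper, chunk size, threshold, and policies here are made-up assumptions,
# not YouTube's actual Content ID implementation.

def fingerprint(media_bytes: bytes) -> set:
    # Stand-in for a real audio/video fingerprinting algorithm:
    # split the file into small chunks and hash each one.
    chunk = 8
    return {hash(media_bytes[i:i + chunk])
            for i in range(0, len(media_bytes), chunk)}

# Reference database: fingerprints submitted by rightsholders, each paired
# with the policy the rightsholder chose for matching uploads.
reference_db = {
    "label_track_123": {
        "prints": fingerprint(b"...reference audio bytes..."),
        "policy": "monetize",  # or "block"
    },
}

def scan_upload(upload_bytes: bytes) -> list:
    """Compare a new upload against every reference and return any claims."""
    upload_prints = fingerprint(upload_bytes)
    claims = []
    for ref_id, ref in reference_db.items():
        overlap = len(upload_prints & ref["prints"])
        # Arbitrary threshold: claim if most of the reference appears in the upload.
        if ref["prints"] and overlap / len(ref["prints"]) > 0.5:
            claims.append({"reference": ref_id, "action": ref["policy"]})
    return claims

# An upload that reuses most of the reference triggers a "monetize" claim.
print(scan_upload(b"...reference audio bytes... plus new material"))
```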

But the system fails regularly. In 2015, for example, Sebastian Tomczak uploaded a ten-hour video of white noise. A few years later, as a result of YouTube’s Content ID system, a series of copyright claims were made against Tomczak’s video. Five different claims were filed on sound that Tomczak created himself. Although the claimants didn’t force Tomczak’s video to be taken down, they all opted to monetize it instead. In other words, ads on the ten-hour video could generate revenue for those claiming copyright on the static.

Third-party tools can be even more flawed. For example, a “content protection service” called Topple Track sent a slew of abusive takedown notices to have sites wrongly removed from Google search results. Topple Track boasted that it was “one of the leading Google Trusted Copyright Program members.” In practice, its algorithms were so out of control that it sent improper notices targeting an EFF case page, the authorized music stores of both Beyoncé and Bruno Mars, and a New Yorker article about patriotic songs. Topple Track even sent an improper notice targeting an article by a member of the European Parliament that was itself about improper automated copyright notices.

So if a platform tells you that it is developing automated processes that will target only “bad” speech, at scale, don’t believe it.

3. Platforms must invest in transparency and robust, rapid, appeals processes

With the above in mind, every proposal and process for takedown should include a corollary plan for restoration. Here, too, copyright law and practice can be instructive. The DMCA has a counter-notice provision, which allows a user who has been improperly accused of infringement to challenge the takedown; if the sender doesn’t go to court, the platform can restore the content without fear of liability. But the counter-notice process is pretty flawed: it can be intimidating and confusing, it does little good where the content in question will be stale in two weeks, and platforms are often slow to restore challenged material. An additional problem with counter-notices, particularly in the early days of the DMCA, was that users struggled to discover who was complaining and the precise nature of the complaint.

Company transparency reports have highlighted the number of requests, who is making them, and how absurd they can get. Such reports can highlight extreme instances of abuse—as in Automattic’s Hall of Shame—and/or share aggregate numbers. The former is a reminder that there is no limit to how far rightsholders will go in abusing the DMCA. The latter shows trends useful for policymaking. For example, Twitter’s latest report shows a 38 percent uptick in takedowns since the last report, and that 154,106 accounts have been affected by takedown notices. That data is valuable for evaluating the effects of the DMCA, and we need the same kind of data to see what effects “community standards” enforcement would have.

Equally important is transparency about specific takedown demands, so users who are hit with those takedowns can understand who is complaining and about what. For example, an artist might include multiple clips in a single video, believing they are protected fair uses. Knowing the nature of the complaint can help her revisit her fair use analysis and decide whether to fight back.

If platforms are going to operate as speech police based on necessarily vague “community standards,” they must ensure that users can understand what’s being taken down, and why. They should do so on a broad scale by being open about their takedown processes, and the results. And then they should put in place clear, simple procedures for users to challenge takedowns, procedures that don’t take weeks to complete.

4. Abuse should lead to real consequences

Congress knew that Section 512’s powerful incentives could result in lawful material being censored from the Internet without prior judicial scrutiny. To inhibit abuse, Congress made sure that the DMCA included a series of checks and balances, including Section 512(f), which gives users the ability to hold rightsholders accountable if they send a DMCA notice in bad faith.

In practice, however, Section 512(f) has not done nearly enough to curb abuse. Part of the problem is that the Ninth Circuit Court of Appeals has suggested that the person whose speech was taken down must prove to a jury that the sender subjectively knew its notice was baseless—a standard that will be all but impossible for most to meet, particularly if they lack the deep pockets necessary to litigate the question. As one federal judge noted, the Ninth Circuit’s “construction eviscerates § 512(f) and leaves it toothless against frivolous takedown notices.” Under that standard, a sender can escape liability even if its belief was plainly unreasonable. For example, some rightsholders unreasonably believe that virtually all uses of copyrighted works must be licensed. If they are going to wield copyright law like a sword, they should at least be required to understand the weapon.

“Voluntary” takedown systems could do better. Platforms should adopt policies to discourage users from abusing their community standards, especially where the abuse is obviously political (such as flagging a site simply because you disagree with the view expressed).

5. Speech regulators will never be satisfied with voluntary efforts

Platforms may think that if they “voluntarily” embrace the role of speech police, governments and private groups will back off and they can escape regulation. As Professor Margot Kaminski observed in connection with the last major effort to push through new copyright enforcement mechanisms, voluntary efforts never satisfy the censors:  

Over the past two decades, the United States has established one of the harshest systems of copyright enforcement in the world. Our domestic copyright law has become broader (it covers more topics), deeper (it lasts for a longer time), and more severe (the punishments for infringement have been getting worse).

… We guarantee large monetary awards against infringers, with no showing of actual harm. We effectively require websites to cooperate with rights-holders to take down material, without requiring proof that it's infringing in court. And our criminal copyright law has such a low threshold that it criminalizes the behavior of most people online, instead of targeting infringement on a true commercial scale.

In addition, as noted, the large platforms have adopted a number of mechanisms to make it easier for rightsholders to go after allegedly infringing activities. But none of these policing mechanisms, legal or voluntary, have stopped major content holders from complaining, vociferously, that they need new ways to force Silicon Valley to be the copyright police. Instead, so-called “voluntary” efforts end up serving as a basis for regulation. Witness, for example, the battle to require companies to adopt filtering technologies across the board in the EU, free speech concerns be damned.

Sadly, the same is likely to be true for content moderation. Many countries already require platforms to police certain kinds of speech. In the US, the First Amendment and the safe harbor of CDA 230 largely prevent such requirements. But recent legislation has started to chip away at Section 230, and many expect to see more efforts along those lines. As a result, today’s “best practices” may be tomorrow’s requirements.

The content moderation debates are far from over. All involved in those discussions would do well to consider what we can learn from a related set of debates about the law and policies that, as a practical matter, have been, and still are, responsible for the vast majority of online content takedowns.
