A coalition of civil rights and public interest groups issued recommendations today on policies they believe Internet intermediaries should adopt to try to address hate online. While there’s much of value in these recommendations, EFF does not and cannot support the full document. Because we deeply respect these organizations, the work they do, and the work we often do together; and because we think the discussion over how to support online expression—including ensuring that some voices aren’t drowned out by harassment or threats—is an important one, we want to explain our position.
We agree that online speech is not always pretty—sometimes it’s extremely ugly and causes real-world harm. The effects of this kind of speech are often disproportionately felt by communities for whom the Internet has also provided invaluable tools to organize, educate, and connect. Systemic discrimination does not disappear online, and can even be amplified there. Given the paucity and inadequacy of tools for users themselves to push back, it’s no surprise that many would look to Internet intermediaries to do more.
We also see many good ideas in this document, beginning with a right of appeal. There seems to be near universal agreement that intermediaries that choose to take down “unlawful” or “illegitimate” content will inevitably make mistakes. We know that both human content moderators and machine learning algorithms are prone to error, and that even low error rates can affect large swaths of users. As such, companies must, at a minimum, make sure there’s a process for appeal that is both rapid and fair, and not only for “hateful” speech, but for all speech.
Another great idea: far more transparency. It’s very difficult for users and policymakers to comment on what intermediaries are doing if we don’t know the lay of the land. The model policy offers a pretty granular set of requirements that would provide a reasonable start. But we believe that transparency of this kind should apply to all types of speech.
Another good feature of the model policy is its provisions for evaluation and training, so we can figure out the actual effects of various content moderation approaches.
But there’s much to worry about too.
Companies Shouldn’t Be The Speech Police
Our key concern with the model policy is this: It seeks to deputize a nearly unlimited range of intermediaries—from social media platforms to payment processors to domain name registrars to chat services—to police a huge range of speech. According to these recommendations, if a company helps in any way to make online speech happen, it should monitor that speech and shut it down if it crosses a line.
This is a profoundly dangerous idea, for several reasons.
First, enlisting such a broad array of services to start actively monitoring and intervening in any speech for which they provide infrastructure represents a dramatic departure from the expectations of most users. For example, users will have to worry about satisfying not only their host’s terms and conditions but also those of every service in the chain from speaker to audience—even though the actual speaker may not even be aware of all of those services or where they draw the line between hateful and non-hateful speech. Given the potential consequences of violations, many users will simply avoid sharing controversial opinions altogether.
Second, we’ve learned from the copyright wars that many services will be hard-pressed to come up with responses that are tailored solely to objectionable content. In 2010, for example, Microsoft sent a DMCA takedown notice to Network Solutions, Cryptome’s DNS and hosting provider, complaining about Cryptome’s (lawful) posting of a global law enforcement guide. Network Solutions asked Cryptome to remove the guide. When Cryptome refused, Network Solutions pulled the plug on the entire Cryptome website—full of clearly legal content—because Network Solutions was not technically capable of targeting and removing the single document. The site was not restored until wide outcry in the blogosphere forced Microsoft to retract its takedown request. Similarly, when the Chamber of Commerce sought to silence a parody website created by activist group The Yes Men, it sent a DMCA takedown notice to Hurricane Electric, the upstream ISP of the Yes Men’s hosting service, May First/People Link. When May First/People Link resisted Hurricane Electric’s demands to remove the parody site, Hurricane Electric shut down May First/People Link’s connection entirely, temporarily taking offline hundreds of “innocent bystander” websites as collateral damage.
Third, we know that many of these service providers have only the most tangential relationship to their users; faced with a complaint, takedown will be much easier and cheaper than a nuanced analysis of a given user’s speech. As the document itself acknowledges and as the past unfortunately demonstrates, intermediaries of all stripes are not well-positioned to make good decisions about what constitutes “hateful” expression. And while the document concedes that determining what counts as hateful activity can be complicated “in a small number of cases,” the number likely won’t be small at all.
Finally, and most broadly, this document calls on companies to abandon any commitment they might have to the free and open Internet, and instead embrace a thoroughly locked-down, highly monitored web, from which a speaker can be effectively ejected at any time, without any path to address concerns prior to takedown.
To be clear, the free and open Internet has never been fully free or open—hence the impetus for this document. But, at root, the Internet still represents and embodies an extraordinary idea: that anyone with a computing device can connect with the world, anonymously or not, to tell their story, organize, educate, and learn. Moderated forums can be valuable to many people, but there must also be a place on the Internet for unmoderated communications, where content is controlled neither by the government nor by a large corporation.
What Are “Hateful Activities”?
The document defines “hateful activities” as those which incite or engage in “violence, intimidation, harassment, threats or defamation targeting an individual or group based on their actual or perceived race, color, religion, national origin, ethnicity, immigration status, gender, gender identity, sexual orientation or disability.”
We may agree that speech that does any of these things is deeply offensive. But the past proves that companies are ill-equipped to make informed decisions about what falls into these categories. Take, for example, Facebook’s decision, in the midst of the #MeToo movement’s rise, that the statement “men are trash” constitutes hateful speech. Or Twitter’s decision to use harassment provisions to shut down the verified account of a prominent Egyptian anti-torture activist. Or the content moderation decisions that have prevented women of color from sharing the harassment they receive with their friends and followers. Or the decision by Twitter to mark tweets containing the word “queer” as offensive, regardless of context. These and many other decisions show that blunt policies designed to combat “hateful” speech can have unintended consequences. Furthermore, when divorced from a legal context, terms like “harassment” and “defamation” are open to a multitude of interpretations.
If You Build It, Governments Will Come
The policy document also proposes that Internet companies “combine technology solutions and human actors” in their efforts to combat hateful activities. The document rightly points out that flagging can be co-opted for abuse, and offers helpful ideas for improvement, such as more clarity around flagging policies and decisions, regular audits to improve flagging practices, and employing content moderators with relevant social, political, and cultural knowledge of the areas in which they operate.
However, the drafters are engaging in wishful thinking when they seek to disclaim or discourage governmental uses of flagging tools. We know that state and state-sponsored actors have weaponized flagging tools to silence dissent. Furthermore, once processes and tools to silence “hateful activities” are expanded, companies can expect a flood of demands to apply them to other speech. In the U.S., the First Amendment and the safe harbor of CDA 230 largely prevent such requirements. But recent legislation has started to chip away at Section 230, and we expect to see more efforts along those lines. As a result, today’s “best practices” may be tomorrow’s requirements.
Our perspective on these issues is based on decades of painful history, particularly with social media platforms. Every major social media platform sets forth rules for its users, and violations of these rules can prompt content takedowns or account suspensions. And the rules—whether they relate to “hateful activities” or other types of expression—are often enforced against innocent actors. Moreover, because the platforms have to date refused our calls for transparency, we can’t even quantify how often they fail at enforcing their existing policies.
We’ve seen prohibitions on hate speech employed to silence individuals engaging in anti-racist speech; rules against harassment used to suspend the account of an LGBTQ activist calling out their harasser; and a ban on nudity used to censor women who share childbirth images in private groups. We’ve seen false copyright and trademark allegations used to take down all kinds of lawful content, including time-sensitive political speech. Regulations on violent content have disappeared documentation of police brutality, the Syrian war, and the human rights abuses suffered by the Rohingya. A blanket ban on nudity has repeatedly been used to take down a famous Vietnam War photo.
These recommendations and model policies are trying to articulate better content moderation practices, and we appreciate that goal. But we are also deeply skeptical that even the social media platforms can get this right, much less the broad range of other services that fall within the rubric proposed here. We have no reason to trust that they will, and every reason to expect that their efforts to do so will cause far too much collateral damage.
Given these concerns, we have serious reservations about the approach the coalition is taking in this document. But there are important ideas in it as well, notably the opportunity for users to appeal content moderation decisions and expanded transparency from corporate platforms, and we look forward to working together to push those ideas forward.