From Cloudflare’s headline-making takedown of the Daily Stormer last autumn to YouTube’s summer restrictions on LGBTQ content, there's been a surge in “voluntary” platform censorship. Companies—under pressure from lawmakers, shareholders, and the public alike—have ramped up restrictions on speech, adding new rules, adjusting their still-hidden algorithms, and hiring more staff to moderate content. They have banned ads from certain sources and removed “offensive” but legal content.

These moves come in the midst of a fierce public debate about what responsibility platform companies that directly host our speech bear for taking down—or protecting—certain types of expression. And this debate is occurring at a time when only a few large companies host most of our online speech. Under the First Amendment, intermediaries generally have a right to decide what kinds of expression they will carry. But just because companies can act as judge and jury doesn’t mean they should.

To begin with, a great deal of problematic content sits in the ambiguous territory between disagreeable political speech and abuse, between fabricated propaganda and legitimate opinion, between things that are legal in some jurisdictions and not in others. Or it’s content that some users want to read and others don’t. If so many cases fall into these grey zones, our institutions need to be designed for them.

We all want an Internet where we are free to meet, create, organize, share, associate, debate and learn. We want to make our voices heard in the way that technology now makes possible. No one likes being lied to or misled, or seeing hateful messages directed at them or flooding their newsfeeds. We want our elections to be free from manipulation, and we want the speech of women and marginalized communities not to be silenced by harassment. We should all have the ability to exercise control over our online environments: to feel empowered by the tools we use, not helpless in the face of others' use.

But in moments of apparent crisis, the first impulse is always to reach for simple solutions. In particular, in response to rising concerns that we are not in control, a groundswell of support has emerged for even more censorship by private platform companies, including pushing platforms toward ever-increasing tracking and identification of speakers.

We are at a critical moment for free expression online and for the role of the Internet in the fabric of democratic societies. We need to get this right.

Platform Censorship Isn’t New, Hurts the Less Powerful, and Doesn’t Work

Widespread public interest in this topic may be new, but platform censorship isn’t. All of the major platforms set forth rules for their users. They tend to be complex, covering everything from terrorism and hate speech to copyright and impersonation. Most platforms use a version of community reporting. Violations of these rules can prompt takedowns and account suspensions or closures. And we have well over a decade of evidence about how these rules are used and misused.

The results are not pretty. We’ve seen prohibitions on hate speech used to shut down conversations among women of color about the harassment they receive online; rules against harassment employed to shut down the account of a prominent Egyptian anti-torture activist; and a ban on nudity used to censor women who share childbirth images in private groups. And we've seen false copyright and trademark allegations used to take down all kinds of lawful content, including time-sensitive political speech.

Platform censorship has also swept up images and videos that document atrocities and make us aware of the world outside our own communities. Rules against violent content have disappeared documentation of police brutality, the Syrian war, and the human rights abuses suffered by the Rohingya. A blanket ban on nudity has repeatedly been used to take down a famous Vietnam War photo.

These takedowns are sometimes intentional and sometimes mistaken, but, like Cloudflare’s now-famous decision to boot the Daily Stormer off its service, they are all made without accountability or due process. As a result, most of what we know about censorship on private platforms comes from user reports and leaks (such as the Guardian’s “Facebook Files”).

Given this history, we’re worried about how platforms are responding to new pressures. Not because there’s a slippery slope from judicious moderation to active censorship—but because we are already far down that slope. Regulation of our expression, thought, and association has already been ceded to unaccountable executives and is enforced by minimally trained, overworked staff and hidden algorithms. Doubling down on this approach will not make it better. And yet no amount of evidence has convinced the powers that be at major platforms like Facebook—or in governments around the world. Instead, many, especially in policy circles, continue to push for companies to—magically and at scale—perfectly differentiate between speech that should be protected and speech that should be erased.

If our experience has taught us anything, it’s that we have no reason to trust the powerful—inside governments, corporations, or other institutions—to draw those lines.

As people who have watched over and advocated for the voiceless for well over 25 years, we remain deeply concerned. Fighting censorship—by governments, large private corporations, or anyone else—is core to EFF’s mission, not because we enjoy defending reprehensible content, but because we know that while censorship can be and is employed against Nazis, it is more often used as a tool by the powerful, against the powerless.

First Casualty: Anonymity

In addition to the virtual certainty that private censorship will lead to takedowns of valuable speech, it is already leading to attacks on anonymous speech. Anonymity and pseudonymity have played important roles throughout history, from secret ballots in ancient Greece to 18th century English literature and early American satire. Online anonymity allows us to explore controversial ideas and connect with people around health and other sensitive concerns without exposing ourselves unnecessarily to harassment and stigma. It enables dissidents in oppressive regimes to tell their stories with less fear of retribution. Anonymity is often the greatest shield that vulnerable groups have.

Current proposals from private companies all undermine online anonymity. For example, Twitter’s recent ban on advertisements from Russia Today and Sputnik relies on the notion that the company will be better at identifying accounts controlled by Russia than Russia will be at disguising accounts to promote its content. To make the ban truly effective, Twitter may have to adopt new policies to identify and attribute anonymous accounts, undermining both speech and user privacy. Given the problems with attribution, Twitter will likely face calls to ban anyone from promoting a link to suspected Russian government content.

And what will we get in exchange for giving up our ability to speak online anonymously? Very little. Facebook for many years required individuals to use their “real” name (and continues to require them to use a variant of it), but that didn’t stop Russian agents from gaming the rules. Instead, it undermined innocent people who need anonymity—including drag performers, LGBTQ people, Native Americans, survivors of domestic and sexual violence, political dissidents, sex workers, therapists, and doctors.

Study after study has debunked the idea that forcibly identifying speakers is an effective strategy against those who spread bad information online. Counter-terrorism experts tell us that “Censorship has never been an effective method of achieving security, and shuttering websites and suppressing online content will be as unhelpful as smashing printing presses.”

We need a better way forward.

Step One: Start With the Tools We Have and Get Our Priorities Straight

Censorship is a powerful tool and easily misused. That’s why, in fighting back against hate, harassment, and fraud, censorship should be the last resort. Particularly from a legislative perspective, the first step should be to look at the tools that already exist elsewhere in the law, rather than rushing to exceptionalize the Internet. For example, in the United States, defamation law reflects centuries of balancing the right of individuals to hold others accountable for false, reputation-damaging statements against the right of the public to engage in vigorous debate. Election laws already prohibit foreign governments or their agents from purchasing campaign ads—online or offline—that directly advocate for or against a specific candidate. In addition, for sixty days prior to an election, foreign agents cannot purchase ads that even mention a candidate. Finally, the Foreign Agents Registration Act requires informational materials distributed by a foreign agent to carry a statement of attribution, and requires the agent to file copies with the U.S. Attorney General. These are all laws that could be better brought to bear, especially in the most egregious situations.

We also need to consider our priorities. Do we want to fight hate speech, or do we want to fight hate? Do we want to prevent foreign interference in our electoral processes, or do we want free and fair elections? Our answers to these questions should shape our approach, so we don’t deceive ourselves into thinking that removing anonymity in online advertising is more important to protecting democracy than, say, addressing the physical violence committed by those who spread hate, preventing voter suppression and gerrymandering, or figuring out how to build platforms that promote more informed and less polarized conversations among the humans who use them.

Step Two: Better Practices for Platforms

But if we aren’t satisfied with those options, we have others. Over the past few years, EFF—in collaboration with Onlinecensorship.org and civil society groups around the world—has developed recommendations for companies, aimed at fighting censorship and protecting speech. Many of these are contained within the Manila Principles, which provide a roadmap for companies seeking to ensure that human rights are protected on their platforms.

In 2018, we’ll be working hard to push companies toward better practices around these recommendations. Here they are, in one place.

Meaningful Transparency

Over the years, we and other organizations have pushed companies to be more transparent about the speech they take down, particularly when it’s at the behest of governments. But when it comes to decisions about acceptable speech, or about what kinds of information or ads to show us, companies are largely opaque. We believe that Facebook, Google, and others should give truly independent researchers—those with no bottom line or corporate interest—access to work with, black-box test, and audit their systems. Users should be told when bots are flooding a network with messages and, as described below, should have tools to protect themselves. Meaningful transparency also means letting users see what types of content are taken down, what’s shown in their feeds, and why. It means being straight with users about how their data is being collected and used. And it means giving users the power to set limits on how long that data can be kept and used.

Due Process

We know that companies make enforcement mistakes, so it’s shocking that most lack robust appeals processes—or any appeals process at all. Every user should have the right to due process, including the option to appeal a company's takedown decision, in every case. The Manila Principles provide a framework for this.

Empower Users With Better Platform Tools

Platforms are building tools that let users filter ads and other content, and this should continue. This approach has been criticized for furthering “information bubbles,” but those problems are less worrisome when users are informed and in charge than when companies make these decisions for them with one eye on their bottom lines. Users should be in control of their own online experience. For example, Facebook already allows users to choose what kinds of ads they want to see; a similar system should be put in place for content, along with tools that let users make those decisions on the fly rather than having to dig through a hidden interface. Use of smart filters should continue, since they help users better choose the content they want to see and filter out the content they don’t. Facebook’s machine learning models can already recognize the content of photos, so users should be able to choose a "no nudity" option for their own feeds rather than Facebook banning such images wholesale. (The company could still enable that option by default in countries where nudity is illegal.) A minimal sketch of what this could look like follows below.
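
To make that idea concrete, here is a minimal, hypothetical sketch in Python of user-controlled filtering: a classifier's labels are matched against each user's own preferences instead of triggering a platform-wide ban. The class and field names (UserPreferences, Post, hidden_labels) are our own assumptions for illustration, not any platform's actual API.

```python
# Hypothetical sketch: per-user content filtering instead of a global ban.
from dataclasses import dataclass, field


@dataclass
class UserPreferences:
    # Labels this user has chosen to hide from their own feed.
    hidden_labels: set[str] = field(default_factory=set)


@dataclass
class Post:
    text: str
    # Labels a content classifier has attached, e.g. {"nudity"}.
    labels: set[str] = field(default_factory=set)


def visible_to(user: UserPreferences, post: Post) -> bool:
    """A post is hidden only if it carries a label this user opted out of."""
    return not (post.labels & user.hidden_labels)


# One user opts out of nudity, another does not; the platform never has
# to remove the post for everyone.
alice = UserPreferences(hidden_labels={"nudity"})
bob = UserPreferences()
photo = Post(text="Childbirth photo shared in a private group", labels={"nudity"})
assert not visible_to(alice, photo)
assert visible_to(bob, photo)
```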

When it comes to political speech, there is a desperate need for more innovation. That might include user interface designs and user controls that encourage productive and informative conversations, and that label and dampen the virality of wildly fabricated material while giving readers transparency and control over that process. This is going to be a complex and important design space in the years to come, and we’ll probably have much more to say about it in future posts.

Empower Users With Third-Party Tools

Big platform companies aren’t the only places where good ideas can grow. Right now, the larger platforms limit the ability of third parties to offer alternative experiences on their services by using closed APIs, blocking scraping, and limiting interoperability. They enforce that power to limit innovation through a host of laws, including the Computer Fraud and Abuse Act (CFAA), copyright regulations, and the Digital Millennium Copyright Act (DMCA). Larger platforms like Facebook, Twitter, and YouTube should instead facilitate user empowerment by opening their APIs even to competing services, allowing scraping, and ensuring interoperability with third-party products, even up to the forking of services.

Forward Consent

Community guidelines and community policing are touted as ways to protect online civility, but they are often used to take down a wide variety of speech. The targets of reporting often have no idea which rule they have violated, because companies frequently fail to provide adequate notice. One easy way for service providers to alleviate this is to have users affirmatively accept the community guidelines point by point, and accept them again each time they change.

Judicious Filters

We worry about filtering technologies that platforms implement to automatically take down speech, because the default for online speech should always be to keep it online until a human has reviewed it. Some narrow exceptions may be appropriate, e.g., where the content is illegal in every context. But in general, platforms can and should simply use smart filters to flag potentially unlawful content for human review, and to recognize when their user flagging systems are being gamed by those seeking to get the platform to censor others. A rough sketch of this "flag, don't remove" default follows below.
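
As a rough illustration of that default, the sketch below routes high-scoring content into a human review queue while leaving it online. The threshold, queue, and function names are assumptions made for illustration, not a description of any platform's real moderation pipeline.

```python
# Hypothetical sketch: classifiers flag content for humans; they never remove it.
from collections import deque

REVIEW_THRESHOLD = 0.7   # assumed score above which a human should take a look
review_queue: deque = deque()


def handle_classifier_result(post_id: str, score: float) -> str:
    """Keep content up by default; escalate high scores to human review."""
    if score >= REVIEW_THRESHOLD:
        review_queue.append(post_id)   # a human reviewer decides later
        return "queued_for_human_review"
    return "left_online"


# The post stays online in both branches; only a reviewer, not the
# classifier, can trigger a takedown.
print(handle_classifier_result("post-123", 0.91))  # queued_for_human_review
print(handle_classifier_result("post-456", 0.20))  # left_online
```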

Platform Competition and User Choice

Ultimately, users also need to be able to leave when a platform isn’t serving them. Real data portability is key here, and it will require companies to agree on standards for how social graph data is stored and transferred; a sketch of what such a portable format might look like follows below. Fostering competition in this space could be one of the most powerful incentives for companies to protect users against bad actors on their platforms, be they fraudulent, misleading, or hateful. Pressure on companies to allow full interoperability and data portability could lead to a race to the top for social networks.
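
As a rough sketch of what portable social graph data might look like, the example below exports a user's connections to a plain JSON structure that a competing service could read back in. The field names and format version are hypothetical; a real standard would have to be agreed on by the companies and the wider community.

```python
# Hypothetical sketch: exporting and importing a social graph in a plain,
# documented format so users can take their connections elsewhere.
import json


def export_social_graph(user_id: str, followers: list[str], following: list[str]) -> str:
    """Serialize one user's graph to a portable, human-readable format."""
    portable = {
        "format_version": "1.0",   # illustrative, not an existing standard
        "user": user_id,
        "followers": sorted(followers),
        "following": sorted(following),
    }
    return json.dumps(portable, indent=2)


def import_social_graph(blob: str) -> dict:
    """A competing service could read the same structure back in."""
    return json.loads(blob)


exported = export_social_graph("alice", followers=["bob", "carol"], following=["dave"])
assert import_social_graph(exported)["following"] == ["dave"]
```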

No Shadow Regulations

Over the past decade, we have seen the emergence of a secretive web of backroom agreements between companies that seeks to control our behavior online, often driven by governments as a shortcut and a less accountable alternative to regulation. One example among many: under pressure from the UK Intellectual Property Office, search engines agreed last year to a "Voluntary Code of Practice" that requires them to take additional steps to remove links to allegedly unlawful content. At the same time, domain name registrars are under pressure to participate in copyright enforcement, including by “voluntarily” suspending domain names. Similarly, in 2016 the European Commission struck a deal with the major platforms that, while ostensibly about addressing speech that is illegal in Europe, had no place for judges or the courts, and concentrated not on the letter of the law but on the companies' terms of service.

Shadow regulation is dangerous and undemocratic; regulation should take place in the sunshine, with the participation of the various interests that will have to live with the result. To help alleviate the problem, negotiators should seek to include meaningful representation from all groups with a significant interest in the agreement; balanced and transparent deliberative processes; and mechanisms of accountability such as independent reviews, audits, and elections.

Keep Core Infrastructure Out of It

As we said last year, the problems with censorship by direct hosts of speech are tremendously magnified when core infrastructure providers are pushed to censor. The risk of powerful voices squelching the less powerful is greater, as are the risks of collateral damage. Internet speech depends on an often-fragile consensus among many systems and operators. Using that system to edit speech, based on potentially conflicting opinions about what can be spoken on the Internet, risks shattering that consensus. Takedowns by some intermediaries—such as certificate authorities or content delivery networks—are far more likely to cause collateral censorship. That’s why we’ve called these parts of the Internet free speech’s weakest links.

The firmest, most consistent defense these potential weak links can mount is simply to decline all attempts to use them as a control point. They can act to defend their role as conduits, rather than publishers. Companies that manage domain names, including GoDaddy and Google, should draw a hard line: they should not suspend or impair domain names based on the expressive content of websites or services.

Toward More Accountability

There are no perfect solutions to protecting free expression, but as this list of recommendations should suggest, there’s a lot that companies—as well as policymakers—can do to protect and empower Internet users without doubling down on the risky and too-often failing strategy of censorship.

We'll continue to refine and critique the proposals that we and others make, whether they're new laws, new technologies, or new norms. But we also want to play our part in ensuring that these debates aren't dominated by existing interests and a simple desire for rapid and irrevocable action. We'll continue to highlight the collateral damage of censorship, and especially to highlight the unheard voices who have been ignored in this debate—and who have the most to lose.

Note: Many EFF staff contributed to this post. Particular thanks to Peter Eckersley, Danny O’Brien, David Greene, and Nate Cardozo.