This is one of a series of blog posts about President Trump's May 28 Executive Order. Links to other posts are below.

The inaptly named Executive Order on Preventing Online Censorship (EO) is a mess on many levels: it’s likely unconstitutional on several grounds, built on false premises, and bad policy to boot. We are no fans of the way dominant social media platforms moderate user content. But the EO, with its clear intent to retaliate against Twitter for marking the president’s tweets for fact-checking, demonstrates that government mandates are the wrong way to address concerns about faulty moderation practices.

The EO contains several key provisions. We will examine them in separate posts linked here:

1. The FCC rule-making provision
2. The misinterpretation of and attack on Section 230
3. Threats to pull government advertising
4. Review of unfair or deceptive practices

Although we will focus on the intended legal consequences of the EO, we must also acknowledge the danger the Executive Order poses even if it is just political theater and never has any legal effect. The mere threat of heavy-handed speech regulation can inhibit speakers who want to avoid getting into a fight with the government, and deny readers information they want to receive. The
Supreme Court has recognized that “people do not lightly disregard public officers’ thinly veiled threats” and thus even “informal contacts” by government against speakers may violate the First Amendment.

The EO’s threats to free expression and its retaliation for constitutionally protected editorial decisions by a private entity are not even thinly veiled: they should have no place in any serious discussion of concerns about the dominance of a few social media companies and how they moderate user content.

That said, we too are disturbed by the current state of content moderation on the big platforms. So, while we firmly disagree with the EO, we have been highly critical of the platforms’ failure to address some of the same issues targeted in the EO’s policy statement, specifically: first, that users deserve more transparency about how, when, and how much content is moderated; second, that decisions often appear inconsistent; and, third, that content guidelines are often vague and unhelpful. Starting long before the president got involved, we have said repeatedly that the content moderation
system is broken and called for platforms to fix it. We have documented a range of egregious content moderation decisions (see our onlinecensorship.org, Takedown Hall of Shame, and TOSsed Out projects). We have proposed a human rights framing for content moderation called the Santa Clara Principles, urged companies to adopt it, and then monitored whether they did so (see our 2018 and 2019 Who Has Your Back reports).

But we have rejected government mandates as a solution, and this EO demonstrates why it is indeed the wrong approach. In the hands of a retaliatory regime, government mandates on speech will inevitably be used to punish disfavored speakers and platforms, and for other oppressive and repressive purposes. Those decisions will disproportionately impact the marginalized. Regardless of the dismal state of content moderation, it is truly dangerous to put the government in control of online communication channels.

The EO requires the Attorney General to “develop a proposal for Federal legislation that would be useful to promote the policy objectives of this order.” This is a dangerous idea generally because it represents another unwarranted government intrusion into private companies’ decisions to moderate and curate user content. But it’s a particularly bad idea in light of the current Attorney General’s very public animus toward tech companies and their efforts to provide Internet users with secure ways to communicate, namely through end-to-end encryption. Attorney General William Barr already has plenty of motivation to break encryption, including through the proposed EARN IT Act; the EO’s mandate gives Barr more ammunition to target Internet users’ security and privacy in the name of promoting some undefined “neutrality.”

Some have proposed that the EO is simply an attempt to bring some due process and transparency to content moderation. However, our analysis of the various parts of the EO illuminates why that’s not true.

What about Competition?

For all its bluster, the EO doesn’t address one of the biggest underlying threats to online speech and user rights: the concentration of power in a few social media companies.

If the president and other social media critics really want to ensure that all voices have a chance to be heard, if they are really concerned that a few large platforms have too much practical power to police speech, the answer is not to create a new centralized speech bureaucracy, or promote the creation of fifty separate ones in the states. A better and actually constitutional option is to reduce the power of the social media giants and increase the power of users by promoting real competition in the social media space. This means eliminating the
legal barriers to the development of tools that will let users control their own Internet experience. Instead of enshrining Google, Facebook, Amazon, Apple, Twitter, and Microsoft as the Internet’s permanent overlords, and then striving to make them as benign as possible, we can fix the Internet by making Big Tech less central to its future.

The Santa Clara Principles provide a framework for making content moderation at scale more respectful of human rights. Promoting competition offers a way to make the problems caused by the big tech companies’ content moderation less consequential. Neither of these seems likely to be accomplished by the EO. But the chilling effect the EO will likely have on hosts of speech, and, consequently, on the public—which relies on the Internet to speak out and be heard—is very real.