A (Very) Narrow Path to Holding Social Media Companies Legally Liable for Collaborating with Government in Content Moderation

For the last several years we have seen numerous arguments that social media platforms are “state actors” that “must carry” all user speech. According to these arguments, platforms are legally required to publish all user speech and to treat it equally. Under U.S. law, this is almost always incorrect. The First Amendment generally binds only governments to honor free speech rights, and it protects the right of private entities like social media sites to curate the content on their sites and impose content rules on their users. 

Among the state actor theories presented is one based on collaboration with the government on content moderation. “Jawboning,” in which government authorities pressure companies to change their content moderation policies or decisions, is extremely common. At what point, if any, does a private company that acts on that pressure become a state actor?

Deleting posts or cancelling accounts because a government official or agency requested or required it—just like spying on people’s communications on behalf of the government—raises serious human rights concerns. The newly revised Santa Clara Principles, which set out standards tech platforms should meet to provide adequate transparency and accountability, specifically scrutinize “State Involvement in Content Moderation.” As set forth in the Principles: “Companies should recognise the particular risks to users’ rights that result from state involvement in content moderation processes. This includes a state’s involvement in the development and enforcement of the company’s rules and policies, either to comply with local law or serve other state interests. Special concerns are raised by demands and requests from state actors (including government bodies, regulatory authorities, law enforcement agencies and courts) for the removal of content or the suspension of accounts.”

So, it is important that there be a defined, though narrow, avenue for holding social media companies liable for certain censorial collaborations with the government. But the bar for holding platforms accountable for such conduct must be high to preserve their First Amendment rights to edit and curate their sites. 

Testing Whether a Jawboned Platform is a State Actor

We propose the following test. At a minimum: (1) the government must replace the intermediary’s editorial policy with its own, (2) the intermediary must willingly cede to the government the editorial implementation of that policy with respect to the specific user speech at issue, and (3) the censored party must lack an adequate remedy against the government. These findings are necessary, but not per se sufficient, to establish the social media service as a state actor; there may always be “some countervailing reason against attributing activity to the government.” 

In creating the test, we had two guiding principles.

First, when the government coerces or otherwise pressures private publishers to censor, the censored party’s first and favored recourse is against the government. Governmental manipulation of already fraught content moderation systems to control public dialogue and silence disfavored voices raises classic First Amendment concerns, and both platforms and users should be able to sue the government for it. In First Amendment cases, there is a low threshold for suits against government agencies and officials that coerce private censorship: the government may violate speakers’ First Amendment rights with “system[s] of informal censorship” aimed at speech intermediaries. In 2015, for example, EFF supported a lawsuit by Backpage.com after the Cook County sheriff pressured credit card processors to stop processing payments to the website. 

Second, social media companies should retain their First Amendment rights to edit and curate the user posts on their sites as long as they are the ones controlling the editorial process. So, we sought to distinguish situations in which platforms clearly abandoned their editorial role and ceded control to the government from those in which the government’s desires were influential but not determinative. 

We proposed this test in an amicus brief recently filed in the Ninth Circuit in a case in which YouTube is accused of deleting QAnon videos at the request of, and allegedly under compulsion by, individual Members of Congress. We argued in that brief that the test was not met and that YouTube could not be held liable as a state actor under the facts alleged. 

However, even when they are not legally liable, social media companies should voluntarily disclose to a user when a government has demanded or requested action on the user’s post, and whether the platform’s action was required by law. Platforms should also report all government demands for content moderation, as well as any government involvement in formulating or enforcing editorial policies or in flagging posts. Each of these recommendations is set out in the revised Santa Clara Principles.

The Santa Clara Principles also call on governments to limit their involvement in content moderation. The Principle for Governments and Other State Actors states that governments “must not exploit or manipulate companies’ content moderation systems to censor dissenters, political opponents, social movements, or any person.” The Santa Clara Principles go on to urge governments to disclose their own involvement in content moderation and to remove any obstacles, such as gag orders, that prevent companies from doing the same.

Our position that government collaboration can, in narrow circumstances, establish state action stands in contrast to the more absolute positions we have taken against other state action theories.

Although we have been sharp critics of how the large social media companies curate user speech, and of its differential impacts on those traditionally denied a voice, we are also concerned that holding social media companies to the legal standards of the First Amendment would hinder their ability to moderate content in ways that serve users well: by removing or downranking posts that, although legally protected, are harassing or abusive to other users, or simply offensive to many of the users the company seeks to reach; or by adopting policies or community standards that focus on certain subject matters or communities and exclude off-topic posts. Many social media companies also offer curation services that suggest or prioritize certain posts over others, such as Facebook’s Top Stories feed or Twitter’s Home timeline, which some users seem to like.

Plus, there are numerous practical problems. First, clear distinctions between legal and illegal speech are often elusive: law enforcement often gets them wrong, and judges and juries struggle with them. Second, treating platforms as bound by the First Amendment just doesn’t reflect reality: every social media service has an editorial policy that excludes, or at least disfavors, certain legal speech, and always has.

We filed our first amicus brief setting out this position in 2018 and wrote about it here. And we’ve been asserting that position in various U.S. legal matters ever since. The plaintiffs in that first case, and in others like it, argued incorrectly that social media companies function like public forums, places open to the public to associate and speak with one another, and thus should be treated like government-controlled public forums such as parks and sidewalks. 

Other cases, as well as the social media laws passed by Florida and Texas, took the position that social media services, at least the very large ones, are “common carriers” that must be open to all users on equal terms. Our reasoning for challenging those laws remained the same: users are best served when social media companies are shielded from governmental interference with their editorial policies and decisions.

This policy-based position was consistent with what we saw as the correct legal argument: that social media companies themselves have a First Amendment right to adopt editorial policies and to curate and edit the user speech submitted to them. Defending that right is important because it shields these services from becoming compelled mouthpieces or censors for the government: if platforms did not have their own First Amendment rights to edit and curate their sites as they see fit, governments could dictate how those sites are edited and curated to serve the government’s own wishes.

We stand by our position that social media platforms have the right to moderate content, and believe that allowing the government to dictate what speech platforms can and can’t publish is anathema to our democracy. But when censorship is a collaboration between private companies and the government, there should be a narrow, limited path to hold them accountable.