Former EFF intern Shashank Sirivolu contributed to this blog post.
Social media users who have sued companies for deleting, demonetizing, and otherwise moderating their content have advanced several arguments that such moderation violates their constitutional rights. Courts have consistently ruled against them because social media platforms themselves have the First Amendment right to moderate content. The government and the courts cannot tell them what speech they must remove or, on the flip side, what speech they must carry. And even when the government unlawfully conspires with or coerces a platform to censor a user, the user should be able to hold the platform liable for the government’s interference only in rare circumstances.
In some cases, based on the “state action” doctrine, courts can treat a platform’s action as that of the government. This may allow a user to hold the platform liable for what would otherwise be the platform’s private exercise of its own First Amendment rights. These cases are rare and narrow. “Jawboning,” in which the government seeks to influence a platform’s content moderation policies, is common. We have argued that courts should hold a jawboned social media platform liable as a state actor only if: (1) the government replaces the intermediary’s editorial policy with its own, (2) the intermediary willingly cedes its editorial implementation of that policy to the government regarding the specific user speech at issue, and (3) the censored party has no remedy against the government.
To ensure that the state action doctrine does not nullify social media platforms’ First Amendment rights, we recently filed two amicus briefs in the Ninth Circuit, in Huber v. Biden and O'Handley v. Weber. Both briefs argued that these conditions were not met and that the courts should not hold the platforms liable under a state action theory.
In Huber v. Biden, the plaintiff accused Twitter of conspiring with the White House to suspend her account for violating the company’s policy against disseminating harmful and misleading information related to COVID-19. Our brief argued that the plaintiff’s theory was flawed for several reasons. First, the government did not replace Twitter’s editorial policy with its own; at most, it advised the company of its concerns about the harms of misinformation about the virus. Second, Huber did not allege that the government ever read, much less talked to Twitter about, the tweet at issue. Finally, because Huber brought a claim against the government directly, she may have a remedy for her claim.
In O’Handley v. Weber, the plaintiff accused Twitter of conspiring with the California Secretary of State to censor his tweets and suspend his account for violating the company’s policies regarding election integrity. In direct response to concerns about election interference in the 2016 presidential election, the California Legislature had established the Office of Election Cybersecurity within the Secretary of State's office. While the Office of Election Cybersecurity notified Twitter about one of the plaintiff’s tweets that it believed contained potential misinformation, there is nothing unconstitutional about the government raising its concerns with a private actor. And even if the government did cross the line, O'Handley did not demonstrate that this single notification caused Twitter to cede its editorial decision-making to the government. Rather, Twitter may have considered the government’s view but ultimately made its own decision to suspend O’Handley. Finally, because O’Handley brought a claim against the Secretary of State directly, he may have a remedy.
While it is important that internet users have a well-defined avenue for holding social media companies liable for harmful collaborations with the government, that avenue must be narrow enough to preserve the platforms’ First Amendment rights to curate and edit the content they carry. Otherwise, users themselves will ultimately be harmed, losing access to a diverse range of forums for their speech.