EFF, along with the ACLU, urged the U.S. Court of Appeals for the Second Circuit to find unconstitutional a New York statute that compels platforms to moderate online speech falling within the state’s particular definition of “hateful conduct.”
The statute requires covered social media platforms to develop a mechanism that allows users to report incidents of “hateful conduct” (as defined by the state), and to publish a policy detailing how the platform will address such incidents in direct responses provided to each individual complainant. Noncompliance is enforceable through Attorney General investigations, subpoenas, and daily fines of $1,000 per violation. The statute is part of a broader scheme by New York officials, including the Governor and the Attorney General, to unlawfully coerce online platforms into censoring speech that the state deems “hateful.”
The bill was rushed through the New York legislature in the aftermath of last year’s tragic mass shooting at a Buffalo, NY supermarket. At the same time, the state launched an investigation into social media platforms’ “civil or criminal liability for their role in promoting, facilitating, or providing a platform to plan or promote violence.” In the months that followed, state officials alleged that it was their perceived “lack of oversight, transparency, and accountability” over social media platforms’ content moderation policies that had caused such “dangerous and corrosive ideas to spread,” and held up this “hateful conduct” law as the regulatory solution to online hate speech. And, when the investigation into such platform liability concluded, Attorney General Letitia James called for platforms to be held accountable and threatened to push for measures that would ensure they take “reasonable steps to prevent unlawful violent criminal content from appearing on their platforms.”
EFF and ACLU filed a friend-of-the-court brief in support of the plaintiffs: Eugene Volokh, a First Amendment scholar who runs the legal blog Volokh Conspiracy, the video sharing site Rumble, and the social media site Local. In the brief we urged the court to affirm the trial court’s preliminary injunction of the law. As we have explained many times before, any government involvement in online intermediaries’ content moderation processes—regardless of the form or degree—raises serious First Amendment and broader human rights concerns.
Despite New York officials’ seemingly good intentions, this law has several problems.
First, the law broadly defines “hateful conduct” as the “use of a social media network to vilify, humiliate, or incite violence against a group or a class of persons,” a definition that could encompass a broad range of speech not typically considered “hate speech.”
Next, the bill unconstitutionally compels platforms’ speech by forcing them to replace their own editorial policies with the state’s. Social media platforms and other online intermediaries subject to this bill have a long-protected First Amendment right to curate the speech that others publish on their sites—regardless of whether they curate a lot or a little, and regardless of whether their editorial philosophy is readily discernible or consistently applied. Here, by requiring publishers to develop, publish, and enforce an editorial standard at all—much less one that must adopt the state’s view of “hateful conduct”—this statute unlawfully compels speech and chills platforms’ First Amendment-protected exercise of editorial freedom.
Finally, officials’ thinly veiled threats designed to pressure websites into adopting the state’s editorial position amount to unconstitutional coercion.
We agree that many internet users want the online platforms they use to moderate certain hateful speech, but those decisions must be made by the platforms themselves, not the government. Platforms’ editorial freedom is staunchly protected by the First Amendment; allowing the government to manipulate social media curation for its own purposes threatens fundamental freedoms. Therefore, to protect our online spaces, we must strictly scrutinize all government attempts to co-opt platforms’ content moderation policies—whether by preventing moderation, as in Texas and Florida, or by compelling moderation, as New York has done here.