Nearly every week brings a new effort to undercut or overhaul a key U.S. law—47 U.S.C. § 230 (“Section 230”)—that protects online services and allows Internet users to express themselves. Many of these proposals jeopardize users’ free speech and privacy, while others are thinly veiled attacks on online services that the President and other officials do not like.

These attacks on user expression and the platforms we all rely on are serious. But they do not represent serious solutions to a real problem: that a handful of large online platforms dominate users’ ability to organize, connect, and speak online.

The Platform Accountability and Consumer Transparency (PACT) Act, introduced last month by Senators Brian Schatz (D-HI) and John Thune (R-SD), is a serious effort to tackle that problem. The bill builds on good ideas, such as requiring greater transparency around platforms’ decisions to moderate their users’ content—something EFF has championed as a voluntary effort as part of the Santa Clara Principles.

The PACT Act’s implementation of these good ideas, however, is problematic. The bill’s modifications of Section 230 will lead to greater, legally required online censorship, likely harming disempowered communities and disfavored speakers. It will also imperil the existence of small platforms and new entrants trying to compete with Facebook, Twitter, and YouTube by saddling them with burdensome content moderation practices and increasing the likelihood that they will be dragged through expensive litigation. The bill also has several First Amendment problems because it compels online services to speak and interferes with their rights to decide for themselves when and how to moderate the content their users post.

The PACT Act has noble intentions, but EFF cannot support it. This post details what the PACT Act does, and the problems it will create for users online.

The PACT Act’s Amendments to Section 230 Will Lead to Greater Online Censorship

Before diving into the PACT Act’s proposed changes to Section 230, let’s review what Section 230 does: it generally protects online platforms from liability for hosting user-generated content that others claim is unlawful. For example, if Alice has a blog on WordPress, and Clyde accuses Bob of having said something terrible in the blog’s comments, Section 230(c)(1) ensures that neither Alice nor WordPress is liable for Bob’s statements about Clyde.

The PACT Act ends Section 230(c)(1)’s immunity for user-generated content if services fail to remove the material upon receiving a notice claiming that a court declared it illegal. Platforms with more than a million monthly users or more than $25 million in annual revenue must respond to these takedown requests and remove the material within 24 hours. Smaller platforms must remove the material “within a reasonable period of time based on size and capacity of provider.” If the PACT Act passes, Clyde could sue Alice over something she didn’t even say—Bob’s statement in the comments section that Clyde claims is unlawful.

At first blush, this seems uncontroversial—after all, why should services enjoy special immunity for continuing to host content that has been deemed unlawful or is otherwise unprotected by the First Amendment? The problem is that the PACT Act poorly defines what qualifies as a court order, fails to provide strong protections against abusive takedowns, and sets smaller platforms up for even greater litigation risks than their larger competitors.

EFF has seen far too often that notice-and-takedown regimes result in greater censorship of users’ speech. The reality is that people abuse these systems to remove speech they do not like, rather than content that is in fact illegal. And when platforms face liability risks for continuing to host speech that has been flagged, they are likely to act cautiously and remove the speech.

The PACT Act’s thumb on the scale in favor of removing speech means that disempowered communities will be most severely affected, as they have historically been targeted by claims that their speech is illegal or offensive. Moreover, the PACT Act fails to include any penalty for abusing its new takedown regime, a bare-minimum safeguard for users’ free expression.

Further, the PACT Act’s loose definition of what qualifies as a judicial order creates its own headaches. The First Amendment generally protects speech until it is declared unprotected at the end of a case, usually after a trial. Those decisions are regularly appealed, and higher courts often reverse them, as most famously seen in New York Times v. Sullivan. The Act contains no requirement that a court’s order be a final judgment, nor does it carve out preliminary or other non-final orders or default judgments. We know that people often use default judgments—in which one party gets a court order that is not meaningfully contested by anyone—to attempt to remove speech they do not like.

The PACT Act does not account for these concerns. Instead, it puts platforms in the position of having to decide—on risk of losing Section 230 immunity—which judicial orders are legitimate, a recipe that will lead to abuse and the removal of lawful speech.

Additionally, the PACT Act’s different deadlines for removing material based on platforms’ size will create a torrent of litigation for smaller services. For larger platforms, the PACT Act is clear: takedowns of materials claimed to be illegal must occur within 24 hours. For smaller platforms, the deadline is open-ended: a reasonable time period based on the size and capacity of the provider.

In theory, the bill’s flexible standard for smaller services is a good thing, since smaller services have fewer resources to moderate and remove their users’ content. But in practice, it will expose them to greater, more expensive litigation. This is because determining what is “reasonable” based on a provider’s specific size and capacity will require more factual investigation, including costly discovery, before a court or jury decides whether the service acted reasonably. For larger platforms, the legal question is far simpler: did they remove the material within 24 hours?

These are not abstract concerns. As one report found, litigating a case through discovery based on claims that user-generated material was illegal can cost platforms more than $500,000. Faced with those potentially crippling legal bills, smaller platforms will immediately remove content that is claimed to be illegal, resulting in greater censorship.

Requiring Content Moderation and Takedowns Will Hamper Much-Needed Competition for Online Platforms

The PACT Act’s goal of making online platforms more accountable to their users is laudable, but its implementation will likely reinforce the dominance of the largest online services.

In addition to requiring nearly all services that host user-generated content to institute a notice-and-takedown scheme, the PACT Act also requires them to create comprehensive processes for responding to complaints about offensive content and about the services’ moderation decisions. Specifically, the bill requires services to (1) create policies that detail what is permitted on their services, (2) respond to complaints about content that others object to and to user complaints about improper moderation, (3) act on complaints quickly, explain the reasons for their decisions, and permit users to appeal those decisions, and (4) publish transparency reports about their takedowns, appeals, and other moderation decisions. A failure to implement any of these policies is grounds for investigation and potential enforcement by the Federal Trade Commission.

The PACT Act further requires large platforms to respond to user complaints within 14 days, while smaller platforms have an open-ended deadline that depends on their size and capacity.

The burdens of complying with all of these requirements cannot be overlooked. Responding to every user complaint about content someone believes is offensive, or about a takedown someone believes was made in error, requires all platforms to employ content moderators and institute systems to ensure that each request is resolved in a short time period.

Unfortunately, the only platforms that could easily comply with the PACT Act’s moderation requirements are the same dominant platforms that already employ teams of content moderators. The PACT Act’s requirements would further cement the power of Facebook, YouTube, and Twitter, making it incredibly difficult for any new competitor to unseat them. The bill’s reach means that even medium-sized news websites with comment boards would likely have to comply with its moderation requirements.

What the PACT Act’s approach misses is that Section 230 already levels the playing field by giving all online services the same legal protections for user-generated content that the dominant players enjoy. So instead of piling legal and practical burdens onto the entities trying to unseat Facebook, YouTube, and Twitter, Congress should focus on using antitrust law and other mechanisms to reverse those platforms’ outsized market power.

Legally Mandating Content Moderation And Transparency Reporting Would Violate the First Amendment

The PACT Act’s aim, increasing online services’ accountability for their speech moderation decisions, is commendable. EFF has been part of a broader coalition calling for this very same type of accountability, including efforts to push services to have clear policies, to allow users to appeal moderation decisions they think are wrong, and to publish reports about those removal decisions.

But we don’t believe these principles should be legally mandated, which is precisely what the PACT Act does. As described above, the bill requires online services to publish policies describing what content is permitted, respond to complaints about content and their moderation decisions, explain their moderation decisions to affected users, and publish reports about their moderation decisions. If they don’t do all those things to the government’s satisfaction, online services could face FTC enforcement.

As legal mandates, these requirements would be a broad and unconstitutional intrusion on platforms’ First Amendment right to decide for themselves whether and how to manage their users’ content, including how to respond to complaints about violations of their policies or about mistakes they’ve made in moderating content. The First Amendment protects services’ decisions about whether to have such policies in the first place, when to change them, and whether to enforce them in specific situations. It also gives platforms the right to make inconsistent decisions.

More than 40 years ago, the Supreme Court struck down a Florida law that required newspapers to include responses from political candidates because it interfered with their First Amendment rights to make decisions about the content they published. Courts have repeatedly ruled that the same principle applies to online platforms hosting user-generated content. This includes the 9th Circuit’s recent decision in Prager U v. Google, and a May decision by the U.S. Court of Appeals for the District of Columbia Circuit in a case brought by Freedom Watch and Laura Loomer against Google. In another case, a court ruled that when online platforms "select and arrange others’ materials, and add the all-important ordering that causes some materials to be displayed first and others last, they are engaging in fully protected First Amendment expression—the presentation of an edited compilation of speech generated by other persons."

The PACT Act creates other First Amendment problems by compelling platforms to speak. For instance, the bill requires that platforms publish policies setting out what type of content is acceptable on their service. The bill also requires platforms to publish transparency reports detailing their moderation decisions. To be clear, having such policies and publishing transparency reports benefit users. But the First Amendment generally “prohibits the government from telling people what they must say.” And to the extent the PACT Act dictates that online services must speak in certain ways, it violates the Constitution.

The PACT Act’s sponsors are not wrong in wanting online services to be more accountable to their users. Nor are they incorrect in their assessment that a handful of services serve as the gatekeepers for much of our free expression online. But the PACT Act’s attempt to address those concerns creates more problems and would end up harming users’ speech. The way to address these concerns is to encourage competition and make users less reliant on a handful of services—not to institute legal requirements that further entrench the major platforms.
