Among the dozens of bills introduced last Congress to amend a key internet law that protects online services and internet users, the Platform Accountability and Consumer Transparency Act (PACT Act) was perhaps the only serious attempt to tackle the problem of a handful of dominant online services hosting people’s expression online.

Despite the PACT Act’s good intentions, EFF could not support the original version because it created a censorship regime by conditioning the legal protections of 47 U.S.C. § 230 (“Section 230”) on a platform’s ability to remove user-generated content that others claimed was unlawful. It also placed a number of other burdensome obligations on online services.

To their credit, the PACT Act’s authors—Sens. Brian Schatz (D-HI) and John Thune (R-SD)—listened to EFF and others’ criticism of the bill and amended the text before introducing an updated version earlier this month. The updated PACT Act, however, contains the same fundamental flaws as the original: creating a legal regime that rewards platforms for over-censoring users’ speech. Because of that, EFF remains opposed to the bill.

Notwithstanding our opposition, we agree with the PACT Act’s sponsors that internet users currently suffer from the vagaries of Facebook, Google, and Twitter’s content moderation policies. Those platforms have repeatedly failed to address harmful content on their services. But forcing all services hosting user-generated content to increase their content moderation doesn’t address Facebook, Google, and Twitter’s dominance—in fact, it only helps cement their status. This is because only well-resourced platforms will be able to meet the PACT Act’s requirements, notwithstanding the bill’s attempt to treat smaller platforms differently.

The way to address Facebook, Google, and Twitter’s dominance is to enact meaningful antitrust, competition, and interoperability reforms that reduce those services’ outsized influence on internet users’ expression.

Notice and Takedown Regimes Result in Censoring Those With Less Power

As with its earlier version, the revised PACT Act’s main change to Section 230 involves conditioning the law’s protections on whether online services remove content when they receive a judicial order finding that the content is illegal. As we’ve said before, this proposal, on its face, sounds sensible. There is likely to be little value in hosting user-generated content that a court has determined is illegal.

But the new PACT Act still fails to provide sufficient safeguards to stop takedowns from being abused by parties who are trying to remove other users’ speech that they do not like.

In fairness, it appears that Sens. Schatz and Thune heard EFF’s concerns with the previous version, as the new PACT Act requires that any takedown orders be issued by courts after both sides have litigated a case. The new language also imposes additional steps for takedowns based on default judgments, a scenario in which the defendant never shows up to defend against the suit. The bill also increases the time a service would have to respond to a notice, from 24 hours to four days in the case of large platforms.

These marginal changes, however, fail to grapple with the free expression challenges of implementing a notice and takedown regime in which a platform’s legal exposure is bound up with its potential liability for a particular user’s speech.

The new PACT Act still fails to require that takedown notices be based on final court orders or adjudications that have found content to be unlawful or unprotected by the First Amendment. Courts issue preliminary orders that they sometimes later reverse. And in litigation about whether certain speech is legal, final orders issued by lower courts are often reversed on appeal. The PACT Act should have limited takedown notices to final orders for which all appeals have been exhausted. It didn’t, and is thus a recipe for taking down lots of lawful expression.

More fundamentally, however, the PACT Act’s new version glosses over the reality that linking Section 230’s protections to a service’s ability to quickly remove users’ speech leaves platforms with very little incentive to do anything other than remove user-generated content in response to a takedown notice, regardless of whether the notice is legitimate.

To put it another way: the PACT Act places all the legal risk on a service when it fails to comply with a takedown demand. If that happens, the service will lose Section 230’s protections and be treated as if it were the publisher or speaker of the content. The safest course for the intermediary will be to avoid that legal risk and always remove the user-generated content, even if it comes at the expense of censoring large volumes of legitimate expression.

The new PACT Act’s safeguards around default judgments are unlikely to prevent abusive takedowns. The bill gives an online service provider 10 days from receiving a notice to intervene in the underlying lawsuit and move to vacate the default judgment. If a court finds that the default judgment was “sought fraudulently,” the online service could seek reimbursement of its legal costs and attorney’s fees.

The provision first assumes that platforms will have the resources to intervene in federal and state courts across the country in response to suspect takedown notices. But a great number of online services hosting user-generated speech are run by individuals, nonprofits, or small businesses that can’t afford to pay lawyers to fight back against these notices.

The provision also assumes services will have the incentive to fight back on their users’ behalf. The reality is that it’s always easier to remove the speech than to fight back. Finally, the ability of a provider to recoup its legal costs depends on a court finding that an individual fraudulently sought the original default judgment, a very high burden that would require evidence and detailed findings by a judge.

Indeed, the PACT Act’s anti-abuse provision seems to be even less effective than the one contained in the Digital Millennium Copyright Act, which requires copyright holders to consider fair use before sending a takedown notice. We know well that the DMCA is a censorship regime rife with overbroad and abusive takedowns, even though the law has an anti-abuse provision. We would expect the PACT Act’s takedown regime to be similarly abused.

Who stands to lose here? Everyday internet users, whose expression is targeted by those with the resources to obtain judicial orders and send takedown notices.

We understand and appreciate the sponsors’ goal of trying to help internet users who are the victims of abuse, harassment, and other illegality online. But any attempt to protect those individuals should be calibrated to prevent new types of abuse by those who seek to remove lawful expression.

More Transparency and Accountability Is Needed, But It Shouldn’t Be Legally Mandated

EFF continues to push platforms to adopt a human rights framing for their content moderation decisions. That includes providing users with adequate notice and being transparent about their moderation decisions. It is commendable that lawmakers also want platforms to be more transparent and responsive to users, but we do not support legally mandating those practices.

Just like its earlier version, the new PACT Act compels services to disclose their content moderation rules, build a system for responding to complaints about user-generated content, and publish transparency reports about their moderation decisions. The bill would also require large platforms to operate a toll-free phone number to receive complaints about user-generated content. And as before, platforms that fail to implement these mandates would be subject to investigation and enforcement actions by the Federal Trade Commission.

As we said about the original PACT Act, mandating services to publish their policies and transparency reports on pain of enforcement by a federal law enforcement agency creates significant First Amendment concerns. It would intrude on the editorial discretion of services and compel them to speak.

The new version of the PACT Act doesn’t address those constitutional concerns. Instead, it changes how frequently a service has to publish a transparency report, from four times a year to twice a year. It also creates new exceptions for small businesses, which would not have to publish transparency reports, and for individual providers, which would be exempt from the new requirements.

As others have observed, however, the carveouts may prove illusory because the bill’s definition of a small business is vague and the cap is set far too low. And small businesses that meet the exception would still have to set up systems to respond to complaints about content on their platform, and permit appeals by users who believe the original complaint was wrong or mistaken. For a number of online services—potentially including local newspapers with comment sections—implementing these systems will be expensive and burdensome. They may simply choose not to host user-generated content at all.

Reducing Large Services’ Dominance Should Be the Focus, Rather Than Mandating That Every Service Moderate User Content Like Large Platforms

We get that, given the current dominance of a few online services hosting so much of our digital speech, communities, and other expression, it’s natural to want new laws that make those services more accountable to their users for their content moderation decisions. This is particularly true because, as we have repeatedly said, content moderation at scale is impossible to do well.

Yet if we want to break away from the concentration of Facebook, Google, and Twitter hosting so much of our speech, we should avoid passing laws that assume we will be forever beholden to these services’ moderation practices. The PACT Act unfortunately makes this assumption, and it demands that every online service adopt content moderation and transparency policies resembling the major online platforms around today.

Standardizing content moderation will only ensure that the platforms with the resources and ability to meet the PACT Act’s legal requirements survive. And our guess is that those surviving platforms will look a lot more like Facebook, Google, and Twitter than like a diverse set of services with different or innovative content moderation models that might serve internet users better.

Instead of trying to legally mandate certain content moderation practices, lawmakers must tackle Facebook, Google, and Twitter’s dominance and the resulting lack of competition head-on by updating antitrust law and embracing pro-competitive policies. EFF is ready and willing to support those legislative efforts.
