Section 230, a key law protecting free speech online since its passage in 1996, has been the subject of numerous legislative assaults over the past few years. The attacks have come from all sides. One of the latest, the SAFE Tech Act, seeks to address real problems Internet users experience, but its implementation would harm everyone on the Internet. 

The SAFE Tech Act is a shotgun approach to Section 230 reform put forth by Sens. Mark Warner, Mazie Hirono and Amy Klobuchar earlier this month. It would amend Section 230 through the ever-popular method of removing platform immunity from liability arising from various types of user speech. This would lead to more censorship as social media companies seek to minimize their own legal risk. The bill compounds the problems it causes by making it more difficult to use the remaining immunity against claims arising from other kinds of user content. 

The act would not protect users’ rights in a way that is substantially better than current law. And in some cases it would harm marginalized users, small companies, and the Internet ecosystem as a whole. Our three biggest concerns with the SAFE Tech Act are: 1) its failure to capture the reality of paid content online, 2) the danger that an affirmative defense requirement creates, and 3) the lack of guardrails around injunctive relief, which would open the door for a host of new suits that simply seek to remove certain speech.

Section 230 Benefits Everyone

Before considering what this bill would change, it’s useful to look at the benefits that Section 230 provides for all Internet users. The Internet today allows people everywhere to connect and share ideas—whether that’s for free on social media platforms and educational or cultural platforms like Wikipedia and the Internet Archive, or on paid hosting services like Squarespace or Patreon. Section 230’s legal protections benefit Internet users in two ways. 

Section 230 Protects Intermediaries That Host Speech: Section 230 enables services to host the content of other speakers—from writing, to videos, to pictures, to code that others write or upload—without those services generally having to screen or review that content before it is published. Without this partial immunity, all of the intermediaries that help the speech of millions and billions of users reach their audiences would face unworkable content moderation requirements that inevitably lead to large-scale censorship. The immunity has some important exceptions, including for violations of federal criminal law and intellectual property claims. But the legal immunity’s protections extend to services far beyond social media platforms. Thus everyone who sends an email, makes a Kickstarter, posts on Medium, shares code on GitHub, protects their site from DDoS attacks with Cloudflare, makes friends on Meetup, or posts on Reddit benefits from Section 230’s immunity for all intermediaries. 

Section 230 Protects Users Who Create Content: Section 230 directly protects Internet users who themselves act as online intermediaries from being held liable for the content created by others. So when people publish a blog and allow reader comments, for example, Section 230 protects them. This enables Internet users to create their own platforms for others’ speech, such as when an Internet user created the Shitty Media Men list that allowed others to share their own experiences involving harassment and sexual assault. 

The SAFE Tech Act Fails to Capture the Reality of Paid Content Online

In what appears to be an attempt to limit deceptive advertising, the SAFE Tech Act would amend Section 230 to remove the service’s immunity for user-generated content when that content is paid speech. According to the senators, the goal of this change is to stop Section 230 from applying to ads, “ensuring that platforms cannot continue to profit as their services are used to target vulnerable consumers with ads enabling frauds and scams.” 

But the language in the bill is much broader than just ads. The bill says Section 230’s platform immunity for user-generated content does not apply if “the provider or user has accepted payment to make the speech available or, in whole or in part, created or funded the creation of the speech.” This definition likely covers much, much more of the Internet than advertising, and it is unclear how much paid or sponsored content this language would sweep up. Such a change would undoubtedly force a massive, and dangerous, overhaul of Internet services at every level. 

Although much of the legislative conversation around Section 230 reform focuses on the dominant social media services that are generally free to users, most of the intermediaries people rely on involve some form of payment or monetization: from more obvious content that sits behind a paywall on sites like Patreon, to websites that pay for hosting from providers like GoDaddy, to the comment section of a newspaper only available to subscribers. If all companies that host speech online and whose businesses depend on user payments lose Section 230 protections, the relationship between users and many intermediaries will change significantly, in several unintended ways:

Harm to Data Privacy: Services that previously accepted payments from users may decide to switch to a different business model based on collecting and selling users’ personal information. So in seeking to regulate advertising, the SAFE Tech Act may perversely expand the private surveillance business model to other parts of the Internet, just so those services can maintain Section 230’s protections. 

Increased Censorship: Those businesses that continue to accept payments will have to make new decisions about what speech they can risk hosting and how they vet users and screen their content. They would be forced to monitor and filter all content that appears whenever money has changed hands—a dangerous and unworkable solution that would cause much important speech to disappear, and would turn everyone from web hosts to online newspapers into censors. The only other alternative—not hosting user speech—would also not be a step forward. 

As we’ve said many times, censorship has been shown to amplify existing imbalances in society. History shows that when faced with the prospect of having to defend lawsuits, online services (like offline intermediaries before them) will opt to remove and reject user speech rather than try to defend it, even when it is strongly defensible. Those removal decisions fall disproportionately on the speech of marginalized speakers. Immunity like that provided by Section 230 alleviates the prospect of having to defend such lawsuits in the first place. 

Unintended Burdens on a Complex Ecosystem: While minimizing dangerous or deceptive advertising may be a worthy goal, even a version of the SAFE Tech Act narrowed to target ads in particular would burden not only sites like Facebook that operate massive online advertising platforms; it would also burden the numerous companies that make up the complex online advertising ecosystem. Many intermediaries sit between an advertiser placing an ad and a user seeing it on a website. It is unclear which of these companies would lose Section 230 immunity under the SAFE Tech Act; arguably it would be all of them. The bill doesn’t reflect or account for the complex ways that publishers, advertisers, and scores of middlemen actually exchange money in today’s online ad ecosystem, which often happens in a split second through Real-Time Bidding protocols. It also doesn’t account for more nuanced advertising arrangements. For example, how would an Instagram influencer—someone who is paid by a company to share information about a product—be affected by this loss of immunity? No money has changed hands with Instagram, and one can imagine influencer marketing and other more covert forms of advertising becoming the norm to protect advertisers and platforms from liability. 
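
To make that complexity concrete, here is a deliberately simplified and entirely hypothetical sketch, in Python, of the parties that can touch the payment for a single ad impression sold through Real-Time Bidding. The party names, fee percentages, and bid amounts are illustrative assumptions rather than a description of any real exchange, but the sketch shows how many distinct businesses arguably “accept payment to make the speech available” under the bill’s language.

```python
# A deliberately simplified, hypothetical model of a Real-Time Bidding (RTB)
# transaction. Party names, fee percentages, and bid amounts are illustrative
# assumptions only; real RTB flows involve more parties and more complex pricing.

from dataclasses import dataclass


@dataclass
class Bid:
    advertiser: str  # the business that ultimately pays for the ad
    dsp: str         # demand-side platform bidding on the advertiser's behalf
    price: float     # bid for this single impression, in dollars


def run_auction(bids, exchange_fee=0.10, ssp_fee=0.15):
    """Pick the winning bid and trace how the payment is split on its way
    from the advertiser down to the publisher that shows the ad."""
    winner = max(bids, key=lambda b: b.price)
    exchange_cut = winner.price * exchange_fee
    ssp_cut = (winner.price - exchange_cut) * ssp_fee
    publisher_revenue = winner.price - exchange_cut - ssp_cut

    # Every entity below touches money connected to this one ad impression,
    # which is the ambiguity the bill's payment language leaves unresolved.
    return {
        "advertiser (pays)": winner.advertiser,
        "DSP (bids for the advertiser, charges it a fee)": winner.dsp,
        "ad exchange (takes a cut)": f"${exchange_cut:.4f}",
        "SSP (takes a cut)": f"${ssp_cut:.4f}",
        "publisher (is paid to show the ad)": f"${publisher_revenue:.4f}",
    }


if __name__ == "__main__":
    # Hypothetical bids for one impression, priced in dollars.
    bids = [
        Bid("ExampleShoeCo", "DSP-A", 0.0042),
        Bid("ExampleScamCo", "DSP-B", 0.0051),
    ]
    for party, detail in run_auction(bids).items():
        print(f"{party}: {detail}")
```

Even in this toy model, at least five separate businesses handle money connected to a single impression, and the bill offers no guidance on which of them would lose Section 230 protection.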

For a change in Section 230 to work as intended and not spiral into a mass of unintended consequences, legislators need a greater understanding of the Internet’s ecosystem of paid content, and the bill’s language needs to be more specifically and narrowly tailored.

The Danger That an Affirmative Defense Requirement Creates 

The SAFE Tech Act would also alter the legal procedure around when Section 230’s immunity for user-generated content applies, in a way that would have massive practical consequences for users’ speech. Many people upset about user-generated content online bring cases against platforms, hosts, and other online intermediaries. Congressman Devin Nunes’ repeated lawsuits against Twitter for its users’ speech are a prime example of this phenomenon. 

Under current law, Section 230 operates as a procedural fast-lane for online services—and for users who publish another user’s content—to get rid of frivolous lawsuits. Platforms and users subjected to these lawsuits can move to dismiss the cases before even having to respond to the legal complaint or go through the often expensive fact-gathering portion of a case, known as discovery. Right now, if it’s clear from the face of a legal complaint that the underlying allegations are based on a third party’s content, the statute’s immunity requires that the case against the platform or user who hosted the complained-of content be dismissed. Of course, this has not stopped plaintiffs from bringing (often unmeritorious) lawsuits in the first place. But in those cases, Section 230 minimizes the work the court must go through to grant a motion to dismiss, and minimizes costs for the defendant. This protects not only platforms but users: because it is the desire to avoid litigation costs that leads intermediaries to default to censoring user speech, lowering those costs lowers the pressure to censor.

The SAFE Tech Act would subject both provider and user defendants to much more protracted and expensive litigation before a case could be dismissed. Because the bill downgrades Section 230’s immunity to an “affirmative defense … that an interactive computer service provider has a burden of proving by a preponderance of the evidence,” defendants could no longer use Section 230 to dismiss cases at the outset of a suit; instead, they would be required to prove—with evidence—that Section 230 applies. Right now, Section 230 saves companies and users significant legal costs when they are subjected to frivolous lawsuits. With this change, even if a defendant ultimately prevails against a plaintiff’s claims, they will have to defend themselves in court for longer, driving up their costs.

The increased legal costs of even meritless lawsuits will have serious consequences for users’ speech. An online service that cannot quickly get out of frivolous litigation based on user-generated content is likely to take steps to prevent such content from becoming a target of litigation in the first place, including screening users’ speech or prohibiting certain types of speech entirely. And in the event that someone upset by a user’s speech sends a legal threat to an intermediary, the service is likely to be much more willing to remove the speech—even when it knows the speech cannot be subject to legal liability—just to avoid the new, larger expense and time required to defend against the lawsuit.

As a result, the SAFE Tech Act would open the door for a host of new suits that by design are filed not to vindicate a legal wrong but simply to remove certain speech from the Internet—also known as SLAPP lawsuits. These suits would remove a much greater volume of speech that does not, in fact, violate the law. Large services may find ways to absorb these new costs. But for small intermediaries and growing platforms that may be competing with those large companies, a single costly lawsuit, even if the defendant small company eventually prevails, may be the difference between success and failure. And that is not to mention the many small businesses that use social media to market their company or service and to respond to (and moderate) comments on their pages or sites, and that would likely be in danger of losing immunity from liability under this change. 

No Guardrails Around Injunctive Relief Would Open the Door to Dangerous Takedowns

The SAFE Tech Act modifies Section 230’s immunity in another significant way: it would permit aggrieved individuals to seek non-monetary relief from platforms hosting content that has harmed them. Under the bill, Section 230 would not apply when a plaintiff seeks injunctive relief to require an online service to remove or restrict user-generated content that is “likely to cause irreparable harm.” 

This extremely broad change may be designed to address a legitimate concern about Section 230: some people who are harmed online simply want the speech taken down rather than monetary compensation. But while giving certain Internet users an effective remedy that they currently lack under Section 230, the SAFE Tech Act’s injunctive relief carveout fails to account for how the provision will be misused to suppress lawful speech.

The SAFE Tech Act’s language appears to permit enforcement of all types of injunctive relief at any stage in a case. Litigants often seek emergency and temporary injunctive relief at an extremely early stage of a case, and judges frequently grant it without giving the speaker or platform an opportunity to respond. Courts already issue these kinds of takedown orders against online platforms, and they are prior restraints in violation of the First Amendment. If Section 230 does not bar these types of preliminary takedown orders, plaintiffs are likely to misuse the legal system to force lawful content offline without any final adjudication of whether the user-generated content is actually illegal.

The injunctive relief carveout could also be abused in another kind of proceeding, the default judgment, to remove speech without any judicial determination that the content is illegal. A default judgment is entered when the defendant does not fight the case, allowing the plaintiff to win without any examination of the underlying merits. In many cases, defendants avoid litigation simply because they don’t have the time or money for it. 

Because of their one-sided nature, default judgments are subject to great fraud and abuse. Others have documented the growing phenomenon of fraudulent default judgments, typically involving defamation claims, in which a meritless lawsuit is crafted for the specific purpose of obtaining a default judgment and avoiding any consideration of its merits. If the SAFE Tech Act were to become law, fraudulent lawsuits like these would be incentivized and would become more common, because Section 230 would no longer bar their use to legally compel intermediaries to remove lawful speech.

A recent Section 230 case, Hassell v. Bird, illustrates how a broad injunctive relief carveout that reaches default judgments would incentivize censorship of protected user speech. In Hassell, a lawyer sued a user of Yelp (Bird) who gave her law office a bad review, claiming defamation. The court never ruled on whether the speech was defamatory, but because the reviewer did not defend the lawsuit, the trial judge entered a default judgment against the reviewer, ordering the removal of the post. The trial court also ordered Yelp, which was not a party to the case, to remove the post, but the California Supreme Court ultimately held that Section 230 prevented a court from ordering Yelp to do so. 

Despite the potential for litigants to abuse the SAFE Tech Act’s injunctive relief carveout, the bill contains no guardrails for online intermediaries hosting legitimate speech targeted for removal. As it stands, the injunctive relief exception to Section 230 poses a real danger to legitimate speech. 

In Conclusion: For Safer Tech, Look Beyond Section 230

This only scratches the surface of the SAFE Tech Act. But the bill’s shotgun approach to amending Section 230, and the broadness of its language, make it impossible to support as it stands. 

If legislators take issue with deceptive advertisers, they should use existing laws to protect users from them. Instead of making sweeping changes to Section 230, they should update antitrust law to stop the flood of mergers and acquisitions that have made competition in Big Tech an illusion, creating many of the problems we see in the first place. If they want to make Big Tech more responsive to the concerns of consumers, they should pass a strong consumer data privacy law with a robust private right of action.

If they disagree with the way that large companies like Facebook benefit from Section 230, they should carefully consider that changes to Section 230 will mostly burden smaller platforms and entrench the large companies that can absorb or adapt to the new legal landscape (large companies continue to support amendments to Section 230, even as they simultaneously push back against substantive changes that actually seek to protect users and would therefore harm their bottom line). Addressing Big Tech’s surveillance-based business models can’t, and shouldn’t, be done through amendments to Section 230—but that doesn’t mean it shouldn’t be done at all. 

It’s absolutely a problem that just a few tech companies wield such immense control over what speakers and messages are allowed online. And it’s a problem that those same companies fail to enforce their own policies consistently or offer users meaningful opportunities to appeal bad moderation decisions. But this bill would not create a fairer system.
