The Keys to a Healthy Internet Are User Empowerment and Competition, Not Censorship

The House Energy and Commerce Committee held a legislative hearing today on what to do with one of the most important Internet laws, Section 230. Members of Congress and the testifying panelists discussed many of the critical issues facing online activity, such as how Internet companies moderate their users’ speech, how Internet companies and law enforcement agencies address online criminal activity, and how the law affects competition.

EFF Legal Director Corynne McSherry testified at the hearing, offering a strong defense of the law that’s helped create the Internet we all rely on today. In her opening statement, McSherry urged Congress not to take Section 230’s role in building the modern Internet lightly:

We all want an Internet where we are free to meet, create, organize, share, debate, and learn. We want to have control over our online experience and to feel empowered by the tools we use. We want our elections free from manipulation and for women and marginalized communities to be able to speak openly about their experiences.

Chipping away at the legal foundations of the Internet in order to pressure platforms to play the role of Internet police is not the way to accomplish those goals. 


Recognizing the gravity of the challenges presented, Ranking Member Cathy McMorris Rodgers (R-WA) aptly stated: “I want to be very clear: I’m not for gutting Section 230. It’s essential for consumers and entities in the Internet ecosystem. Misguided and hasty attempts to amend or even repeal Section 230 for bias or other reasons could have unintended consequences for free speech and the ability for small businesses to provide new and innovative services.” 

We agree. Any change to Section 230 risks upsetting the balance Congress struck decades ago that created the Internet as it exists today. It protects users and Internet companies big and small, and leaves open the door to future innovation. As Congress continues to debate Section 230, here are some suggestions and concerns we have for lawmakers willing to grapple with the complexities and get this right.

Facing Illegal Activity Online: Focus on the Perpetrators

Much of the hearing focused on illegal speech and activity online. Representatives and panelists mentioned examples like illegal drug sales, wildlife sales, and fraud. But there’s an important distinction to make between holding Internet intermediaries, such as social media companies and classified ads sites, liable for what their users say or do online, and holding users themselves accountable for their behavior.

Section 230 has always had a federal criminal law carve-out. This means that truly culpable online platforms can already be prosecuted in federal court, alongside their users, for illegal speech and activity. For example, a federal judge in the Silk Road case correctly ruled that Section 230 did not provide immunity against federal prosecution to the operator of a website that hosted other people’s ads for illegal drugs.

But EFF does not believe prosecuting Internet intermediaries is the best answer to the problems we find online. Rather, both federal and state government entities should allocate sufficient resources to target the direct perpetrators of illegal online behavior; that is, the users themselves who take advantage of open platforms to violate the law. Section 230 is no impediment to going after these bad actors. McSherry pointed this out in her written testimony: “In the infamous Grindr case... the abuser was arrested two years ago under criminal charges of stalking, criminal impersonation, making a false police report, and disobeying a court order.”

Weakening Section 230 protections in order to expand the liability of online platforms for what their users say or do would incentivize companies to over-censor user speech in an effort to limit their legal exposure. Not only would this be harmful for legitimate user speech, it would also detract from law enforcement efforts to target the direct perpetrators of illegal behavior. As McSherry noted regarding the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA):

At this committee’s hearing on November 30, 2017, Tennessee Bureau of Investigation special agent Russ Winkler explained that online platforms were the most important tool in his arsenal for catching sex traffickers. One year later, there is anecdotal evidence that FOSTA has made it harder for law enforcement to find traffickers. Indeed, several law enforcement agencies report that without these platforms, their work finding and arresting traffickers has hit a wall.

Speech Moderation: User Choice and Empowerment

In her testimony, McSherry stressed that the Internet is a better place for online community when numerous platforms are available with a multitude of moderation philosophies. Section 230 has contributed to this environment by giving platforms the freedom to moderate speech the way they see fit.

The freedom that Section 230 afforded to Internet startups to choose their own moderation strategies has led to a multiplicity of options for users—some more restrictive and sanitized, some more laissez-faire. That mix of moderation philosophies contributes to a healthy environment for free expression and association online.

Reddit’s Steve Huffman echoed McSherry’s defense of Section 230 (PDF), noting that its protections have enabled the company to improve on its moderation practices over the years. He explained that the company’s speech moderation philosophy is one that prioritizes users making decisions about how they’d like to govern themselves:

The way Reddit handles content moderation today is unique in the industry. We use a governance model akin to our own democracy—where everyone follows a set of rules, has the ability to vote and self-organize, and ultimately shares some responsibility for how the platform works.

In an environment where platforms have their own approaches to content moderation, users have the ultimate power to decide which ones to use. McSherry noted in her testimony that while Grindr was not held liable for the actions of one user, that doesn’t mean that Grindr didn’t suffer. Grindr lost users, who moved to other dating platforms. One reason it’s essential that Congress protect Section 230 is to preserve this multitude of platform options.


Later in the hearing, Rep. Darren Soto (D-FL) asked each of the panelists who should be “the cop on the beat” in patrolling online speech. McSherry reiterated that users themselves should be empowered to decide what material they see online: “A cardinal principle for us at EFF is that at the end of the day, users should be able to control their Internet experience, and we need to have many more tools to make that possible.”

If some critics of Section 230 get their way, users won’t have that power. Prof. Danielle Citron offered a proposal (PDF) that Congress implement a “duty of care” regime, in which platforms would be required to show that they’re meeting a legal “reasonableness” standard in their moderation practices in order to keep their Section 230 protection. She proposes that courts look at what platforms are doing generally to moderate content and whether their policies are reasonable, rather than what a company did with respect to a particular piece of user content.

But inviting courts to determine what moderation practices are best would effectively do away with Section 230’s protections, disempowering users in the process. In McSherry’s words, “As a litigator, [a reasonableness standard] is terrifying. That means a lot of litigation risk, as courts try to figure out what counts as reasonable.”

Robots Won’t Fix It

There was plenty of agreement that current moderation is flawed, but much disagreement about why. Subject-matter experts on the panel frequently described areas of moderation outside their own purview as working perfectly well, and questioned why those techniques could not be applied to other areas.


In one disorienting moment, Gretchen Peters of the Alliance to Counter Crime Online asked the congressional committee when they’d last seen a “dick pic” on Facebook, and took their silence as an indication that Facebook had solved the dick pic problem. She then suggested Facebook could move on to scanning for other criminality. Professor Hany Farid, an expert in robust, at-scale hashing of child exploitation imagery, wondered why the tech companies could not create digital fingerprinting solutions for opioid sales.

Many cited Big Tech’s work to automatically remove what it believes to be copyright-infringing material as a potential model for other areas—perhaps unaware that the continuing failure of copyright bots is one of the few areas where EFF and the entertainment industry agree (though we think the bots take down too much entirely lawful material, and Hollywood thinks they’re not draconian enough).

The truth is that the deeper you look at current moderation—and listen carefully to those directly silenced by algorithmic solutions—the more you understand that robots won’t fix it. Robots are still terrible at understanding context, which has resulted in everything from Tumblr flagging pictures of bowls of fruit as “adult content” to YouTube removing possible evidence of war crimes because it categorized the videos as “terrorist content.” Representative Lisa Blunt Rochester (D-DE) pointed out the consequences of having algorithms police speech: “Groups already facing prejudice and discrimination will be further marginalized and censored.” A lot of the demand for Big Tech to do more moderation is predicated on the idea that they’re good at it, with their magical tech tools. As our own testimony and long experience point out, they’re really not, with bots or without.
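The context problem is easy to see even in miniature. The following is a toy sketch (not any platform’s real system, and far simpler than production classifiers): a filter that matches on content alone cannot distinguish documentation of a crime from promotion of it, which is exactly how evidence of war crimes ends up removed as “terrorist content.”

```python
# Toy illustration of a context-blind content filter.
# BLOCKLIST and the example captions are invented for this sketch;
# real systems use ML classifiers and perceptual hashes, but they
# share the same weakness: they see content, not context.

BLOCKLIST = {"execution", "attack", "weapon"}

def naive_filter(text: str) -> bool:
    """Flag text if any blocklisted word appears, ignoring context."""
    words = set(text.lower().split())
    return bool(BLOCKLIST & words)

# A caption documenting a possible war crime gets flagged...
evidence = "footage documenting an execution by militia forces"
print(naive_filter(evidence))  # True: the evidence is censored

# ...while nothing in the filter understands *why* the word appears.
benign = "still life painting of a bowl of fruit"
print(naive_filter(benign))  # False here, though real classifiers
                             # have misfired on exactly such images
```

The fix isn’t a bigger blocklist: the missing ingredient is human judgment about intent and context, which is precisely what at-scale automation lacks.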

Could they do better? Perhaps, but as Reddit’s Huffman noted, doing so means that the tech companies need to be able to innovate without having those attempts result in a hail of lawsuits. That is, he said, “exactly the sort of ability that 230 gives us.”

Reforming 230 with Big Tech as the Focus Would Harm Small Internet Companies

Critics of 230 often fail to acknowledge that many of the solutions they seek are not within reach of startups and smaller companies. Techniques like preemptive blocking of content, persistent policing of user posts, and mechanisms that analyze speech in real time to see what needs to be censored are extremely expensive.

That means that controlling what users do, at scale, will only be doable by Big Tech. It’s not only cost-prohibitive; getting it wrong also carries a high cost in liability. For example, Google’s Content ID is often held up in the copyright context as a model of enforcement, but it required a $100 million investment by Google to develop and deploy—and it still does a bad job.

Google’s Katherine Oyama testified that Google already employs around 10,000 people who work on content moderation—a bar that no startup could meet—but even that appears insufficient to some critics. By comparison, Wikipedia, one of the largest repositories of information in human history, employs only about 350 staff for its entire operation and relies heavily on volunteers.

A set of rules that would require a Google-sized company to expend even more resources means that only the most well-funded firms could maintain global platforms. A minimally-staffed nonprofit like Wikipedia could not continue to operate as it does today. The Internet would become more concentrated, and further removed from the promise of a network that empowers everyone.

As Congress continues to examine the problems facing the Internet today, we hope lawmakers remember the role that Section 230 plays in defending the Internet’s status as a place for free speech and community online. We fear that undermining Section 230 would insulate today’s largest tech companies from future competition. Most importantly, we hope lawmakers listen to the voices of the people they risk pushing offline.

Read McSherry’s full written testimony.
