The Big Internet Companies Are Too Powerful, But Undermining Section 230 Won’t Help

The Senate Commerce Committee met this week to question the heads of Facebook, Twitter, and Google about Section 230, the most important law protecting free speech online. Section 230 reflects the common-sense principle that legal liability for unlawful online speech should rest with the speaker, not the Internet services that make online speech possible. Section 230 further protects Internet companies’ ability to make speech moderation decisions by making it clear that platforms can make those decisions without inviting liability for the mistakes they will inevitably make.

Section 230’s importance to free speech online can’t be overstated. Without Section 230, social media wouldn’t exist, at least in its current form. Neither would most forums, comment sections, or other places where people can interact and share their ideas with each other. The legal risk involved in operating such a space would be too high. Simply put, the Internet would be a less interactive, more restrictive place without Section 230.

If some critics of Section 230 get their way, the Internet will soon become a more restrictive place. Section 230 has become a lightning rod this year: it’s convenient for politicians and commentators on both sides of the aisle to blame the law—which protects a huge range of Internet services—for the decisions of a few very large companies (usually Google, Twitter, and Facebook). Republicans make questionable claims about bias against conservatives, arguing that platforms should be required to moderate less speech in order to maintain liability protections. Even President Trump has called multiple times for a repeal of Section 230, though repealing the law would certainly mean far fewer places for conservatives to share their ideas online, not more. Democrats often argue the opposite, saying that the law should be changed to require platforms to do more to patrol hate speech and disinformation.

The questions and comments in this week’s hearing followed that familiar pattern, with Republicans scolding Big Tech for “censoring” and fact-checking conservative speech—the president’s in particular—and Democrats demanding that tech companies do more to curb misleading and harmful statements on their platforms—the president’s in particular. Lost in the charade is a stark reality: undermining 230 wouldn’t necessarily improve platforms’ behavior but would bring severe consequences for speech online as a whole.

Three Internet Giants Don’t Speak for the Internet

One of the problems with the current debate over Section 230 is that it’s treated as a discussion about big tech companies. But Section 230 affects all of us. When volunteer moderators take down posts in small web discussion forums, they’re protected by Section 230. When a hobbyist blogger removes spam or other inappropriate material from a comment section, that blogger is protected by Section 230 too. And when an Internet user retweets a tweet or even forwards an email, Section 230 protects that user as well.

There were a lot of disappointing mistakes made in this hearing, but the first and worst was the choice of witnesses. Hauling the CEOs of Facebook, Google, and Twitter in front of Congress to be the sole witnesses defending a law that affects the entire Internet may be good political theatre, but it’s a terrible way to make technology policy. By definition, the three largest tech companies alone can’t provide a clear picture of what rewriting Section 230 would mean for the entire Internet.

All three of these CEOs run companies that will be able to manage nearly any level of government regulation or intervention. That’s not true of the thousands of small startups that don’t have the resources of world-spanning companies. It’s no surprise that Facebook, for instance, is now signaling an openness to more government regulation of the Internet—far from being a death blow for the company, it may well help the social media giant insulate itself from competition.

Dorsey, Zuckerberg, and Pichai don’t just fail to represent the breadth of Internet companies that would be affected by a change to Section 230; as three men in positions of enormous power, they also fail to represent the Internet users that Congress would harm. Internet censorship invariably harms the least powerful members of society first—the rural LGBTQ teenager who depends on the acceptance of Internet communities, the activist using social media to document government abuses of human rights. When online platforms clamp down on their users’ speech, it’s the marginalized voices that disappear first.

Facebook’s Call for Regulation Doesn’t Address the Big Problems

In his opening testimony, Facebook CEO Mark Zuckerberg made it clear why we can’t trust him to be a witness on Section 230, endorsing changes to Section 230 that would give Facebook an advantage over would-be competitors. Zuckerberg acknowledged the foundational role that Section 230 has played in the development of the Internet, but he also suggested that “Congress should update the law to make sure that it’s working as intended.” He went on to propose that Congress should pass laws requiring platforms to be more transparent in their moderation decisions. He also advocated legislation to “separate good actors from bad actors.” From Zuckerberg’s written testimony (PDF):

At Facebook, we don’t think tech companies should be making so many decisions about these important issues alone. I believe we need a more active role for governments and regulators, which is why in March last year I called for regulation on harmful content, privacy, elections, and data portability.

EFF recognizes some of what Zuckerberg identifies as the problems with today’s social media platforms—indeed, we have criticized Facebook for providing inadequate transparency into its own moderation processes—but that doesn’t mean Facebook should speak for the Internet. Any reform that puts more burden on platforms in order to maintain the Section 230 liability shield will mean a higher barrier for new startups to overcome before they can compete with Facebook or Google, be it in the form of moderation staff, investment in algorithmic filters, or the high-powered lawyers necessary to fight off new lawsuits. If it becomes harder for platforms to receive and maintain Section 230 protections, then Google, Facebook, and Twitter will become even more dominant over the Internet. This consolidation happened very dramatically after Congress passed SESTA/FOSTA. The law, which Facebook supported, made it much harder for online dating services to operate without massive legal risk, and multiple small dating sites recognized that reality and immediately shut down. Just a few weeks later, Facebook announced that it was entering the online dating market.

Unsurprisingly, while the witnesses paid lip service to the importance of competition among online services, none of them proposed solutions to their outsized control over online speech. None of them proposed comprehensive data privacy legislation that would let users sue the big tech companies when they violate our privacy rights. None of them proposed modernizing antitrust law to stop the familiar pattern of big tech companies simply buying their would-be competitors. Many members of Congress—Republican and Democrat—recognized that the failings they blamed on large tech companies were magnified by the lack of meaningful competition among online speech platforms, but unfortunately, very little of the hearing focused on real solutions to that lack of competition.

Give Users Real Control Over Their Online Experience

Republican Senators’ insistence on framing the hearing around “censorship of conservatives” precluded a more serious discussion of the problems with online platforms’ aggressive speech enforcement practices. As we’ve said before, online censorship magnifies existing imbalances in society: it’s not the liberal silencing the conservative; it’s the powerful silencing the powerless. Any serious discussion of the problems with platforms’ moderation practices must consider the ways in which powerful players—including state actors—take advantage of those moderation practices in order to censor their enemies.

EFF told Congress in 2018 that although it’s not lawmakers’ place to tell Internet companies what speech to keep up or take down, there are serious problems with how social media platforms enforce their policies and what voices disappear from the Internet. We said that Internet companies had a lot more work to do to ensure that their moderation decisions were fair and transparent and that when mistakes inevitably happened, users would be able to appeal moderation decisions to a real person, not an algorithm.

We also suggested that platforms ought to put more control over what speech we’re allowed to see into the hands of us, the users. Today, social media companies make thousands of decisions on our behalf—about whether we’ll see sexual speech and imagery, whether we’ll see certain types of misinformation, what types of extremist speech we’ll be exposed to. Why not expose those decisions to users and let us have some say in them? As we wrote in our letter to the House Judiciary Committee, “Facebook already allows users to choose what kinds of ads they want to see—a similar system should be put in place for content, along with tools that let users make those decisions on the fly rather than having to find a hidden interface.”

We were gratified, then, that Twitter’s Jack Dorsey identified “empowering algorithmic choice” as a high priority for the company. From Dorsey’s written testimony (PDF):

In December 2018, Twitter introduced an icon located at the top of everyone’s timeline that allows individuals using Twitter to easily switch to a reverse chronological order ranking of the Tweets from accounts or topics they follow. This improvement gives people more control over the content they see, and it also provides greater transparency into how our algorithms affect what they see. It is a good start. We believe this points to an exciting, market-driven approach where people can choose what algorithms filter their content so they can have the experience they want. [...] Enabling people to choose algorithms created by third parties to rank and filter their content is an incredibly energizing idea that is in reach.

We hope to see more of that logical approach to empowering users from a major Internet company, but we also need to hold Twitter and its peers accountable to it. As Dorsey noted in his testimony, true algorithmic choice would mean not only letting people customize how Twitter filters their news feed, but also letting third parties create alternative filters. Unfortunately, the large Internet companies have a poor track record when it comes to letting their services play well with others, employing a mix of copyright abuse, end-user license agreements, and computer crime laws to prevent third parties from creating tools that interact with their own. While we applaud Twitter’s interest in giving users more control over their timeline, we must ensure that its solutions actually promote a competitive landscape and don’t further lock users into Twitter’s own walled garden.
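To make “algorithmic choice” a little more concrete, here is a minimal, hypothetical sketch in Python of the kind of pluggable filter interface Dorsey’s testimony gestures at. The names (`Post`, `TimelineFilter`, `render_timeline`) and ranking functions are invented for illustration and don’t reflect any platform’s actual API; the point is that once the platform exposes posts and a small filter contract, the user, not the company, chooses which ranking function orders their feed, including one written by a third party.

```python
# Hypothetical sketch only: every name and field below is illustrative,
# not any platform's real API.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Post:
    author: str
    text: str
    timestamp: float          # seconds since epoch
    engagement_score: float   # whatever ranking signal the platform exposes


# A timeline filter is just a function from a list of posts to an ordered list.
TimelineFilter = Callable[[List[Post]], List[Post]]


def reverse_chronological(posts: List[Post]) -> List[Post]:
    """The 'latest posts' ranking: newest first, no algorithmic weighting."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)


def engagement_ranked(posts: List[Post]) -> List[Post]:
    """A stand-in for a platform's default engagement-driven ranking."""
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)


# Filters the user can pick from in their settings; a third party could
# register its own entry here (for example, one that downranks posts from
# accounts the user has never interacted with).
AVAILABLE_FILTERS: Dict[str, TimelineFilter] = {
    "reverse_chronological": reverse_chronological,
    "engagement_ranked": engagement_ranked,
}


def render_timeline(posts: List[Post], user_choice: str) -> List[Post]:
    """Order the timeline with whichever filter the user selected."""
    chosen = AVAILABLE_FILTERS.get(user_choice, reverse_chronological)
    return chosen(posts)
```

The value of an interface like this isn’t any particular ranking: it’s that filtering becomes a swappable function rather than a fixed internal pipeline, so outside developers can compete on how timelines are ordered.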

Internet companies’ moderation practices are a real problem, but the Senate Commerce Committee hearing ultimately failed to address the real problems with speech moderation at scale. Members of Congress acknowledged that a few flawed companies have too much control over online speech, yet they’ve consistently failed to enact structural solutions to the lack of meaningful competition in social media. Before they consider making further changes to Section 230, lawmakers must understand that the decisions they make could spell disaster, not just for Facebook and Twitter’s would-be competitors but, most importantly, for the voices they risk kicking offline.

No matter what happens in next week’s election, we are certain to see more misguided attempts to undermine Section 230 in the next Congress. We hope that Congress will take the time to listen to experts and users that don’t run behemoth Internet companies. In the meantime, please take a moment to write to your members of Congress and tell them why a strong Section 230 is important to you.

Take Action

Tell Congress to Reject the EARN IT Act
