In April 2018, House Republicans held a hearing on the “Filtering Practices of Social Media Platforms” that focused on misguided claims that Internet platforms like Google, Twitter, and Facebook actively discriminate against conservative political viewpoints. Now, a year later, Senator Ted Cruz is poised to take the Senate down the same path: he’s leading a hearing this week on “Stifling Free Speech: Technological Censorship and the Public Discourse.”

While we certainly agree that online platforms’ content moderation systems remove speech, we see no evidence of systemic political bias against conservatives. In fact, the voices silenced most often belong to people who are already marginalized or less powerful.

Given the lack of evidence of intentional partisan bias, it seems likely that this hearing is intended to serve a different purpose: to build a case for conditioning platforms’ existing liability protections on “politically neutral” content moderation practices. Indeed, Senator Cruz seems to think that’s already the law. Questioning Facebook CEO Mark Zuckerberg last year, Cruz asserted that in order to enjoy important legal protections for free speech, online platforms must adhere to a standard of political neutrality in their moderation decisions. Fortunately for Internet users of all political persuasions, he’s wrong.

Section 230, the law that protects online forums from many types of liability for their users’ speech, does not go away when a platform decides to remove a piece of content, whether or not that choice is “politically neutral.” In fact, Congress specifically intended to protect platforms’ right to moderate content without fear of taking on undue liability for their users’ posts. Under the First Amendment, platforms have the right to curate the speech they host however they like, and under Section 230, they’re additionally shielded from some types of liability for their users’ activity. It’s not one or the other. It’s both.

In recent months, Sen. Cruz and a few of his colleagues have suggested that the rules should change, and that platforms should lose Section 230 protections if they aren’t politically neutral. While such proposals might seem well-intentioned, it’s easy to see how they would backfire. Faced with the impossible task of proving perfect neutrality, many platforms, especially those without the resources of Facebook or Google to defend themselves against litigation, would simply curb potentially controversial discussion altogether, or refuse to host online communities devoted to minority views at all. We have already seen this dynamic with FOSTA: by carving an exception out of Section 230’s protections, that law drove platforms to eliminate entire forums where vulnerable people could connect with each other rather than risk liability.

To be clear, Internet platforms do have a problem with over-censoring certain voices online. These choices can have a big impact on already marginalized communities in the U.S., as well as in countries without comparable free speech protections, such as Myanmar and China, where the ability to speak out against the government is often quashed. EFF and others have called on Internet companies to provide the public with real transparency about whose posts they’re taking down and why. For example, platforms should give users clear notice of what content was removed and under which rule, along with a meaningful opportunity to appeal those decisions. Users need to know why certain language is allowed in one post but the same language in another isn’t. These and other suggestions are contained in the Santa Clara Principles, a proposal endorsed by more than 75 public interest groups around the world. Adopting these Principles would make a real difference in protecting people’s right to speak online, and we hope at least some of the witnesses tomorrow will point that out.