As we've noted before, online harassment is a pressing problem, and one that, thankfully, many people are finally working together to mitigate and resolve. Part of the long road to creating effective tools and policies to help users combat harassment is drawing attention to just how bad it can be, and using that spotlight to propose fixes that might work for everyone affected.

But not all of the solutions now being considered will work. In fact, some of them will not only fail to fix harassment, but will actually place drastic limitations on the ability of ordinary users to work together, using the Net, to build and agitate for real, collective solutions.

One proposal raised recently by Arthur Chu falls into this latter category. Chu suggests that intermediaries (anyone who runs an online service that acts as a host for users' speech) should be made jointly responsible for the content of that speech. His theory is that if you make a middleman, such as an ISP, a platform like Facebook, Twitter, or YouTube, the host of a discussion forum, or a blog author who accepts comments, legally responsible for what their users or commenters say, then they will have a strong incentive to make sure that their users don't harass others using their platform.

In the United States, the primary protection against such broad intermediary liability is Section 230 of the Communications Decency Act (CDA 230), a statute which Chu thinks should be dismantled. As courts in the United States have recognized, the immunity granted under CDA 230 actually encourages service providers and other online intermediaries to self-regulate, i.e., to police and monitor third-party content posted through their services. Prior to the enactment of CDA 230, a service provider could be held liable if it tried to self-regulate but did so imperfectly, a framework which actively discouraged intermediaries from moderating potentially objectionable content on their sites.

CDA 230 is currently one of the most valuable tools for protecting freedom of expression and innovation on the Internet. Many countries have similar protections for online intermediaries, and, through initiatives such as the Manila Principles, human rights groups around the world have campaigned in favor of such protections. Such groups include Article 19, the Association for Progressive Communications, the Committee to Protect Journalists, Change.org, and Free Press, as well as national human rights organizations from countries ranging from Egypt and Pakistan to Australia and Canada.

And there is good reason for the law to protect online intermediaries from liability based on what other people say online. The primary reason so many around the world have fought for, and continue to fight for, intermediary liability protections is straightforward. If you're speaking up for an unpopular truth against powerful interests, one of the best ways they have to silence your objections is to legally intimidate intermediaries. Intermediaries' interests differ from their users': while many hosts of Internet content make strong statements in favor of their users' ability to speak freely or feel safe online, given the choice between defending those users and fighting an expensive legal battle (even if there is a good chance they'd win), it's clear which route many will take. Some intermediaries may not have the resources to even consider going to court. In other words, getting rid of CDA 230's liability protection acts as a de facto tax on intermediaries that wish to protect the free expression of their users.

But it's worse than that. An intermediary fearful of liability based on the actions of its users may also proactively modify its site to remove the possibility of a lawsuit altogether. Intermediaries have a powerful ability to control what conversations take place on their networks. In a world where litigation is an ever-present threat, intermediaries will set up environments where contentious conversations never happen in the first place.

For those arguing against CDA 230, that's the whole point of weakening its protections. They argue that intermediaries are best placed to filter and guide online speech so that the risk of harassment is eliminated. Hang the sword of litigation over those potential gatekeepers, proponents say, and they'll quickly put in place rules and algorithms to permanently banish harassment from their networks, even before it occurs.

But victims of harassment would be very low down the list of potential litigants that intermediaries would have to listen to in a post-CDA 230 world. Attorney Ken White phrases it this way in his analysis of Chu's proposal: "Justice may not depend entirely on how much money you have, but that is probably the most powerful factor." Ahead of those victims would come the rich and influential, such as politicians and, in many cases, the harassers themselves. After all, much online harassment is intended to silence victims and intimidate them into leaving the network. What better way to do so than to threaten the host with a lawsuit if they don't throw the victim offline? The threat might be empty, but if the host isn't willing to engage in expensive litigation, one threatening letter might be sufficient. Today, CDA 230 helps ensure this can't happen.

Chu makes the argument that CDA 230 is not universal, and that other countries survive without it. This is technically true: CDA 230 is a U.S. law. But it has been a critical law for protecting online speech and innovation in the US, and most countries either have some form of intermediary liability protection or are gradually strengthening such protections through new laws or court judgments.

Notably, countries that have experimented with markedly weaker intermediary protections have not solved or mitigated the problem of online harassment. To the contrary, the evidence suggests that opening intermediaries to liability leaves harassment unaffected, while enabling others, including governments and powerful political interests, to use the law to harass and silence legitimate speakers, often those in a less powerful position. For example, India's fifteen-year experiment with broad intermediary liability, in the form of the 2000 IT Act, saw no effective limits on harassment. A 2011 study of intermediaries' responses to takedown requests under the Indian law showed that of 7 intermediaries sent flawed takedown notices, 6 over-complied with the removal request. The law's key provisions regarding liability and censorship were successfully challenged by Shreya Singhal, after two women were arrested over a post critical of the public response to the death of a local politician (one of the women had only "liked" the post).

In Thailand, intermediaries share liability with their users. Such shared liability has resulted in cases against the representatives of intermediaries, from private prosecutions against Google's entire board by expat businessmen seeking to shut down a critical blog, to the criminal prosecution of Chiranuch Premchaiporn, the manager of a non-profit newspaper whose discussion forum included user comments critical of the monarchy. Yet Thai intermediaries have not taken any novel steps to combat personal harassment, and the frequency of reported incidents of individual harassment remains no different from, and may even be higher than, that in comparable countries.

Chu does at least acknowledge the wider effect of overturning CDA 230. In online conversation after posting his op-ed, he conceded that a huge chunk of social media would disappear and challenged the idea that a world without YouTube, Twitter, Vine, and Internet comments in general would be such a great loss. To Chu, it seems that an Internet that worked the same way as a letter to the editor—where only those who reach certain standards of editorial acceptability would have a voice—would be a better Internet.

On this point we respectfully, and strongly, disagree. Those fighting harassment, both offline and on, have used the Internet incredibly effectively to speak up and organize. Raising the issue has led to an uncomfortable conversation, one that those targeted for criticism (intermediaries, the powerful, and politicians) must often wish would go away. Those fighting harassment have been threatened with lawsuits and worse to silence them. An online world without CDA 230 would still allow harassment to exist, but it would profoundly limit the ability of any of us to hear from its targets, recognize the scale of the problem, and identify real, practical solutions.

Speaking up and organizing is what gets pervasive issues like harassment dealt with in the long run. Making intermediaries responsible for policing their users will silence such efforts and lead to an Internet that is safer for powerful actors, but not an Internet that is safe for everyday users.