In the nearly 25 years that EFF has been defending digital rights, our belief in the promise of the Internet has only grown stronger. The digital world frees users from many limits on communication and creativity that exist in the offline world. But it is also an environment that reflects the problems in wider society and grants them new dimensions. Harassment is one of those problems.

Online harassment is a digital rights issue. At its worst, it causes real and lasting harms to its targets, a fact that must be central to any discussion of harassment. Unfortunately, it's not easy to craft laws or policies that will address those harms without inviting government or corporate censorship and invasions of privacy—including the privacy and free speech of targets of harassment. But, as we discuss below, there are ways to craft effective responses, rooted in the core ideals upon which the Internet was built, to protect the targets of harassment and their rights.

This post explains our thinking about combating online harassment, and what we hope EFF's role can be in that effort given the scope of our work. It isn't our last word, nor should it be; this is not a simple issue. Instead, we want to outline some of the things that we consider when looking at the problem and sketch out some elements for effective responses to it.

Harassment Is a Serious Problem

Let’s be explicit about what we mean by “harassment.” We’re not talking about a few snarky tweets or the give-and-take of robust online debate, even when that debate includes harsh language or obscenities. Ugly or offensive speech doesn’t always rise to the level of harassment.

The kind of harassment we are worried about happens when Internet users attract the attention of the wrong group or individual, and find themselves enduring extreme levels of targeted hostility, often accompanied by the exposure of their private lives. Some victims are bombarded by violent, personalized imagery and numerous disturbing comments. The addresses of their homes and workplaces may be publicized, along with threats of violence. And such online harassment can escalate to offline stalking, physical assault, and more.

This kind of harassment can be profoundly damaging to the free speech and privacy rights of the people targeted. It is frequently used to intimidate those with less political or social power, and it affects some groups disproportionately, including women and racial and religious minorities.1 Because the burden falls so unevenly, those who are not targeted often fail to appreciate how deeply it affects the lives of others.

“Don’t feed the trolls”—while it may work in some situations—is an insufficient response to this level of abuse, especially when a situation escalates from a few comments into an ongoing campaign. Some people have even been chased offline completely by the cumulative effects of unrelenting personal attacks or by serious threats to their safety, or the safety of their loved ones. When that happens, their voices have effectively been silenced.

The sad irony is that online harassers misuse the fundamental strength of the Internet as a powerful communication medium to magnify and coordinate their actions, effectively silencing and intimidating others.

But that same strength offers one way for Internet communities to fight back: when we see harassing behavior, we can speak up to challenge it. In fact, one of the most effective methods to address online harassment is counter-speech. Counter-speech happens when supporters of targeted groups or individuals deploy that same communicative power of the Net to call out, condemn, and organize against behavior that silences others.  That’s why, contrary to some mistaken assumptions, the fight for free expression and combating harassment online are not opposites, but complements.

Just because the law sometimes allows a person to be a jerk (or worse) doesn’t mean that others in the community are required to be silent or to just stand by and let people be harassed. We can and should stand up against harassment. Doing so is not censorship—it’s being part of the fight for an inclusive and speech-supporting Internet.

The Pitfalls of Legal Regulation of Online Harassment

Many people have looked to the law to address online harassment, and EFF is regularly called upon to evaluate proposed laws or regulations. Given our years of experience with poorly written laws that fail to reflect the realities of the digital environment, we are very cautious in approving such measures.

Some forms of abusive speech are already covered by existing law. In the United States, for example, threats of violence intended to put the target in a state of fear are not protected speech and are illegal under federal and state laws. Anti-harassment laws also exist in many jurisdictions. People can sue civilly over false statements of fact that injure a person’s reputation. And new laws aimed at Internet behavior have already been passed in the United States. For example, 37 states have online harassment laws, and 41 have online stalking laws.

But offline and online, we see the same problem: laws aimed at combating harassment are often not enforced at all, or are enforced unfairly and ineffectively. All around the world, law enforcement officers frequently fail to take complaints about online threats seriously, or simply do not understand their gravity. As Danielle Citron notes, the police tell complainants to simply “go home and turn off the computer,” or dismiss the conduct as just “boys being boys.”

The failure of current enforcement fuels new calls for stronger regulations, including laws that broadly target speech. But laws that don’t carefully distinguish between harassment and protected speech can end up snaring protected speech while failing to limit the behavior of harassers.

Powerful people, corporations, governments, and online mobs are all adept at finding the best tools for censorship and using them to stifle criticism and dissent. They are also perfectly willing to use tools developed for one purpose for their own ends. (For example, we have long experience with copyright and trademark law being used to stifle criticism and parody. In fact, we have a whole “hall of shame” devoted to those misuses.)

Regulation of online anonymity is also very likely to cause collateral damage. It’s tempting to assume that eliminating anonymity would reduce harassment. Our experience suggests otherwise: we see a great need for strong protections for online anonymity, so that those being harassed, as well as those facing domestic violence, human rights abuses, and other consequences for speaking out, can do so with less fear of exposure. That is why, when concerned advocates call for anti-harassment legislation that would require sites to log all visitor IP addresses, our first thought is to worry that such laws will be misused against the victims of harassment, not its perpetrators. With strategies like these, we risk not only failing to solve the problem at hand, but hurting some of the very people we hoped to help along the way.

That’s one reason we fight for caution and clarity in all legal areas that can potentially impact protected speech. When it comes to online harassment, which is so often aimed at silencing voices of people without power, this concern is especially important.

We oppose laws that attempt to address online harassment but do so carelessly, with little regard for the risks to legitimate speech. For example, the New York Court of Appeals recently struck down a cyberbullying law that made it a crime to “harass, annoy, threaten...or otherwise inflict significant emotional harm on another person,” because it reached “far beyond the cyberbullying of children.” After all, protected speech could very well be “annoying,” but that is hardly enough reason to outlaw it.

But in addition to policing bad proposals, we do think about better possible legal solutions. Existing laws regarding harassment could certainly be better enforced, a concern that extends beyond the online world (as noted above). We hope that the courts will eventually integrate into their decisions and precedents the reality of the new ways individuals can be targeted, and that law enforcement agencies will systematically educate officers about online harassment.

After years of experience, though, we have grown pessimistic: laws drafted to deal with apparently new “cyber” threats tend to be little more than grandstanding that allows politicians to say they did something. That’s why these laws are so often the worst of both worlds: largely or entirely ineffective at addressing harassment, yet so poorly drafted that they threaten legally protected behavior and give powerful interests a tool to prosecute on a whim or punish viewpoints they disfavor.

As Glenn Greenwald noted in a recent article about how Arabs and Muslims, in particular, are targeted for criminal investigation for their online speech, “Like the law generally, criminalizing online speech is reserved only for certain kinds of people (those with the least power) and certain kinds of views (the most marginalized and oppositional).” While this may not always be true, it’s true often enough that we approach legal solutions with extreme caution.

Companies Are Bad at Regulating Speech

We also understand why people look to the popular social media platforms themselves for solutions, since so much harassment occurs there. Again, our experience leaves us skeptical about company-managed, centralized “solutions.”

Currently, most online hosting providers—including platforms like Facebook and Twitter—ban harassment in their terms of service, but do not proactively police user behavior. Instead, they rely on community policing, or flagging, to locate and remove content or user accounts that violate their terms of service. Reports are sent to moderation teams that are often poorly supported, remotely managed, and paid considerably less than most other tech workers. Decisions about content are made quickly, and erroneous takedowns of flagged content or accounts are fairly common.

In the US, companies generally have the legal right to choose to host, or not host, online speech at their discretion. We have spent considerable time looking at how they make those choices and have found that their practices are uneven at best, and biased at worst. Political and religious speech is regularly censored, as is nudity. In Vietnam, Facebook’s reporting mechanisms have been used to silence dissidents. In Egypt, the company’s “real name” policy, ostensibly aimed at protecting users from harassment, once took down the very page that helped spark the 2011 uprising. And in the United States, the policy has led to the suspension of the accounts of LGBTQ activists. Examples like these abound, making us skeptical that a heavier-handed approach by companies would improve the current state of abuse reporting mechanisms.

Trolls and online mobs are, almost by definition, skilled at directing concentrated fire against others. That means the people facing harassment can end up being the ones ejected from online discussion, as the weight of the mob makes them look like the radicals outside the mainstream. To find examples of this, one need only look to the governments—such as China, Israel, and Bahrain—that employ paid commenters to sway online opinion in their favor. And of course, there are plenty of trolls willing to do it for free.

We also worry that the business models of the current batch of centralized, monolithic, and multi-national (but US-based) social networks potentially work against both the preservation of free speech and the safety and privacy of those targeted by harassment. Companies’ primary focus is on revenue and legal safety. Many would be happy to sacrifice free expression if it became too expensive.

Some have suggested revising Section 230 of the Communications Decency Act (CDA 230) to get companies more interested in protecting the targets of harassment. CDA 230 establishes that intermediaries like ISPs, Web forums, and social media sites are protected against a range of laws that might otherwise be used to hold them legally responsible for what others say and do. These proposals would make intermediaries at least partially liable for the actions of their users. Such a change would be a serious threat to companies’ financial bottom line.

Unfortunately, instead of strengthening companies’ commitment to combating harassment, that financial risk would more likely devastate online communities. Faced with such liability, many companies would choose to expel all forms of controversial speech from their platforms, including legitimate anger and political organizing. If, for example, any mention of Israel and Palestine sparked a barrage of harassment followed by legal claims, how long would it be before service providers banned mention of that political situation altogether? When a magnet for harassment like Gamergate plays out on a social platform, will that platform's operators seek to uncover who the wrongdoers are—or will they simply prohibit everyone from speaking out and documenting their experience?

Starting Points for Good Solutions

We think that the best solutions to harassment do not lie with creating new laws, or expecting corporations to police in the best interests of the harassed. Instead, we think the best course of action will be rooted in the core ideals underpinning the Internet: decentralization, creativity, community, and user empowerment.

Law Enforcement and Laws

Law enforcement needs to recognize and get smarter about the reality of online harassment, so it can identify real threats to safety and protect people in danger—rather than going after community members criticizing police actions or kids who post rap lyrics on Facebook. Time-tested legal precepts (such as defamation law) should be thoughtfully applied to the online world; the fact that something is said online should neither be a complete shield against liability, nor an excuse to lower the bar for criminalizing speech. And courts must become comfortable with handling cases involving online behavior.

Empower Users, Really

Users should be empowered to act for themselves, rather than having to rely on corporate enforcement teams for protection. Tools for defending against harassment should be under the control of users, instead of depending on aggressive centralized content removal, which can be so easily misused. Platforms bear a responsibility to work on such features, but we expect—as ever—that the best solutions will come from users themselves.

How could technology help defend the harassed? Innovation is hard to predict, but here are some directions that user empowerment could take:

  • More powerful, user-controlled filtering of harassing messages. There are plenty of ideas already for how sites could allow more configurable blocking. If platforms aren’t willing to provide these tools, they should open their services so that others can. (A minimal sketch of what user-owned filtering might look like follows this list.)
  • Better ways for communities to collectively monitor for harassing behavior and respond to it—rather than, as now, placing the burden on individuals to police their own social media streams.
  • Automated tools that let people track and limit the availability of their personal information online (including in public data sources), so they can better defend themselves against threats of doxxing.
  • Tools that allow targets of harassment to preserve evidence in a way that law enforcement can understand and use. Abuse reports are currently designed for Internet companies’ internal processes, not the legal system. (A second sketch below illustrates one possible approach.)
  • Improved usability for anonymity- and pseudonymity-protecting tools. When speakers choose to be anonymous to protect themselves from offline harassment, they should be able to do so easily and without deep technical knowledge.
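
To make the first idea above concrete, here is a minimal sketch, in Python, of what user-owned filtering might look like. It is purely illustrative: the Message and UserFilter names are hypothetical, not any existing platform’s API. The design point is that the rules are data the user owns and can edit, rather than policy enforced by a centralized moderation team.

```python
# A minimal, purely illustrative sketch of user-controlled filtering.
# The blocklist and rules live with the user, not with the platform;
# all names here (Message, UserFilter) are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Message:
    sender: str
    text: str


@dataclass
class UserFilter:
    """Filter rules the user owns and can edit, export, or subscribe to."""
    blocked_senders: set = field(default_factory=set)
    blocked_phrases: set = field(default_factory=set)

    def allows(self, msg: Message) -> bool:
        """Return True if the message should be shown to the user."""
        if msg.sender in self.blocked_senders:
            return False
        lowered = msg.text.lower()
        return not any(phrase in lowered for phrase in self.blocked_phrases)

    def merge(self, other: "UserFilter") -> None:
        """Subscribe to a community-maintained list by merging its rules."""
        self.blocked_senders |= other.blocked_senders
        self.blocked_phrases |= other.blocked_phrases


# Usage: apply the user's own rules to an incoming stream of messages.
my_filter = UserFilter(blocked_senders={"harasser42"})
community_list = UserFilter(blocked_senders={"mob_account_1", "mob_account_2"})
my_filter.merge(community_list)

inbox = [Message("friend", "hello!"), Message("harasser42", "...")]
visible = [m for m in inbox if my_filter.allows(m)]
```

Because the filter is just data the user controls, it can be shared or subscribed to, which is also one way the community-monitoring idea above could be built.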
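
Evidence preservation, similarly, need not be exotic. The sketch below, again hypothetical and purely illustrative, records what was captured and when, plus a cryptographic digest so that later tampering with the record is detectable. A real tool would need to capture far more context, such as screenshots, headers, and account metadata, to be genuinely useful to law enforcement.

```python
# A hypothetical sketch of evidence preservation for harassment reports:
# record what was seen and when, plus a SHA-256 digest so the record's
# integrity can later be checked. Field names are illustrative only.

import hashlib
import json
from datetime import datetime, timezone


def preserve_evidence(platform: str, url: str, author: str, content: str) -> dict:
    """Build a tamper-evident record of a harassing message."""
    record = {
        "platform": platform,
        "url": url,
        "author": author,
        "content": content,
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }
    # Hash a canonical serialization so any later edit is detectable.
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    record["sha256"] = hashlib.sha256(canonical).hexdigest()
    return record


evidence = preserve_evidence(
    platform="example-social-site",
    url="https://example.com/post/123",
    author="harasser42",
    content="(threatening message text)",
)
print(json.dumps(evidence, indent=2))
```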

All these technical solutions are being worked on now, but their progress is sometimes limited by external factors. Major sites block tools like Tor out of fear of abuse, thereby locking out speakers too frightened to share their location. Social media platforms hinder new tool development by locking down APIs and restricting the third-party use of user content.

Widen the Pool of Toolmakers

The maintainers of social media platforms need to better understand the behavior that harassed individuals face, and the world of online toolmakers should better reflect the diversity of the Internet’s users. One of the best ways of doing that is to ensure that everybody online has the ability and the right to innovate, though centralized companies should widen their horizons too.

Embrace Counter-Speech

There’s nothing inconsistent in both loving free speech and speaking out against harassment. We support people who stand up and speak out against harassment in our own communities—especially those who can do so without becoming targets themselves. Making violent threats and engaging in mob abuse are not noble acts of free speech. Calling out such behavior is the right thing to do.

Looking Ahead

EFF will continue to be a staunch advocate for free speech and privacy online, because we sincerely believe those values protect everyone, including the most vulnerable. We’ll also remain critical of new regulation, and of handing over the reins of online policing to private corporations. We’ll continue to support the development and spread of technological solutions that can assist targets of harassment, by campaigning for user empowerment, innovation, and open networks. We’ll try to help directly with practical advice in resources like Surveillance Self-Defense, including creating resources that address the concerns of vulnerable groups. We know that we’re not the only ones concerned about this topic, and we’re happy that there are many other knowledgeable groups and individuals stepping up to fight harassment online.

Since EFF was founded in 1990, people around the world have come together to build an amazing set of tools that allow for more communication by more people than at any other time in history. The benefits of this digital revolution are tremendous and we’re still just scratching the surface. We’re also only beginning to understand how to mitigate its downsides. If the aims of online harassers are to silence and isolate their targets, we think the best opposition is to defend the very rights that let us innovate, work together, and speak up against abuse online.

  • 1. According to studies conducted by the Bureau of Justice Statistics in 2006-2009, the prevalence of stalking and related harassment of all kinds (including online harassment) in the United States varies by gender, age, income level, and race—with women, the young, the poor, and minority groups such as Native Americans and multi-racial families being more commonly affected. A recent Pew study of online harassment indicated that women between the ages of 18 and 24 are targeted for online harassment and stalking at a higher rate than other groups. The Pew survey also notes that in the United States, African-American and Hispanic Internet users report harassment at higher rates (54% and 51%, respectively) than white users (34%).