EFF is deeply saddened and disturbed by the massacre in New Zealand. We offer our condolences to the survivors and families of victims.

This horrific event had an online component: one gunman livestreamed the attack, and he appears to have had an active and hateful online presence. Most web platforms, enforcing their terms of use, appear to have removed the horrendous video and related content.

Incidents involving extreme violence invite hard questions about how platforms can enforce their policies without unfairly silencing innocent voices. Online platforms have the right to remove speech that violates their community standards, as is happening here.

But times of tragedy often bring calls for platforms to ramp up their speech-policing practices. Those practices often expand to silence legitimate voices—including those that have long sought to overcome marginalization.

It’s understandable to call for more aggressive moderation policies in the face of horrifying crimes. Unfortunately, history has shown that those proposals frequently backfire. When platforms over-censor, they often disproportionately silence the speech of their most vulnerable, at-risk users.

Egyptian journalist and anti-torture advocate Wael Abbas was kicked off YouTube for posting videos of police brutality, and Twitter suspended his account, which contained thousands of photos, videos, and livestreams documenting human rights abuses. In 2017, YouTube inadvertently removed thousands of videos used by human rights groups to document atrocities in Syria. It is difficult to draw lines between the speech of violent extremists and that of people commenting on, criticizing, or defending themselves against such attacks. It's far more difficult to make those judgment calls at the scale of a large Internet platform.

To make matters worse, bad actors often exploit overly restrictive rules to censor innocent people, frequently the very members of society most targeted by organized hate groups. It's not just 8chan-style trolls, either: state actors have systematically abused Facebook's flagging process to censor political enemies. On today's Internet, a platform that creates a takedown mechanism without carefully considering how that mechanism invites abuse risks doing more harm than good. And attempts to use government pressure to push platforms to police speech more exhaustively inevitably result in more censorship than intended.

Along with the American Civil Liberties Union, the Center for Democracy and Technology, and several other organizations and experts, EFF endorses the Santa Clara Principles, a simple set of guidelines for how online platforms should handle removal of speech. Simply put, the Principles say that platforms should:

  • provide transparent data about how many posts and accounts they remove;
  • notify users whose content has been removed about what was removed and under which rule; and
  • give those users a meaningful opportunity to appeal the decision.

The Santa Clara Principles help ensure that platforms' content moderation decisions are consistent with human rights standards. Content moderation is one of the most difficult problems on the Internet today. Well-meaning platforms and organizations may disagree on specific community standards, but we should all work together to ensure that those rules aren't wielded against the most vulnerable members of society.