Policymakers around the world are contemplating a wide variety of proposals to address “harmful” online expression. Many of these proposals are dangerously misguided and will inevitably result in the censorship of all kinds of lawful and valuable expression. And one of the most dangerous proposals may be adopted in Canada. How bad is it? As Stanford’s Daphne Keller observes, “It’s like a list of the worst ideas around the world.” She’s right.

These ideas include:

  • broad “harmful content” categories that explicitly include speech that is legal but potentially upsetting or hurtful
  • a hair-trigger 24-hour takedown requirement (far too short for reasonable consideration of context and nuance)
  • an effective filtering requirement (the proposal says service providers must take reasonable measures which “may include” filters, but, in practice, compliance will require them)
  • penalties of up to 3 percent of a provider’s gross revenues or 10 million dollars, whichever is higher (illustrated in the short sketch after this list)
  • mandatory reporting of potentially harmful content (and the users who post it) to law enforcement and national security agencies
  • website blocking (platforms deemed to have violated some of the proposal’s requirements too often might be blocked completely by Canadian ISPs)
  • onerous data-retention obligations
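
To make the “whichever is higher” penalty cap concrete, here is a minimal sketch in Python (a hypothetical illustration; the only figures taken from the proposal are the 3 percent rate and the 10-million-dollar floor). Under those two numbers, the flat floor governs smaller providers, and the percentage branch takes over once gross revenues exceed roughly 333 million dollars.

```python
def max_penalty(gross_revenues: float) -> float:
    """Upper bound on a penalty as described above: the greater of
    3% of a provider's gross revenues or a flat $10 million."""
    return max(0.03 * gross_revenues, 10_000_000.0)

# The 3% branch overtakes the flat floor once revenues exceed
# 10,000,000 / 0.03, i.e. roughly $333 million.
for revenues in (50e6, 400e6, 5e9):  # hypothetical revenue figures
    print(f"revenues ${revenues:,.0f} -> max penalty ${max_penalty(revenues):,.0f}")
```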

All of this is terrible, but perhaps the most terrifying aspect of the proposal is that it would create a new internet speech czar with broad powers both to enforce compliance and to continuously redefine what compliance means.

These powers include the right to enter and inspect any place (other than a home):

“in which they believe on reasonable grounds there is any document, information or any other thing, including computer algorithms and software, relevant to the purpose of verifying compliance and preventing non-compliance . . . and examine the document, information or thing or remove it for examination or reproduction”; to hold hearings in response to public complaints; and to “do any act or thing . . . necessary to ensure compliance.”

But don’t worry: service providers can avoid having their doors kicked in by coordinating with the speech police, who will give them “advice” on their content moderation practices. Follow that advice and you may be safe. Ignore it and be prepared to forfeit your computers and millions of dollars.

The potential harms here are vast, and they'll only grow because so much of the regulation is left open. For example, platforms will likely be forced to rely on automated filters to discover and assess “harmful” content, and users caught up in these sweeps could end up on file with the local cops, or with Canada’s national security agencies, thanks to the proposed reporting obligations.

Private communications are nominally excluded, but that is cold comfort: the Canadian government may decide, as other countries have contemplated, that encrypted chat groups of a certain size are not “private.” If so, end-to-end encryption will be under further threat, with platforms pressured to undermine the security and integrity of their services in order to fulfill their filtering obligations. And regulators will likely demand that Apple expand its controversial new image assessment tool to address the broad “harmful content” categories covered by the proposal.

In the United States and elsewhere, we have seen how rules like this hurt marginalized groups, both online and offline. Faced with expansive and vague moderation obligations, little time for analysis, and major legal consequences if they guess wrong, companies inevitably overcensor—and users pay the price.

For example, a U.S. law intended to penalize sites that hosted speech related to child sexual abuse and trafficking led internet platforms large and small to censor broad swaths of speech with adult content. The consequences of this censorship have been devastating for marginalized communities and the groups that serve them, especially organizations that provide support and services to victims of trafficking and child abuse, sex workers, and groups and individuals promoting sexual freedom. Among other things, the law prevented sex workers from organizing and using tools that had kept them safe. Taking away online forums, client-screening capabilities, “bad date” lists, and other intra-community safety tips put more workers on the street, at higher risk, leading to increased violence and trafficking. The impact was particularly harmful for trans women of color, who are disproportionately affected by this violence.

Indeed, even “voluntary” content moderation rules are dangerous. For example, policies against hate speech have shut down online conversations about racism and the harassment of people of color. Ambiguous “community standards” have prevented Black Lives Matter activists from showing the world the racist messages they receive. Rules against depictions of violence have removed reports about the Syrian war and accounts of human rights abuses against Myanmar’s Rohingya. These voices, and the voices of Aboriginal women in Australia, Dakota Access Pipeline protesters, and many others, are being erased online. Their stories and images of mass arrests, military attacks, racism, and genocide are being flagged for takedown.

The powerless struggle to be heard in the first place; platform censorship ensures they won’t be able to take full advantage of online spaces either.

Professor Michael Geist, who has been doing crucial work covering this and other bad internet proposals coming out of Canada, notes that the government has shown little interest in hearing what Canadians think of the plans. Nonetheless, the government says it is taking comments, and we hope Canadians will flood it with responses.

But it's not just Canadians who need to worry about this. Dangerous proposals in one country have a way of inspiring other nations' policymakers to follow suit—especially if those bad ideas come from widely respected democratic countries like Canada.

Indeed, it seems the drafters of this policy looked to other countries for inspiration themselves, but ignored the criticism those policies have received from human rights defenders, the UN, and a wide range of civil society groups. For example, the content monitoring obligations echo proposals in India and the UK that have been widely criticized by civil society, not to mention three UN Rapporteurs. The Canadian proposal seeks to import the worst aspects of Germany’s Network Enforcement Act (“NetzDG”), which deputizes private companies to police the internet on a rushed timeline that precludes any hope of balanced legal analysis, and which has led to takedowns of innocuous posts and satirical content. The law has been heavily criticized in Germany and abroad, and experts say it conflicts with the EU’s central internet regulation, the E-Commerce Directive. Canada’s proposal also bears a striking similarity to France’s “hate speech” law, which was struck down as unconstitutional.

These regulations, like Canada’s, depart significantly from the more sensible, if still imperfect, approach being contemplated in the European Union’s Digital Services Act (DSA). The DSA sets limits on content removal and allows users to challenge censorship decisions. Although it contains some worrying elements that could result in over-blocking of content, the DSA doesn’t follow in the footsteps of other disastrous European internet legislation that has endangered freedom of expression by forcing platforms to monitor and censor what users say or upload online.

Canada also appears to have lost sight of its trade obligations. In 2018, Canada, the United States, and Mexico finalized the USMCA agreement, an updated version of NAFTA. Article 19.17 of the USMCA prohibits treating platforms as the originators of content when determining liability for information harms. But this proposal does precisely that: in multiple ways, a platform’s legal risk depends on whether it properly identifies and removes harmful content it had no part in creating.

Ironically, perhaps, the proposal would also further entrench the power of U.S. tech giants over social media, because they are the only ones who can afford to comply with these complex and draconian obligations.

Finally, the regulatory scheme would depart from settled human rights norms. Article 19 of the International Covenant on Civil and Political Rights allows states to limit freedom of expression only in select circumstances, and only if the restrictions satisfy a three-part test: they must be prescribed by law, pursue a legitimate aim, and be necessary and proportionate. Limitations must also be interpreted and applied narrowly.

Canada’s proposal falls far short of meeting these criteria. The UN Special Rapporteur on free expression has called upon companies to recognize human rights law as the authoritative global standard for freedom of expression on their platforms. It’s profoundly disappointing to see Canada force companies to violate human rights law instead.

This proposal is dangerous to internet speech, privacy, security, and competition. We hope our friends in the Great White North agree, and raise their voices to send it to the scrap heap of bad internet ideas from around the globe.
