The upcoming U.S. elections have drawn broad attention to many of the questions civil society has struggled with for years: what should companies do about misinformation and hate speech? And what, specifically, should be done when that speech comes from the world’s most powerful leaders?

Silicon Valley companies and U.S. policymakers all too often view these questions through a myopic lens, focusing on troubles at home as if they were new—when there are countless lessons to be learned from their actions in the rest of the world. The U.S. is not the first nation to grapple with election-related misinformation, nor are its elections, past or present, the only times the major platforms have had to deal with it.

When false news leads to false takedowns

As we noted recently, even the most well-meaning efforts to control misinformation often have the effect of silencing key voices, as happened earlier this year when Facebook partnered with a firm to counter election misinformation emanating from Tunisia. At the time, we joined a coalition in asking Facebook to be transparent about its decision-making and to explain how it had mistakenly identified some of Tunisia’s key thought leaders as bots—but we’re still waiting for answers.

More recently, Nigerians using Instagram to participate in the country’s #ENDSARS movement found their speech removed without warning—again, victims of overzealous moderation of misinformation. Facebook, which owns Instagram, partners with fact-checkers to counter misinformation—a good idea in theory, but in practice, strong independent oversight seems increasingly necessary to ensure that mistakes like this don’t become de rigueur.

Dangerous speech

Many observers in the U.S. fear violence in the event of a contested election. Social media platforms have responded by making myriad policy changes, sometimes with little clarity or consistency and, for some critics, with too little meaningful effect. For example, Twitter announced last year that it would no longer serve political ads, and in May, after years of criticism, the company began labeling tweets from President Trump that contained misinformation.

Meanwhile, social media users elsewhere are subjected to even more dangerous misinformation from their countries’ leaders, with little or slow response from the platforms that host it. And that inaction has proven dangerous. In the Philippines, where politicians regularly engage in disinformation on social media platforms (and where Facebook is the most popular virtual space for political discourse), a phenomenon called “red tagging” has emerged, in which individuals are falsely labeled as communists on a list put out by the country’s Department of Justice and circulated on social media. Although the Philippine DOJ rolled back many of the accusations, they continued to circulate—with one recent incident ending in violence against the accused.

We support the right of platforms to curate content as they see fit, and it is understandable for them to want to remove violent incitement—which can cause real-world harm rapidly if allowed to proliferate, particularly when it comes from public figures and politicians. But if they are going to take those steps, they should do so consistently. While the media has spent months debating whether Trump’s tweets should be removed, very little attention has been paid to those places in which violent incitement from politicians is resulting in actual violence—including in the Philippines, India, Sri Lanka, and Myanmar.

Of course, when the violence comes from the state itself, the state cannot be trusted to mitigate its harms—which is where platforms can play a crucial role in protecting human rights. Former Special Rapporteur on Freedom of Expression David Kaye proposed guidelines to the UN General Assembly in 2019 for companies dealing with dangerous speech: any decisions should meet the tests of necessity and proportionality, and should be made within the context of existing human rights frameworks.

There are numerous projects researching the impact of violent incitement on social media. The Dangerous Speech Project focuses on speech that has the ability to incite real-world violence, while the Early Warning Project conducts risk assessments to provide early warnings of where online rhetoric may lead to offline violence. There is also ample research to suggest that traditional media in the U.S. is the biggest vector for misinformation.

A key lesson here is that the current strategy of most Silicon Valley platforms—that is, treating politicians as a class apart—is both unfair and unhelpful. As we’ve said before, politicians’ speech can have more severe consequences than that of the average person—which is why companies should apply their rules consistently to all users.

Companies must listen to their users...everywhere

But perhaps the biggest lesson of all is that companies need to listen to their users all over the world and work with local partners to look for solutions that suit the local context. Big Tech cannot simply assume that ideas that work (or that don’t work) in the United States will work everywhere. It is imperative that tech companies stop viewing the world through the lens of American culture, and start seeing it in all its complexity.

Finally, companies should adhere to the Santa Clara Principles on Transparency and Accountability in Content Moderation and provide users with transparency, notice, and appeals in every instance, including for misinformation and violent content. With more moderation inevitably comes more mistakes; the Santa Clara Principles are a crucial step toward addressing and mitigating those mistakes fairly.
