It seems like every week there’s another Big Tech hearing accompanied by a flurry of mostly bad ideas for reform. Two events set last week’s hubbub apart, both involving Facebook. First, Mark Zuckerberg took a new step in his blatant effort to use Section 230 reform to entrench Facebook’s dominance. Second, new reports demonstrate, if further demonstration were needed, how badly Facebook is failing to police the content on its platform with any consistency whatsoever. The overall message is clear: if content moderation doesn’t work even with the kind of resources Facebook has, it won’t work anywhere.

Inconsistent Policies Harm Speech in Ways That Are Exacerbated the Further Along the Stack You Go

Facebook has been swearing for many months that it will do a better job of rooting out “dangerous content.” But a new report from the Tech Transparency Project demonstrates that it is failing miserably. Last August, Facebook banned some militant groups and other extremist movements tied to violence in the U.S. Yet the platform is still expanding those groups’ reach by automatically creating new pages for them and directing people who “like” certain militia pages to check out others, effectively helping these movements recruit and radicalize new members.

These groups often share images of guns and violence, misinformation about the pandemic, and racist memes targeting Black Lives Matter activists. QAnon pages also remain live despite Facebook’s claim to have taken them down last fall. Meanwhile, a new leak of Facebook’s internal guidelines shows how much the company struggles to come up with consistent rules for users living under repressive governments. For example, the company forbids “dangerous organizations” (including, but not limited to, designated terrorist organizations) but allows users in certain countries to praise mass murderers and “violent non-state actors” (designated militant groups that do not target civilians) unless their posts contain an explicit reference to violence.

A Facebook spokesperson told the Guardian: “We recognise that in conflict zones some violent non-state actors provide key services and negotiate with governments – so we enable praise around those non-violent activities but do not allow praise for violence by these groups.”

The problem is not that Facebook is trying to create space for some speech; if anything, it should do more of that. The problem is that its current approach is incoherent. Like other platforms, Facebook does not base its guidelines on international human rights frameworks, nor do the guidelines necessarily adhere to local laws and regulations. Instead, they seem to be based on whatever Facebook policymakers think is best.

The capricious nature of the guidelines is especially clear with respect to LGBTQ+ content. For example, Facebook has limited use of the rainbow “like” button in certain regions, including the Middle East, ostensibly to keep users there safe. In reality, this denies members of the local LGBTQ+ community the same range of expression as other users, and it is hypocritical given that Facebook refuses to bend its “authentic names” policy to protect those same users.

Whatever Facebook’s intent, in practice, it is taking sides in a region that it doesn’t seem to understand. Or as Lebanese researcher Azza El Masri put it on Twitter: “The directive to let pro-violent/terrorist content up in Myanmar, MENA, and other regions while critical content gets routinely taken down shows the extent to which [Facebook] is willing to go to appease our oppressors.”

This is not the only example of a social media company making inconsistent decisions about what expression to allow. Twitter, for instance, bans alcohol advertising in every Arab country, including several (such as Lebanon and Egypt) where the practice is perfectly legal. Microsoft’s Bing once restricted sexual search terms across the entire region, despite not being asked by any government to do so.

Now imagine the same kinds of policies being applied to internet access. Or website hosting. Or cloud storage.

All the Resources in the World Can’t Make Content Moderation Work at Scale

Facebook’s lopsided policies deserve critique, but they also point to a larger problem that too much focus on specific policies misses: if Facebook, with the money to hire thousands of moderators, implement filters, and fund an Oversight Board, can’t manage to develop and apply a consistent, coherent, and transparent moderation policy, maybe we should finally admit that we can’t look to social media platforms to solve deep-seated political problems, and we should stop trying.

Even more importantly, we should call a halt to any effort to extend this mess beyond platforms. If two decades of experience with social media has taught us anything, it is that these companies are bad at creating and implementing consistent, coherent policies. At least when a social media company makes an error in judgment, its impact is relatively limited. At the infrastructure level, however, those decisions necessarily hit harder and wider. If an internet service provider (ISP) cut off access for LGBTQ+ users based on the same capricious whims as Facebook, it would be a disaster.

What Infrastructure Companies Can Learn

The full infrastructure of the internet, or the “full stack,” is made up of companies and intermediaries that range from consumer-facing platforms like Facebook or Pinterest to ISPs like Comcast or AT&T. Somewhere in the middle are a wide array of intermediaries, such as upstream hosts like Amazon Web Services (AWS), domain name registrars, certificate authorities (such as Let’s Encrypt), content delivery networks (CDNs), payment processors, and email services.

For most of us, most of the stack is invisible. We send email, tweet, post, upload photos, and read blog posts without thinking about all the services that have to function to get content from its original creator onto the internet and in front of users’ eyeballs all over the world. We may think about our ISP when it gets slow or breaks, but day-to-day, most of us don’t think about AWS at all. We are far more aware of the content moderation decisions (and mistakes) made by the consumer-facing platforms.

We have detailed many times the chilling effects and other problems that flow from opaque, bad, or inconsistent content moderation decisions by companies like Facebook. But when ISPs or intermediaries decide to wade into the content moderation game and start blocking certain users and sites, it’s far worse. For one thing, many of these services have few, if any, competitors. For example, too many people in the United States and overseas have only one choice for an ISP. If the only broadband provider in your area cuts you off because they (or your government) didn’t like what you said online, or what someone else whose name is on the account said, how can you get back online? Further, at the infrastructure level, services usually cannot target their response narrowly. Twitter can shut down individual accounts; when those users migrate to Parler and continue to engage in offensive speech, AWS can only deny service to the entire site, including speech that is entirely unobjectionable. And that is exactly why ISPs and intermediaries need to stay away from this fight entirely. The risks of getting it wrong at the infrastructure level are far too great.

It is easy to understand why repressive governments (and some advocates) want to pressure ISPs and intermediaries in the stack to moderate content: it is a broad, blunt, and effective way to silence certain voices. Some intermediaries might also feel compelled to moderate aggressively in the hopes of staving off criticism down the line. As last week’s hearing showed, this tactic will not work. The only way to avoid the pressure is to stake out an entirely different approach.

To be clear, in the United States, businesses have a constitutional right to decide what content they want to host. That’s why lawmakers tempted to punish intermediaries beyond platforms in the stack for their content moderation decisions would face the same kind of First Amendment problems as any other attempt to meddle with speech rights.

But just because something is legally permissible does not mean it is the right thing to do, especially when implementation will vary depending on who is asking for it, and when. Content moderation is empirically impossible to do well at scale; given the impact of the inevitable mistakes, ISPs and infrastructure intermediaries should not try. Instead, they should reject pressure to moderate like platforms and make clear that they are much more like the local power company. If you wouldn’t want the power company shutting off service to a house just because someone doesn’t like what’s going on inside, you shouldn’t want a domain name registrar freezing a domain name because someone doesn’t like a site, or an ISP shutting down an account. And if you wouldn’t hold the power company responsible for behavior you don’t like just because that behavior relied on electricity, you shouldn’t hold an ISP, domain name registrar, CDN, or other intermediary responsible for behavior or speech that relies on their services either.

If more than two decades of social media content moderation has taught us anything, it is that we cannot tech our way out of a fundamentally political problem. Social media companies have tried and failed to do so; companies elsewhere in the stack should refuse to replicate those failures.
