Every major social media platform—from Facebook to Reddit, Instagram to YouTube—moderates and polices content shared by users. Platforms do so as a matter of self-interest, commercial or otherwise. But platforms also moderate user content in response to pressure from a variety of interest groups and governments.

As a consequence, social media platforms have become the arbiters of speech online, defining what may or may not be said and shared by taking down content accordingly. As the public has grown increasingly aware and critical of the paramount role private companies play in shaping the freedom of expression of millions of users online, social media companies have come under mounting pressure to account for their content moderation practices. In response to these demands, and partially to fulfill legal requirements stipulated by regulations like Germany’s NetzDG, Facebook and other social media companies publish detailed ‘transparency reports’ meant to give some insight into their moderation practices.

‘Transparency’ does not always equal transparency

Facebook’s most recent Community Standards Enforcement Report, which was released in August and also covers its subsidiary Instagram, is emblematic of some of the deficits of companies reporting on their own content moderation practices. The report gives a rough overview of the number of posts deleted, broken down according to the 10 policy areas Facebook uses to categorize speech (it is unclear whether Facebook uses more granular categories internally). These categories can differ between Facebook and Instagram. For each category, Facebook also reports how prevalent content of that type is on its platforms, what percentage of allegedly infringing content was removed before it was reported by users, and how many pieces of supposedly problematic content were later restored.

But content moderation, and its impact, is always contextual. While Facebook’s sterile reporting of numbers and percentages can give a rough estimate of how many pieces of content in which categories get removed, it does not tell us why or how these decisions are made. Facebook’s approach to transparency thus misses the mark: actual transparency should allow outsiders to see and understand what actions are performed, and why. Meaningful transparency inherently implies openness and accountability, and cannot be satisfied by simply counting takedowns. In other words, there is a difference between corporately sanctioned ‘transparency,’ which is inherently limited, and meaningful transparency that empowers users to understand Facebook’s actions and hold the company accountable.

This is especially relevant in light of the fundamental shift in Facebook’s approach to content moderation during the COVID-19 pandemic. As companies became unable to rely on their human content moderators, Facebook, Twitter, and YouTube began relying much more heavily on automated moderation tools—despite the documented inability of AI tools to consistently and correctly judge the social, cultural, and political context of speech. As social media platforms ramp up their use of automated content moderation tools, it is all the more crucial that they provide actual explanations of how these technologies shape (and limit) people’s online experiences.

True transparency must provide context

So what would a meaningful transparency report look like? First of all, it should clarify the basics: How many human moderators are there, and how are they distributed across the languages spoken on the platform? Are there languages for which no native speaker is available to judge the context of speech? Such information is important in order to help understand (and avoid!) crises like the one in Myanmar, where Facebook’s inability to detect hate speech directed against the Rohingya contributed to widespread violence.

Real transparency also cannot stop short of shedding light on the black boxes that algorithmic content moderation systems appear to be from the outside. In order to give users agency vis-à-vis automated tools, companies should explain what kinds of technology and inputs are used at which points in the content moderation process. Is such technology used to automatically flag suspicious content? Or is it also used to judge and categorize flagged content? When users appeal content takedowns, to what extent are they dealing with automated chatbots, and when are complaints reviewed by humans? Users should also be able to understand the relationship between human and automated review—are humans just ‘in the loop’, or do they exercise real oversight and control over automated systems?

Another important pillar of meaningful transparency is the set of policies that form the basis for content takedowns. Social media companies often develop these policies without much external input, and adjust them constantly. Platforms’ terms of service or community guidelines also usually don’t go into great detail or provide examples that clearly delineate acceptable speech. Transparency reports could, for example, include information on how moderation policies are developed, whether external experts or stakeholders have contributed and which ones, how often the policies are amended, and to what extent.

Closely related: transparency reports should describe and explain how human and machine-based moderators are trained to recognize infringing content. In many cases, what is at stake is the difference between, for example, incitement to terrorism and counterspeech against extremism. For moderators, who must work quickly, the line between the two can be difficult to judge and depends on the context of the statement in question. That’s why it’s crucial to understand how platforms prepare their moderators to recognize and correctly judge such nuances.

Real transparency is empowering, not impersonal

Meaningful transparency should empower people to understand how a social media platform is governed, to know their rights according to that governance model, and to hold companies accountable whenever they transgress it. We welcome companies’ efforts to at least offer a glimpse into their content moderation machine room. But they still have a long way to go.

This is why we have undertaken a review of the Santa Clara Principles on Accountability and Transparency in Content Moderation, which is currently underway. We look forward to sharing the conclusions of our research and contributing to the future of corporate transparency.