For the first time, Facebook has published detailed information about how it enforces its own community standards. On Tuesday, the company announced the release of its Community Standards Enforcement Preliminary Report, covering enforcement efforts between October 2017 and March 2018 in six areas: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam, and fake accounts.

Facebook follows YouTube in releasing content enforcement numbers; last month, the video-sharing platform put out its first transparency report on community guidelines enforcement, showing the total number of videos taken down, the percentage of videos removed after being flagged by automated tools, and other details.

What’s good

The publication marks a sea change in how companies approach transparency reporting and is a good first step. Although advocates have long pushed for Facebook and other social media platforms to release details on how they enforce their guidelines—culminating in the recently released Santa Clara Principles on Transparency and Accountability in Content Moderation—companies have largely been reluctant to publish those numbers. Pressure from advocacy organizations, academics, and other members of civil society undoubtedly helped bring us to this moment.

The report aims to answer four questions for each of the six areas listed above: the prevalence of Community Standards violations; the amount of content on which action is taken; the amount of violating content found and flagged by automated systems and human moderators before users report it; and how quickly the company takes action on violations.
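To make those four measures concrete, here is a minimal sketch of how such metrics could be tallied from per-item moderation records. The record fields, categories, and aggregation logic are illustrative assumptions for this sketch, not Facebook’s actual data model or pipeline.

```python
# Illustrative only: a toy aggregation of the report's metrics from
# hypothetical per-item moderation records (not Facebook's real pipeline).
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ModerationAction:
    category: str            # e.g. "graphic_violence", "hate_speech"
    flagged_by: str          # "automated", "moderator", or "user_report"
    hours_to_action: float   # time from flagging to enforcement


def summarize(actions: List[ModerationAction], category: str) -> dict:
    items = [a for a in actions if a.category == category]
    total = len(items)
    # Items found by Facebook's own systems or staff before any user report.
    proactive = sum(1 for a in items if a.flagged_by in ("automated", "moderator"))
    hours = sorted(a.hours_to_action for a in items)
    median_hours: Optional[float] = hours[total // 2] if total else None
    return {
        "content_actioned": total,
        "proactive_rate": proactive / total if total else 0.0,
        # A crude stand-in for "how quickly" -- a figure the report does not yet publish.
        "median_hours_to_action": median_hours,
    }


if __name__ == "__main__":
    sample = [
        ModerationAction("graphic_violence", "automated", 2.5),
        ModerationAction("graphic_violence", "user_report", 11.0),
        ModerationAction("hate_speech", "user_report", 30.0),
    ]
    print(summarize(sample, "graphic_violence"))
```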

Looking at the first of the six categories—graphic violence—as an example, some of the numbers are staggering. In the first three months of this year, Facebook took action on more than 3 million pieces of content, up from just over 1 million in the last three months of 2017. The company notes that disparities in numbers can be affected by external factors—“such as real-world events that depict graphic violence”—and internal factors, such as the effectiveness of its technology at finding violations. Facebook also offers insight into this roughly threefold increase in the first quarter, noting that its photo-matching software is now used to cover certain graphic images with warnings.
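Facebook doesn’t describe how its photo matching works, but the general technique is well known: compare a perceptual hash of each newly uploaded image against hashes of previously identified images. Below is a generic sketch using the third-party Python `imagehash` and Pillow libraries; the file names and threshold are hypothetical, and the code only illustrates the idea, not Facebook’s proprietary system.

```python
# Generic perceptual-hash matching sketch (not Facebook's actual matcher).
from PIL import Image
import imagehash

# Hashes of images previously labeled as graphic (hypothetical file names).
KNOWN_GRAPHIC_HASHES = [
    imagehash.phash(Image.open(path))
    for path in ["flagged_image_1.jpg", "flagged_image_2.jpg"]
]

MAX_HAMMING_DISTANCE = 8  # tunable similarity threshold


def needs_warning(upload_path: str) -> bool:
    """Return True if the upload closely matches a known graphic image."""
    upload_hash = imagehash.phash(Image.open(upload_path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(upload_hash - known <= MAX_HAMMING_DISTANCE
               for known in KNOWN_GRAPHIC_HASHES)


if __name__ == "__main__":
    if needs_warning("new_upload.jpg"):
        print("Cover with a graphic-content warning before display.")
```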

The metrics offer a fascinating look into the capabilities of automated systems. When it comes to imagery—be it graphic violence or sexually explicit content—Facebook’s success rate in detecting and flagging content before users report it is remarkably high: well over 90% in every category except hate speech, where the company detected only 38% of violating content in the first quarter. This makes sense: unlike imagery, Standards-violating speech is more complicated to detect and often requires the nuanced eye of a human moderator. It’s a good thing that the company isn’t relying solely on technology here.

What’s not-so-good

Although Facebook’s content enforcement report offers an unprecedented look into how the company adjudicates certain types of content, it still leaves much to be desired. First, the Santa Clara Principles offer guidance on other details that free speech advocates would like to see reported, such as the source of flagging (i.e., governments, users, trusted flaggers, and different types of automated systems).

Second, the report covers how the company handles content that violates the rules, but fails to address how its moderators and automated systems can get the rules wrong, taking down content that doesn’t actually violate the Community Standards. Now that Facebook has begun offering appeals, its next report could set a new standard by also including the number of appeals that resulted in content being restored.

The report repeatedly refers to the company taking “action,” but only clarifies what that means in a separate document linked from the report (for the record, it’s a little better than it sounds: “taking action” might mean removing the content, disabling the account, or merely covering the content with a warning).

Furthermore, while the introduction to the report states that it will address how quickly the company takes action on a given item, it doesn’t really do that, at least not in terms of time. Instead, that metric seems to refer to whether Facebook identifies and flags content before users do, and even this metric is listed as “not yet available.”

Savvy readers will notice that in the report, Facebook conflates violations of their “authentic identity” rule with impersonation and other fake accounts. While they note that “[b]ad actors try to create fake accounts in large volumes automatically using scripts or bots,” it would be useful to understand how many users are still being kicked off the service for more benign violations of the company’s “authentic identity” policy, such as using a partial name, a performance name, or another persistent pseudonym.

Finally, transparency isn’t just about reports. Facebook still must become more accountable to its users, notifying them clearly when they violate a rule and demonstrating which rule was violated. Overall, Facebook’s report (and YouTube’s before it) is a step in the right direction, but advocates should continue to demand more.
