Two weeks ago, Gawker’s Adrian Chen published a leaked copy of Facebook’s Operations Manual for Live Content Moderators, which the company uses to implement the rules and guidelines that determine which content will be allowed on the platform. The document was widely ridiculed for a variety of reasons, from the attitudes it expresses toward sex and nudity (photos containing female nipples are banned, as is any “blatant (obvious) depiction of camel toes or moose knuckles”), to its lenient attitude toward gore (crushed heads and limbs are permitted “so long as no insides are showing”), to its arbitrary ban on photos depicting drunk, unconscious, or sleeping people with things drawn on their faces.

Facebook has a long history of banning, among other things, sexual content, which has angered many users over the years. In 2009, more than 11,000 Facebook users participated in a virtual “nurse-in,” changing their profile pictures to photos of women breastfeeding in response to Facebook’s practice of taking down such photos under its obscenity guidelines. In May 2011, Facebook deleted a picture of a gay couple kissing because it allegedly violated their community standards, prompting widespread outrage from gay rights groups and an apology from Facebook, which reinstated the photo.

The leaked document also gave some insight into Facebook’s processes for complying with international law. As Chen writes:

Perhaps most intriguing is the category dedicated to "international compliance." Under this category, any holocaust denial which "focuses on hate speech," all attacks on the founder of Turkey, Ataturk, and burning of Turkish flags must be escalated. This is likely to keep Facebook in line with international laws; in many European countries, holocaust denial is outlawed, as are attacks on Attaturk in Turkey.

Unlike Google and Twitter, Facebook does not have the ability to take down content on a country-by-country basis. If they take down something in response to the laws of one country, it is taken down for everyone. So if you criticize Ataturk on Facebook, even if you are located in the United States, you are out of luck.

NOTE: Facebook tells us that this paragraph is mistaken about how they do their takedowns. We apologize for the error.
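Setting aside how any particular company actually handles this, the underlying distinction is worth spelling out: a platform can either withhold content only for viewers in the jurisdiction whose law requires it, or remove it for everyone, everywhere. The short sketch below illustrates the two models; the class, method names, and country codes are purely hypothetical assumptions of ours, not any platform’s real data model or API.

```python
# Purely illustrative: a hypothetical content store contrasting two takedown
# models. The names and structures here are our own assumptions, not any
# platform's real data model or API.

class Post:
    def __init__(self, post_id, body):
        self.post_id = post_id
        self.body = body
        self.removed_globally = False  # "taken down for everyone" model
        self.withheld_in = set()       # country-by-country withholding model
        self.reason = None

    def take_down_globally(self, reason):
        """Remove the post for all viewers, wherever they are."""
        self.removed_globally = True
        self.reason = reason

    def withhold_in_country(self, country_code, reason):
        """Hide the post only for viewers in the named jurisdiction."""
        self.withheld_in.add(country_code)
        self.reason = reason

    def visible_to(self, viewer_country):
        """A post is visible only if it is neither globally removed nor
        withheld in the viewer's country."""
        if self.removed_globally:
            return False
        return viewer_country not in self.withheld_in


if __name__ == "__main__":
    post = Post("123", "Commentary critical of a national founder")

    # Country-by-country model: only viewers in "TR" lose access.
    post.withhold_in_country("TR", reason="local law")
    print(post.visible_to("US"))  # True
    print(post.visible_to("TR"))  # False

    # Global model: one country's law ends up governing every viewer.
    post.take_down_globally(reason="local law")
    print(post.visible_to("US"))  # False
```

Under the first model, a post that offends one country’s law disappears only for viewers in that country; under the second, that country’s law effectively sets the rules for every user worldwide.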

Shortly after the Facebook leak, blogging platform Tumblr published a draft copy of a policy against blogs that “actively promote self harm,” including eating disorders, sparking intense debate in the Tumblr community. Users expressed concern that the policy could lead to the deletion of blogs that merely discuss self-harm. One user observed that the line between discussion and glorification is blurry and subjective:

“…where does Tumblr plan to draw the line between what is acceptable and what is not? There are no clear cut specifics as to what you will and will not able to post, so how are we as the users of this website supposed to follow this new policy if put into effect. How is the staff going to determine a person’s definition of “promoting” when everyone has a different view on what should and should not be tolerated? Some users may believe that pictures or even general posts about these issues are a means of promoting them, yet others may see these pictures and posts as nothing more than another post on their dash.”

To be clear, Facebook and Tumblr have a right to decide what kinds of content they allow on their platforms. They are private companies and can generally control and limit the speech they host without regard to the First Amendment or similar constraints. But content policies run the risk of angering and alienating longtime users, and they tend to become an increasing burden over time, because the decision to police one topic leads to pressure to police more topics. Enforcing them also requires deep training, so that the people involved can recognize context and be sensitive to ambiguity. For the same reason, these policies are very difficult to automate.

Facebook, at least, does not seem prepared to properly train and sensitize the people who will be responsible for taking down content on their platform. Instead, they appear to be relying on an underpaid army of inexperienced content moderators, a choice that seems likely to lead to inconsistent and even unfair implementation of the policies. It’s not hard to imagine a moderator who fails to appreciate the difference between commentary and promotion, or even one who uses his or her takedown power to play out a personal grudge or political belief. Even well-intentioned moderators may become overwhelmed by the sheer volume of material on a platform the size of Facebook.

NOTE: After speaking with Facebook, we decided to remove this paragraph.

The simple fact is that there will be mistakes and misuses of any content review system, even if the companies invest in more training. As a result, it is not enough for companies to simply implement takedown rules—they must develop a robust, easy-to-use avenue for error correction, misuse detection, and appeal. For more recommendations on creating and implementing rights-respecting content moderation guidelines, read the Berkman Center's Account Deactivation and Content Removal: Guiding Principles and Practices for Companies and Users.
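As a rough illustration of what such an avenue for error correction and appeal might need to record, the hypothetical sketch below tracks which rule was applied, who applied it, and how an appeal was resolved. The fields and workflow are our own assumptions for illustration only, not a description of any company’s actual system and not a substitute for the Berkman Center’s recommendations.

```python
# Purely illustrative: a hypothetical record of a single moderation decision
# with an appeal trail. The fields and workflow below are our own assumptions,
# not a description of any company's actual system.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class TakedownDecision:
    content_id: str
    rule_applied: str      # which written guideline was invoked
    moderator_id: str      # who made the call, so misuse can be audited
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    appeal_filed: bool = False
    appeal_outcome: Optional[str] = None

    def file_appeal(self) -> None:
        """Give users a simple, obvious way to flag a decision as a mistake."""
        self.appeal_filed = True

    def resolve_appeal(self, reviewer_id: str, reinstate: bool) -> bool:
        """Have a second, independent reviewer either reinstate the content or
        uphold the original call, leaving an auditable record either way."""
        self.appeal_outcome = (
            f"reinstated by {reviewer_id}" if reinstate else f"upheld by {reviewer_id}"
        )
        return reinstate


if __name__ == "__main__":
    decision = TakedownDecision("post-123", "self-harm policy", "moderator-42")
    decision.file_appeal()
    decision.resolve_appeal("reviewer-7", reinstate=True)
    print(decision.appeal_outcome)  # reinstated by reviewer-7
```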

Content moderation policies are always evolving. EFF will be watching these systems carefully, and users should too. Developing a fair and effective approach to content moderation is considerably harder than it looks; the history of the Internet is littered with well-intentioned content policing systems that went awry.