San Francisco—The Electronic Frontier Foundation (EFF) announced today a call for feedback and recommendations from organizations and individuals around the world to update the landmark 2018 Santa Clara Principles on Transparency and Accountability, which established due process standards for moderating users’ online speech.
Parties interested in strengthening and expanding the standards can submit feedback, comments, and recommendations starting today at https://santaclaraprinciples.org/cfp/.
The Santa Clara Principles, created by EFF and a small group of organizations and advocates, established a set of practices that social media platforms should undertake to provide transparency about why and how often they take down users’ posts, photos, videos, and other content. The principles call for companies to, at a minimum, disclose information about how many posts are removed, notify users about content removal, and give users meaningful opportunities to appeal takedowns and have content restored. They have been implemented by Reddit and endorsed by Apple, GitHub, Twitter, YouTube, and several other platforms.
But content moderation practices are changing in response to world events and misinformation campaigns. For example, tech companies are stepping up the use of artificial intelligence to sift through and remove content as they send home human moderators amid the COVID-19 pandemic.
“We recognize the need to take a global approach to protecting users from vague and unfair content moderation practices,” said EFF Director for International Freedom of Expression Jillian York. “The time is now to analyze and further develop the standards in tandem with global allies and the international community, especially those representing marginalized communities heavily impacted by commercial content moderation practices.”
EFF is seeking feedback on whether the principles’ three core areas of focus—numbers, notice, and appeals—should be expanded or revised. Other topics include whether the Santa Clara Principles should include standards for reporting the use of AI in content moderation and ranking, or be applied to the moderation of advertising. Are there specific risks to human rights that the principles could address, and are there regional, national, or cultural considerations that should be included in the standards?
The call for submissions will be open through June 30. Comments and recommendations will be reviewed by organizations including Global Partners Digital, Open Technology Institute, Brennan Center for Justice, Article 19, Center for Democracy and Technology, Ranking Digital Rights, ACLU of Northern California, Witness, and Access Now. The feedback will be used to assess if and how the principles should be amended.