As the EU is gearing up for a major reform of key Internet regulation, we are introducing the principles that will guide our policy work surrounding the Digital Services Act. In this post, we set out our vision for the rules that should govern how Internet platforms engage with user content. Our goal is to empower users to speak up and assert their rights when confronted with platforms’ broken content moderation systems.

New Rules for Online Platforms

In a much-anticipated move, the European Commission has recently announced its intention to update Europe’s framework for regulating online platforms for the first time in two decades. A new legal initiative, the Digital Services Act, will spell out new responsibilities and rules for online platforms and will likely shape the realities of millions of users across the EU for years to come.

This is an important chance to update the legal responsibilities of platforms and enshrine users’ rights vis-à-vis the powerful gatekeeper platforms that control much of our online environment. But there is also a risk that the Digital Services Act will follow in the footsteps of recent regulatory developments in Germany and France. The German NetzDG and the French Avia bill (which we helped bring down in court) show a worrying trend in the EU to force platforms to police users’ content without considering what matters most: giving a voice to users affected by content takedowns.

As Europe gears up for this major reform, EFF will work with EU institutions to fight for users’ rights, procedural safeguards, transparency, and interoperability, while preserving the elements that have made Europe’s Internet regulation a success so far: limited liability for online platforms for user-generated content, and a clear ban on filtering and monitoring obligations.

Fair and Just Notice and Action Procedures

Under the current backbone of Internet regulation in the EU, the e-Commerce Directive, platforms are not held legally responsible for what users post and share, on one condition: they must remove or disable access to illegal content or activity once they become aware of it. The rules also stress that when platforms choose to remove or disable access to information, they must observe the principle of freedom of expression. But the e-Commerce Directive is silent on how platforms should make sure that they respect and protect users’ freedom of expression when they moderate content.

The EU must adopt strong safeguards to protect users’ rights when their content is taken down or made inaccessible. EFF has long advocated for greater transparency and accountability in content moderation. Together with an international coalition of researchers and digital rights organizations, we have formulated the Santa Clara Principles to encourage companies to provide meaningful due process in their content moderation systems.

The Digital Services Act is a crucial opportunity to translate some of the ideas that guide the Santa Clara Principles into law. It is essential that the EU does not leave this important question up to Member States; users across the EU should be able to rely on a consistent and fair set of rules, across platforms and borders alike. 

Principle 1: Reporting Mechanisms 

Intermediaries should not be held liable for choosing not to remove content simply because they received a private notification from a user. Save for narrow exceptions, the EU should adopt the principle that intermediaries obtain actual knowledge of illegality only when they are presented with a court order.

However, the EU should adopt harmonized rules on reporting mechanisms that help users notify platforms about potentially illegal content and behaviour. Reporting potentially illegal content online sounds simple, but it can be daunting in practice. Platforms use different systems for reporting content or activities, and the categories they use to distinguish between types of content vary widely - and can be confusing and hard to grasp. Some platforms don’t provide meaningful notification options at all. Reporting potentially illegal content should be easy, and any follow-up action by the platform should be transparent to its users.

Principle 2: A Standard for Transparency and Justice in Notice and Action 

Content moderation is often opaque - companies generally do not give users enough information about what speech is permissible, or why certain pieces of content have been taken down. To make content moderation more transparent, platforms should notify users when their content has been removed (or their account has been suspended). Such a notice should identify the content removed, the specific rule that it was found to violate, and how the content was detected. It should also offer an easily accessible explanation of the process through which the user can appeal the decision.

Platforms should provide a user-friendly, visible, and swift appeals process to allow for the meaningful resolution of content moderation disputes. Appeals mechanisms must also be accessible, easy to use, and follow a clearly communicated timeline. They should allow users to present additional information, and must include human review. At the end of the appeals process, users should be notified and provided with a statement explaining the reasoning behind the decision, in a language they can understand. It is also crucial that users are informed that taking part in a platform’s dispute resolution process does not forfeit their right to seek justice before independent judicial authorities, such as a court in their home jurisdiction.
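To make this more concrete, here is a minimal sketch in TypeScript of the kind of record such a notice could amount to. Every field name and value is an assumption we have made for illustration; nothing here is drawn from the Digital Services Act, the Santa Clara Principles, or any platform’s actual API.

```typescript
// Hypothetical sketch of the information a takedown notice and its appeal
// path could carry. All names below are illustrative assumptions, not part
// of any law, standard, or real platform API.

interface TakedownNotice {
  contentId: string;            // identifies the removed content
  ruleViolated: string;         // the specific rule the content was found to violate
  detectionMethod: "user-report" | "trusted-flagger" | "automated" | "court-order";
  appealUrl: string;            // easily accessible explanation of how to appeal
  appealDeadline: string;       // clearly communicated timeline (ISO 8601 date)
  humanReviewIncluded: boolean; // appeals must include human review
  languageCode: string;         // language the user can understand, e.g. "de"
}

// Example of the notice a platform might send alongside a removal.
const exampleNotice: TakedownNotice = {
  contentId: "post-12345",
  ruleViolated: "Community Guidelines §4.2 (harassment)",
  detectionMethod: "user-report",
  appealUrl: "https://platform.example/appeals/post-12345",
  appealDeadline: "2020-12-31",
  humanReviewIncluded: true,
  languageCode: "de",
};

console.log(exampleNotice);
```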

Principle 3: Open the Black Box of Automated Decision-Making

Most major platforms use algorithms to automate part of their content moderation practices. Content moderation is a precarious and risky job, and many hope that automated content moderation tools could be the silver bullet that will solve content moderation’s many problems. Unfortunately, content moderation is messy, highly context-dependent and incredibly hard to do right, and automated moderation tools make many, many mistakes. These challenges have become especially apparent during the COVID-19 pandemic, as many platforms replaced human moderators with automated content moderation tools.

In light of automated content moderation’s fundamental flaws, platforms should provide as much transparency as possible about how they use algorithmic tools. If platforms use automated decision-making to restrict content, they should flag at which step of the process algorithmic tools were used, explain the logic behind the automated decisions taken, and explain how users can contest the decision.
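As a rough illustration, the hypothetical notice sketched above could be extended with metadata about the automated parts of the decision. Again, every field name here is an assumption of ours rather than part of any existing or proposed standard.

```typescript
// Hypothetical transparency metadata for decisions that involved automation.
// Field names are illustrative only and not taken from any real specification.

interface AutomatedDecisionInfo {
  // The steps of the moderation process at which algorithmic tools were used.
  automationUsedAt: ("detection" | "assessment" | "enforcement")[];
  // A plain-language explanation of the logic behind the automated decision.
  decisionLogic: string;
  // Where and how the user can contest the decision.
  contestUrl: string;
}

const exampleAutomationInfo: AutomatedDecisionInfo = {
  automationUsedAt: ["detection"],
  decisionLogic:
    "A hash-matching tool flagged the upload as a likely copy of previously removed material.",
  contestUrl: "https://platform.example/appeals/post-12345",
};

console.log(exampleAutomationInfo);
```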

Principle 4: Reinstatement of Wrongfully Removed Content

Content moderation systems make mistakes all the time - whether they are human or automated - and those mistakes can cause real harm. Efforts to moderate content deemed offensive or illegal consistently have disproportionate impacts on already marginalized groups. Content moderation often interferes with counterspeech, with attempts to reclaim specific terms, and with efforts to call out racism by sharing the racist statements themselves.

Because erroneous content moderation decisions are so common and have such negative effects, it is crucial that platforms reinstate users’ content when the removal decision cannot be justified under a sensible interpretation of the platform’s rules, or when the removal was simply an error. The Digital Services Act should promote quick and easy reinstatement of wrongfully removed content and wrongly disabled accounts.

Principle 5: Coordinated and Effective Regulatory Oversight

Good laws are crucial, but their enforcement is just as important. European legislators should therefore make sure that independent authorities can hold platforms accountable. Coordination between independent national authorities should be strengthened to enable EU-wide enforcement, and platforms should be incentivized to comply with their due diligence obligations through, for example, meaningful sanctions harmonized across the European Union.