As the European Union gears up for a major reform of the current backbone of its Internet regulation, with the e-Commerce Directive set to be replaced by the Digital Services Act (DSA), there are choices to be made. Rather than following in the footsteps of recent disastrous Internet legislation such as the Copyright Directive, the EU should focus on putting users back in control of their online experiences. Rather than handing yet more power to the large platforms that have monopolized the digital space, the EU should protect the public-interest Internet by focusing on users' right to self-determination and on measures ensuring transparency, anonymity, and interoperability.

We hope the EU will move in the right direction on Internet policy, even though several member states have championed Internet bills that seek to create a more restrictive European Internet. In a recent victory for free speech and the Internet, EFF helped to strike down core provisions of a French bill meant to curb hate speech, arguing that it would unconstitutionally catch legal speech in its net. Meanwhile, Germany's infamous NetzDG law, which requires companies to respond to reports of illegal speech within 24 hours, has been toughened: platforms must now not only delete suspected criminal content, but also report it to the federal police, along with information about the user.

Rules on Liability and Monitoring: An Opportunity to Make It Right

The upcoming reform of EU Internet legislation is a great opportunity to undo some of the damage done by bad Internet bills, and to acknowledge that it is a bad idea to turn platforms into the Internet police that scan and censor billions of users' social media posts, videos, or other forms of communication. It is also a good occasion to modernize some outdated rules and to make sure that the Internet remains an open platform for free expression.

In this post, we will explain in more detail our position against mandated monitoring and filtering of user content and why the EU should not hold platforms liable for content provided by users.

Principle 1: Online Intermediaries Should Not Be Held Liable for User Content

Intermediaries play a pivotal role in ensuring the availability of content and the development of the Internet. They are a driver of free speech, as they enable people to share content with audiences at an unprecedented scale. One of the reasons for the success of online intermediaries is the immunity they enjoy for third-party content. This is one of the fundamental principles that we believe must continue to underpin Internet regulation: platforms should not be held responsible for the ideas, images, videos, or speech that users post or share online. Without such a principle, platforms would be pushed to affirmatively monitor how users behave, to filter and check users' content, and to block and remove everything controversial, objectionable, or potentially illegal in order to avoid legal responsibility. By the same token, users would be less inclined to speak freely in the first place; they would avoid sharing artistic expression or publishing critical essays about political developments. Worse yet, without legal protection, service providers would be easy targets for corporations, governments, and bad actors seeking to silence users.

The EU should therefore make sure that online intermediaries continue to benefit from comprehensive liability exemptions and are not held liable for content provided by users. The current nebulous distinction between passive and active host providers, on which those exemptions hinge, should be abandoned: intermediaries should not be held liable for user content as long as they are not involved in co-creating or modifying that content in a way that substantially contributes to its illegality, and provided that they do not have actual knowledge of its illegal or infringing character. Any additional obligations must be proportionate and must not curtail users' free expression or stifle innovation.

Principle 2: Only Court Orders Should Trigger Liability

Intermediaries should not be held liable for choosing not to remove content simply because they received a private notification from a user. To protect freedom of speech, the EU should adopt the principle that intermediaries obtain actual knowledge of illegality only when they are presented with a court order. It should be up to independent judicial authorities, not platforms or disgruntled users, to decide the legality of users' content. Any exceptions to this principle should be limited to content that is manifestly unlawful; that is, content that is obviously illegal irrespective of its context. Notices about such content should be sufficiently precise and substantiated.

Principle 3: No Mandatory Monitoring or Filtering

The ban on general monitoring under the current e-Commerce Directive exists to protect users by guaranteeing their freedom of expression and their right to the protection of personal data, as enshrined in the EU Charter of Fundamental Rights. Abandoning this important principle would not only have disastrous consequences for users' freedom, but would also inevitably lead to shadow regulation: privatized enforcement by platforms without transparency, accountability, or other safeguards.

The Member States of the European Union should thus not be permitted to impose obligations on digital service providers to affirmatively monitor their platforms or networks for illegal content that users post, transmit, or store. Nor should platforms face a general obligation to actively seek facts or circumstances indicating illegal activity by users. The ban on general monitoring obligations should include a ban on mandated automated filter systems that evaluate the legality of third-party content or prevent the (re)upload of illegal content. Additionally, no liability should attach to an intermediary's failure to detect illegal content. Related privacy rights, such as the right not to be subjected to automated individual decision-making, must also be protected in this context.
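To make concrete what such mandated re-upload filters typically involve, here is a minimal, hypothetical sketch of a fingerprint-based blocker. The content and helper names are invented for illustration; no real platform's system is described. It shows why the approach is blunt by design: identical bytes are blocked regardless of context, while a trivially altered copy slips through.

```python
# Hypothetical sketch of a hash-based "re-upload" filter (illustrative only).
import hashlib

blocked_fingerprints = set()

def block(content: bytes) -> None:
    """Record the fingerprint of content found illegal in one specific case."""
    blocked_fingerprints.add(hashlib.sha256(content).hexdigest())

def upload_allowed(content: bytes) -> bool:
    """Reject any upload whose fingerprint matches previously blocked content."""
    return hashlib.sha256(content).hexdigest() not in blocked_fingerprints

original = b"<footage documenting a human rights abuse>"
block(original)  # one court, one case, one context

# A journalist re-uploading the same footage as evidence is blocked:
print(upload_allowed(original))         # False
# A bad actor who flips a single byte evades the filter entirely:
print(upload_allowed(original + b"."))  # True
```

The filter cannot weigh context, such as reporting, research, or counter-speech, and it fails against trivial modifications; fuzzier perceptual matching narrows the evasion gap only at the cost of more false positives.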

Principle 4: Limit the Scope of Takedown Orders

Recent cases have demonstrated the perils of worldwide content takedown orders. In Glawischnig-Piesczek v Facebook Ireland, the Court of Justice of the EU held that a court of a Member State can order a platform not only to take down defamatory content globally, but also to take down identical or "equivalent" material. This was a terrible outcome: the content in question may be deemed illegal in one state yet be clearly lawful in many others. And by referring to "automated technologies" to detect similar language, the court opened the gates to monitoring by filters, which are notoriously inaccurate and prone to overblocking legitimate material.
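To illustrate how blunt automated "equivalent content" detection is, consider a minimal, hypothetical sketch of a context-blind similarity filter. The banned phrase, threshold, and similarity measure here are all invented for illustration, not drawn from any real system.

```python
# Hypothetical sketch of a context-blind "equivalent content" filter.
from difflib import SequenceMatcher

BANNED = "politician x is corrupt"  # statement a court deemed defamatory

def is_equivalent(post: str, threshold: float = 0.8) -> bool:
    """Flag a post if any window of it closely resembles the banned text."""
    text = post.lower()
    n = len(BANNED)
    windows = (text[i:i + n] for i in range(max(1, len(text) - n + 1)))
    return any(SequenceMatcher(None, BANNED, w).ratio() >= threshold
               for w in windows)

posts = [
    "Politician X is corrupt!",                                          # the original claim
    "A court ruled the statement 'politician X is corrupt' defamatory",  # news report
    "It is false to say that politician X is corrupt",                   # counter-speech
]
for post in posts:
    print(is_equivalent(post), "->", post)
# All three are flagged: the filter cannot tell the defamatory statement
# apart from lawful reporting about it or rebuttal of it.
```

Raising the threshold misses paraphrases; lowering it sweeps in ever more lawful speech. Either way, the machine cannot judge context the way a court can.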

The reform of EU Internet legislation is an opportunity to acknowledge that the Internet is global and that takedown orders of global reach are deeply unjust and impair users' freedom. New rules should ensure that court orders, particularly injunctions, are not used to superimpose the laws of one country on every other state in the world. Takedown orders should be limited to the content at issue, and their geographic scope should be governed by the principles of necessity and proportionality. Otherwise, we may see one country's government dictating what residents of other countries can say, see, or share online. That would set off a "race to the bottom" toward an ever more restrictive global Internet.

Conclusion: Protect the Internet

Facing the most significant reform of Internet law undertaken in two decades, the EU should choose to protect the Internet rather than coerce online platforms into policing their users. EFF intends to fight for users' rights, transparency, anonymity, and limited liability for online platforms every step of the way.