This blog post was co-written by Dr. Aleksandra Kuczerawy (Senior Fellow and Researcher at KU Leuven) and inspired by her publication at Verfassungsblog.

Suspension of Trump’s Social Media Accounts: Controversial, but not unprecedented

The suspension of the social media accounts of former U.S. President Donald Trump by Twitter, Facebook, Instagram, Snapchat, and others sparked a lot of controversy not only in the U.S., but also in Europe. German Chancellor Angela Merkel considered the move, which is not unprecedented, “problematic.” The EU Commissioner for the internal market, Thierry Breton, found it “perplexing” that Twitter’s CEO Jack Dorsey could simply pull the plug on POTUS’s loudspeaker “without any checks and balances.” Some went a step further and proposed new rules to prevent platforms from removing content that national laws deem legitimate: a recent proposal by the Polish government would ban social media companies from deleting content unless it is illegal under Polish law. As a result, hate speech that is not illegal—for example, insults directed at LGBTQ+ groups—could no longer be removed by social media platforms on the basis of their community standards.

All of these comments rest on the argument that, without government intervention, freedom of expression would be at risk. But does being locked out of certain social media channels actually constitute an interference with, or even a violation of, free expression rights in Europe?

Freedom of Expression: Negative and Positive Obligations

The right to freedom of expression is enshrined in the European Convention on Human Rights: everyone has the right to freedom of expression (Article 10(1) ECHR). Interestingly, freedom of expression under Article 10 ECHR is a compound freedom. This means that Article 10 includes the right to hold and express opinions, to impart information and ideas, and to seek and receive information, even if these are not all explicitly listed in the provision. Yet this right is not absolute. Restrictions may take the form of ‘formalities, conditions, restrictions or penalties’ (Article 10(2)), and they are permissible if they comply with three conditions: they must be (1) prescribed by law, (2) introduced for the protection of one of the listed legitimate aims, and (3) necessary in a democratic society. Legitimate aims that could justify an interference include national security, territorial integrity or public safety, and the prevention of disorder or crime.

As in the U.S., the right to freedom of expression in Europe is a negative right; that is to say, states cannot place undue restrictions on expression. As such, it prevents only government restrictions on speech, not action by private companies. In Europe, however, the right also entails a positive obligation: states are also required to protect the right from interference by others, including private companies or individuals. Extending the scope of the ECHR to private relationships between individuals is referred to as the “horizontal effect.” According to the interpretation of the European Court of Human Rights (ECtHR), the horizontal effect is indirect, meaning that individuals can enforce human rights provisions against other individuals only indirectly, by relying on the positive obligations of the State. If the State fails to protect the right from interference by others, the ECtHR may attribute this interference to the State. The ECtHR has specifically found such a positive obligation in relation to the right to freedom of expression (e.g. Dink v. Turkey). The duty to protect the right to freedom of expression involves an obligation for governments to promote this right and to provide an environment in which it can be effectively exercised without being unduly curtailed. Examples include a state’s failure to implement measures protecting journalists against unlawful violent attacks (Özgür Gündem v. Turkey), or a failure to enact legislation that resulted in a commercial television company’s refusal to broadcast (Verein gegen Tierfabriken Schweiz v. Switzerland).

No Must-Carry, No Freedom of Forum

The doctrine of positive obligations and the horizontal effect of the ECHR could support the argument that rules may be necessary to prevent arbitrary decisions by platforms to remove content (or ban users). 

However, it does not support the argument that platforms have an obligation to host all the (legal) content of their users. The ECtHR has clarified that Article 10 ECHR does not provide a “freedom of forum” for the exercise of the right to freedom of expression. This means that Article 10 ECHR does not guarantee any right to have one’s content broadcast on any particular private forum. Private platforms, such as social media companies like Twitter or Facebook, therefore cannot be forced to carry third-party content, even if that content is not actually illegal. This makes sense: it is hard to imagine that a platform for dog owners would be forced to allow cat pictures (despite what internet cat overlords might think about that). Imposing such a positive obligation on platforms would interfere with the freedom to conduct a business under the EU Charter of Fundamental Rights and, potentially, with the right to property under the ECHR (Article 1 of Protocol 1 to the ECHR).

Viable Alternatives

In a case concerning a prohibition on distributing leaflets in a private shopping center (Appleby and Others v. the UK), the Court did not consider the lack of State protection a failure to comply with the positive obligation under Article 10 ECHR. This was because, in the Court’s view, the lack of protection did not destroy the “essence” of the right to freedom of expression. However, the Court did not entirely exclude that “a positive obligation could arise for the State to protect the enjoyment of the Convention rights by regulating property rights.” The Court examined such a conflict in the Swedish case Khurshid Mustafa and Tarzibachi, which involved the termination of a tenancy agreement because of the tenants’ refusal to dismantle a satellite dish installed to receive television programs from their native country. To decide which right takes precedence in the particular circumstances, the property right of the landlord or the tenants’ right to receive information, the Court applied a test of “viable alternatives.” This test essentially asks whether the parties were able to exercise their right to freedom of expression through alternative means. While such alternative expression opportunities existed in Appleby, in Tarzibachi the existence of information alternatives functionally equivalent to a satellite dish could not be demonstrated. Noting that the applicants’ right to freedom of information was not sufficiently considered in the national proceedings, the Court concluded that Sweden had failed in its positive obligation to protect that right.

What does this mean for Trump’s ban on Twitter and Facebook? Clearly, as the then-President of the U.S., Trump had ample opportunities to communicate his message to the world, whether through a broadcaster, an official press conference, or other social media platforms. While those alternatives might not be equivalent to the most popular social media platforms in terms of impact or reach, it can hardly be argued that the essence of the right to freedom of expression was destroyed. For an ex-President, some expression opportunities might be more limited, but Trump’s options still put him at an advantage compared with an average user deplatformed by Twitter or Facebook. Such bans do happen, whether for clear violations of the Terms and Conditions or for the most absurd reasons, but they rarely generate similar levels of controversy.

Hate Speech and Incitement to Violence

Article 10 ECHR protects expressions that offend, shock, or disturb. The scope for restrictions on political speech is narrow, and such restrictions require strict scrutiny. However, hate speech and incitement to violence do not constitute expression worthy of protection (see here). The ECHR does not provide a specific definition of hate speech; the ECtHR instead prefers a case-by-case approach. Moreover, per Article 17 ECHR, the Convention does not protect activity aimed at the destruction of any of the rights and freedoms it contains. This provision has been interpreted to exclude from protection speech that endangers the free operation of democratic institutions or attempts to destroy the stability and effectiveness of a democratic system. It goes beyond the scope of this blog post to analyze whether Trump’s tweets and posts actually fall within this category of expression.

National and EU Legislation Require a Proactive Stance

The critical statements by EU politicians following the decision to ban Trump’s accounts are not exactly consistent with the general trend in Europe in recent years. For some time now, European politicians and the EU have been trying to convince online platforms to “do more” to police the content of their users. National laws such as the German NetzDG, the Austrian KoPlG, and the unconstitutional French Avia Bill all require more effective moderation of online spaces; that is, more and faster removals. Under the threat of high fines, these laws require platforms to limit the dissemination of illegal content as well as harmful content, such as disinformation. In an attempt to catch up with national legislation, the EU has been steadily introducing mechanisms that encourage online platforms to (more or less) voluntarily moderate content, for example the 2016 Code of Conduct on hate speech, the 2018 Code of Practice on Disinformation, the update to the AVMS Directive, and the proposed Terrorist Content Regulation.

Can the EU Digital Services Act Help?

One would think that Twitter’s proactive approach would, in light of these initiatives, be appreciated. The somewhat confusing political reaction has led to questions about whether the recently proposed Digital Services Act (DSA) would address the problem of powerful platforms making arbitrary decisions about the speech they allow online.

The DSA is the most significant reform of Europe’s internet legislation, the e-Commerce Directive, that the EU has undertaken in twenty years. It aims to rebalance the responsibilities of users, platforms, and public authorities according to European values. If done right, the Digital Services Act could offer solutions to complex issues like transparency failures, privatized content moderation, and gatekeeper-dominated markets. And the EU Commission’s draft Proposal got several things right: mandatory general monitoring of users is not a policy option, and liability for speech still rests with the speaker, not with the platforms that host what users post or share online. At least in principle. The introduction of obligations tailored to the type and size of online platforms, including the very large ones, seems to be the right approach. It is also in line with the proposal for a Digital Markets Act (DMA), which sets out a new standard for large platforms that act as gatekeepers, in an attempt to create a fairer and more competitive market for online platforms in the EU.

It is noteworthy that the DSA includes mechanisms to encourage online platforms to conduct voluntary monitoring and moderation of the content they host. Article 6, in particular, introduces an EU version of Section 230’s Good Samaritan principle: providers of online intermediary services should not face liability solely because they carry out voluntary own-initiative investigations or other activities aimed at detecting, identifying and removing, or disabling access to, illegal content. There is a risk that such encouragement could lead to more private censorship and over-removal of content: as explained in the DSA’s preamble, such voluntary actions can make platforms aware of illegal activity and thus trigger liability consequences (in the EU, knowledge of illegality deprives platforms of their liability immunity).

At the same time, the DSA clearly states its goal of ensuring more protection for fundamental rights online. Recital 22, in particular, explains that the “removal or disabling of access should be undertaken in the observance of the principle of freedom of expression.” How could the DSA ensure more protection for the right to freedom of expression, and what would it mean for banned accounts? Would it privilege certain actors?

Regulation of Process, Not of Speech

The DSA’s contribution to more effective protection of the freedom of expression comes in the form of procedural safeguards. These strengthen due process, clarify notice-and-takedown procedures, improve the transparency of decision-making, and ensure redress mechanisms for removal or blocking decisions. The DSA will not prohibit Twitter from introducing its own internal rules, but it will require that those rules be clear and unambiguous and applied in a proportionate manner (Article 12). Any blocked user would also have to be informed of the reasons for the blocking and of the possibilities to appeal the decision, e.g. through internal complaint-handling mechanisms, out-of-court dispute settlement, and judicial redress (Article 15).

The main goal of the DSA is thus to regulate the process, not the speech. Adding these safeguards could have an overall positive effect on the enjoyment of the right to freedom of expression. This positive effect would be achieved without introducing any must-carry rules for certain types of content (e.g. speech by heads of state) that could potentially interfere with other rights and interests at stake. The safeguards would not necessarily help Donald Trump—platforms will still be able to delete or block content on the basis of their own internal rules or of a notice. But the new rules would give him access to procedural remedies.

The DSA sets out that online platforms must handle complaints submitted through their internal complaint-handling systems in a timely, diligent, and objective manner. It also acknowledges that platforms make mistakes when deciding whether a user’s conduct or a piece of information is illegal or against their terms of service: following a suggestion by EFF, users who face content removal or account suspension will be given the option to demonstrate that the platform’s decision was unwarranted, in which case the online platform must reverse its decision and reinstate the content or account (Article 17(3)).

A Public-Private Censorship Model?

There are a number of problematic issues in the DSA that should be addressed by the EU legislator. For example, the provision on the notice-and-action mechanism (Article 14) states that properly substantiated notices automatically give rise to actual knowledge of the content in question. As hosting providers only benefit from the liability exemption for third-party content when they expeditiously remove illegal content they know of, platforms will have no choice but to follow up with blocking or removal in order to escape the liability threat. Even though the DSA requires notices to set out “the reasons why the information in question is illegal content” (Article 14(3)), this does not mean that the stated reason will in fact always be correct. Mistakes, even good-faith ones, can also happen on the side of notifying users. As a result, attaching actual knowledge to every compliant notice may become problematic. Instead of safeguarding freedom of expression, it could lead to misuse and to the overblocking of highly contextual content and, if not well balanced, could turn the Digital Services Act into a censorship machine.

There are also open questions about how platforms should assess what is proportionate when enforcing their own terms of service, how much pressure there will be from public authorities to remove content, and whether that clashes with the freedom to receive and impart information and ideas without interference by public authority. 

For example, Article 12 provides that providers of intermediary services have to include information about content restrictions in their terms and conditions and are required to act in a “diligent, objective and proportionate manner” when enforcing them. Would platforms conduct actual proportionality tests, or simply invoke the provision to justify any decision they take? Moreover, how does the requirement of proportionate enforcement interact with the mandatory measures against both the distribution of manifestly illegal content and the submission of manifestly unfounded notices? Under Article 20, online platforms are required to issue warnings to users and, in such cases, to impose time-limited suspensions.

It is the right approach to subject the freedom of contract of platform service providers to compliance with certain minimum procedural standards. However, it is wrong to push (large) platforms into an active position and make them quasi-law enforcers under the threat of liability for third-party content or high fines. If platforms have to suspend accounts (“shall suspend,” Article 20), have to effectively mitigate risks (“shall put in place mitigation measures,” Article 27; notably, Article 26 lists negative effects on freedom of expression among the systemic risks to be assessed), and have to inform law enforcement authorities about certain types of content (“shall promptly inform,” Article 21), there is a risk that there will not be much freedom left on some platforms to “hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.” There are reasons to doubt that the Commission’s preference for a co-regulatory approach, in the form of EU Commission “guidelines” on how to mitigate systemic risks on online platforms (Article 27(3)), will give platforms enough orientation on when to act and when not to act.

It will now be up to the European Parliament and the Council to strike a fair balance between the rights anchored in the EU Charter of Fundamental Rights, including freedom of expression.