This is the third installment in a four-part blog series surveying global intermediary liability laws.  You can read additional posts here: 

The following examples are primarily based on research completed in the Summer of 2021. Updates and modifications to legislation mentioned in this post which occurred after that time may not always be reflected below.

So far, we’ve looked at the history of platform regulation and how the elements of intermediary regulation can be tweaked by policymakers. Let’s now take a closer look at some examples of recently adopted laws and upcoming regulatory initiatives from around the world that reflect a changing tide in regulation. These examples are not intended to provide a comprehensive overview of intermediary regulation, but offer insights into how some core principles of intermediary regulations are being reshaped by different policy agendas. For reasons of brevity and context, we mainly focus on horizontal liability regimes rather than on sector-specific rules such as those in the realm of copyright.

Recent Developments from around the World

Australia 

In response to the 2019 attack on two mosques in Christchurch, NZ, Australia passed amendments to its Criminal Code in April of the same year. The law designates the sharing of “abhorrent violent material” a criminal offense. Under the new law, internet intermediaries face penalties of up to 10% of their annual turnover for each offense if they are aware that their service can be used for accessing or sharing such material, do not remove such content “expeditiously,” or do not make the details of such content known to law enforcement agencies. The law defines “abhorrent violent material” as material depicting acts of terrorism, murder, attempted murder, torture, rape, or kidnapping. The new legislation has created a comparatively strict regulatory regime which encourages companies to actively monitor user-generated content.

Austria

Following a trend arguably started by Germany’s NetzDG and France’s attempted Avia bill, Austria proposed a similar law for the “protection of users on communications platforms” in September 2020. The law, which went into effect in January 2021, created obligations for all online platforms that either have an annual turnover of 500,000 Euros or more or have at least 100,000 users to remove, once notified of its existence (including by users), “content whose illegality is already evident to a legal layperson” within 24 hours and other types of illegal content within seven days. Failure to remove such content can result in heavy fines of up to one million Euros or, in severe cases, up to 10 million Euros. Monetary penalties may also be imposed for failure to comply with additional obligations concerning reporting systems, points of contact, and transparency reports. Both the notifier and the poster can seek redress from the responsible media regulator (KommAustria). Platforms must release transparency reports annually, or quarterly in the case of platforms with more than one million users. Failure to comply with the provisions of the law may also lead to the seizure of advertisement revenues and other profits from Austrian business partners.

Brazil 

In May 2020, the Brazilian Senate introduced a new draft law intended to combat disinformation online, following concerns regarding the role that "fake news" played in the 2018 general election that brought President Jair Bolsonaro to power. The bill contains transparency obligations, both for ads and for content moderation practices, as well as due process requirements for restricting content and accounts. EFF has worked alongside digital rights groups and activists to counter the traceability mandate on private messaging services, which was dropped in the current text. Yet dangerous provisions remain, such as expanded data retention obligations intended to enable identification of the user behind an IP address. The bill does not directly alter the safe harbor established in Brazil's Marco Civil da Internet law, under which, as a general rule (with exceptions for copyright and the unauthorized disclosure of private images containing nudity and/or sexual activities), platforms are only held liable for third-party content when they fail to comply with a judicial decision ordering the removal of infringing content. According to the bill, however, platforms are subject to joint responsibility for harms caused by paid content if they fail to confirm who is responsible for the account paying for the ad. The bill applies to social networks, search engines, and instant messaging applications with economic ends and over ten million users registered in Brazil. Penalties for non-compliance worryingly include the temporary suspension or prohibition of platforms' activities. These penalties must be set by a court order, adopted by the vote of an absolute majority of the court's members.

Canada

The proposed Canadian framework on online harms legislation requires online platforms to implement measures to identify harmful content and to respond to content flagged by any user within 24 hours. The category of “harmful content” explicitly includes speech that is legal but potentially upsetting or hurtful. Although the government does not envisage mandatory upload filters, in practice, compliance with the very short deadline to respond to removal requests will leave platforms with no choice but to resort to filtering user content. Additionally, a Digital Safety Commissioner would ensure platforms live up to their obligations under the proposal, and platforms that fail to comply with the Commissioner's regulations would face penalties of up to three percent of the provider's gross revenues or up to 10 million dollars, whichever is higher. On top of this, the Commissioner would be empowered to conduct inspections of online communication service providers (OCSPs) at any time “further to complaints, evidence of non-compliance, or at the [Commissioner’s] own discretion, for the OCSP’s compliance with the Act, regulations, decisions and orders related to a regulated OCS.”

European Union 

The EU’s new internet bill, the Digital Services Act (DSA), seeks to articulate clear responsibilities for online platforms, with strong enforcement mechanisms, while also protecting users’ fundamental rights. The DSA encourages Good Samaritan content moderation and sets out type- and size-based due diligence obligations, which include obligations relating to transparency in content moderation practices, algorithmic curation, and notice and action procedures. By focusing on processes rather than speech, the draft DSA largely preserves the key pillars of the e-Commerce Directive, such as the ban on mandated general monitoring of what users post and share online and the extremely important principle that, as a general rule, liability for speech should rest with the speaker and not with the platforms that host what users post or share online.

However, the initial proposal introduced a “notice-equals-knowledge” approach under which properly substantiated notices automatically give rise to actual knowledge of the notified content. Because host providers only benefit from limited liability for third-party content when they expeditiously remove content they “know” to be illegal, platforms would have no choice but to block notified content to escape the threat of liability. The position of the Council of the EU addresses this risk of overblocking by clarifying that only notices on the basis of which a diligent provider of hosting services can identify the illegality of the content should trigger the consequences of the liability provisions. The EU Parliament initially showed sympathy for short and strict removal obligations and for holding any “active platform” potentially liable for the communications of its users. However, the key committee on internal market affairs (IMCO) approved rules that would preserve the traditional liability exemptions for online intermediaries and abstained from introducing short deadlines for content removals. This position was confirmed by a plenary vote in January 2022, in which the full house of the EU Parliament rejected tight deadlines to remove potentially illegal content and made sure that platforms don’t risk liability merely for reviewing content. The final compromise deal largely kept the principles of the e-Commerce Directive intact: negotiators adopted the Council’s approach and agreed that what matters is whether a diligent provider receives enough information to identify the illegality of content without a “detailed legal examination.”

France

In June 2020, France passed the controversial Avia bill into law. The new law mandated that social media intermediaries remove obviously illegal content within 24 hours, and content relating to terrorism and child abuse within one hour. While the law set out to combat hate speech, its expanded scope covered a range of offenses, from content condoning acts of terrorism to war crimes and crimes against humanity, amongst others. When French Senators filed a challenge before France’s Constitutional Council, EFF teamed up with French allies to submit an amicus brief. We argued that the Avia law infringed upon EU law on several grounds. EFF’s intervention proved ultimately successful, as the Constitutional Council struck down the law’s requirements to remove infringing content within 24 hours, recognizing that they would encourage platforms to remove perfectly legal speech.

In July 2021, the National Assembly adopted new rules aimed at regulating platforms and transposing in advance some provisions of the proposed EU Digital Services Act. The “Principles of the Republic bill”, the end-product of a controversial proposal meant to target political Islam, contains provisions relating to the fight against hate speech and illegal content online. It sets out obligations for platforms related to transparency, content moderation, and cooperation with authorities. Non-compliance can result in heavy sanctions imposed by ARCOM, the regulatory authority for audiovisual and digital communication.

Germany 

In October 2017, Germany’s Netzwerkdurchsetzungsgesetz (Network Enforcement Act, or NetzDG for short) entered into force. Similar to France’s Avia bill, which it preceded, the law aims to tackle “hate speech” and illegal content on social networks. The law introduces a range of due diligence obligations for social media platforms with more than 2 million registered users. Such platforms are obliged to introduce a notice-and-action system that offers users “easy to use” mechanisms to flag potentially illegal content. Platforms must also offer redress options in cases where users believe their content was wrongfully removed, including a voluntary out-of-court dispute resolution system. Finally, the law introduced extensive transparency reporting obligations about content removals.

Most notably, NetzDG obliges platforms to remove or disable access to manifestly illegal content within 24 hours of having been notified of the content. Where content is not manifestly illegal, social media providers must remove the post in question within seven days. Non-compliance can lead to significant fines. 

During the hasty legislative debate that accompanied the original NetzDG, the law was heavily criticized in Germany and abroad, with most critics arguing that the strict turnaround times for platforms to remove content do not allow for balanced legal analysis and may therefore lead to wrongful takedowns. While the social media companies reported fewer notifications of content under NetzDG than expected (although their methodologies for assessing and counting notifications differ), there is evidence that the law does indeed lead to over-blocking. Moreover, there are significant questions regarding the law’s compatibility (or lack thereof) with European law. There are also concerns that content reporting is particularly difficult for non-German speakers, in spite of a legal requirement under NetzDG that complaint procedures be “easy to use.” Beyond its impact across the EU, NetzDG has inspired ‘copycat’ laws in jurisdictions around the globe. A recent study reports that at least thirteen countries, including Venezuela, Australia, Russia, India, Kenya, and Malaysia, have proposed laws similar to NetzDG since the law entered into force.

Since its adoption, NetzDG has been amended twice. In June 2020, the German parliament adopted a package of measures aimed at tackling "hate crime" and hate speech. Pursuant to the new legislation, platforms will have to forward content that could be illegal under German criminal law to the German Federal Criminal Police Office, where a new office and database will be established. Users will not be asked for consent before their content is forwarded, nor are there notification deadlines. The package also includes changes to Germany's criminal code. Following significant concerns regarding the proposed rules’ constitutionality, the law only entered into force in April 2021, with the new obligations applicable from February 2022. In May 2021, the German Parliament also passed a law officially amending NetzDG, introducing, inter alia, better redress options for users and obligations for platforms to provide researchers with access to data. That law entered into force in June 2021.

India

In February 2021, the Indian government introduced the Guidelines for Intermediaries and Digital Media Ethics Code Rules (subordinate legislation under the Information Technology Act, 2000), which replaced the previous Intermediary Rules of 2011. The 2021 Rules classify intermediaries into two categories: social media intermediaries and significant social media intermediaries (those with 5 million or more registered users in India). All social media intermediaries are subject to an expanded set of due diligence obligations, and failure to comply entails a loss of safe harbor and potential criminal action. Significant social media intermediaries are subject to additional due diligence obligations, including proactive monitoring duties, expanded local personnel requirements, and a traceability requirement for encrypted private messaging platforms. Recent news reports in Indian media describe Twitter as having ‘lost intermediary status’ on account of its failure to comply with obligations under the new Rules. According to officials, this opens Twitter up to potential criminal action for user content.

Indonesia 

On November 24, 2020, Indonesia issued Ministerial Regulation 5 (MR5), which seeks to tighten the government’s grip over digital content and users’ data held by “electronic systems operators” (ESOs) as long as their services are accessible from within Indonesia. ESOs include social media and other content-sharing platforms, digital marketplaces, search engines, financial services, data processing and communication service providers, video call services, and online games. MR5, which was made retroactive to November 2020, compels ESOs to register with the Indonesian Ministry of Communication and Information Technology (Kominfo) and obtain an ID certificate in order to operate in Indonesia. Those that failed to register by May 24, 2021 were to be blocked in Indonesia; the registration period has since been extended by six months. ESOs are forced to take down prohibited content—which includes vague categories such as anything that creates a "public disturbance". MR5 grants Kominfo unfettered authority to define these terms. ESOs must also ensure that their platforms do not contain or facilitate prohibited content—in effect, a general obligation to monitor content. Sanctions for failure to comply with MR5 range from temporary or full blocking of the service to revocation of registration.

New Zealand

Also in response to the Christchurch mosque shootings, New Zealand introduced a bill to amend its Films, Videos, and Publications Classification Act of 1993. The bill makes the livestreaming of violent content a criminal offense for the individuals streaming such content (not for the intermediaries hosting it), but also provides that an Inspector of Publications may issue takedown notices for objectionable content. Takedown notices would be addressed to intermediaries that did not remove the content in question voluntarily, and failure to remove content pursuant to a takedown order could lead to fines for the intermediary in question. The bill additionally enables the Chief Censor to make quick, time-limited interim classification assessments of any publication in situations involving the sudden viral distribution of content that may be objectionable or is likely to cause harm. The bill, which allows judicial authorities in New Zealand to issue fines to non-compliant platforms, was adopted in November 2021.

Pakistan

In 2020, Pakistan introduced the Citizen Protection (Against Online Harm) Rules, which imposed obligations on social media companies. The original version of the Rules vested broad blocking and content removal powers in a "National Coordinator" and mandated that intermediaries take down content within 24 hours of being notified about the content in question. That takedown window could be reduced to six hours in case of an “emergency”, a term left undefined. This iteration of the Rules was suspended for reconsideration after outcry from civil society groups, which were dismayed over the lack of a consultative process in developing the first version of the Rules and argued that they contradicted rights guaranteed by the constitution. A version of the Rules released later that year eliminated the office of the National Coordinator, vesting its powers in the Pakistan Telecommunications Authority (PTA) instead, but maintained the short window for intermediaries to take down reported content. The Rules also obliged intermediaries to deploy “proactive mechanisms” to prevent the livestreaming of material considered to be unlawful—in particular “content related to terrorism, extremism, hate speech, defamation, fake news, incitement to violence and national security.” In addition, a provision requires intermediaries to share decrypted user data with authorities without judicial oversight. In case of non-compliance, intermediaries face high fines, and entire platforms can be blocked. The Pakistani government again revised the Rules in November 2021, with the most recent iteration giving platforms 48 hours to comply with a removal or blocking order from the PTA, and only 12 hours for emergency requests.

Poland

In January 2021, the Polish Ministry of Justice proposed an “anti-censorship” bill to stop social media platforms from deleting content posted by Polish users, or from blocking those users, when their content does not break any Polish laws. The draft law provides for content reinstatement through an appeals process overseen by a Freedom of Speech Council. The Council, likely to be politically influenced, would be responsible for protecting speech on social media and could dole out heavy fines for non-compliance. Platforms would have to respond to user complaints about blocking or content takedowns within 48 hours. Users could appeal platform decisions to the Freedom of Speech Council, which could order reinstatement. Orders of reinstatement would have to be complied with within 24 hours; failure to do so could result in high fines. In light of the ongoing reform of the e-Commerce Directive at the EU level, the responsible ministry has also directed efforts towards influencing the shape of the proposed Digital Services Act.

Russia

In December 2020, Russia’s Federal Law On Information, Information Technologies and Information Protection was amended to require social media platforms to proactively find and remove prohibited content, while Article 13 of Russia’s Code of Administrative Offenses establishes fines on internet services for non-compliance with content removal laws. The amended law covers all internet resources with more than 500,000 daily users that allow users to maintain personal pages through which information can be disseminated. The list of prohibited materials includes: pornographic images of minors; information that encourages children to commit life-threatening and illegal activities; information on the manufacture and use of drugs; information about methods of committing suicide and calls to commit it; advertising of the remote sale of alcohol and of online casinos; information expressing clear disrespect for society, the state, or the Constitution of the Russian Federation; as well as any content deemed to contain calls for riots, extremism, or participation in uncoordinated public events. In March 2021, Roskomnadzor, the Russian agency responsible for ensuring compliance with media and telecommunications laws, used the law to justify throttling Twitter on “100 percent of mobile services and 50 percent of desktop services” because the social media company did not delete content the authorities deemed unlawful.

Turkey

In yet another example of NetzDG’s spillover effects, Turkey adopted a law in August 2020 that mirrors the structure of NetzDG but goes significantly further in the direction of censorship. The law mandates that social media platforms with more than one million daily users appoint a local representative in Turkey, a move activists are concerned will enable the government to conduct even more censorship and surveillance. Failure to appoint a local representative could result in advertisement bans, steep penalties, and—most troublingly—bandwidth reductions. The legislation introduces new powers for courts to order internet providers to throttle social media platforms’ bandwidth by up to 90%, practically blocking access to those sites. Local representatives will be tasked with responding to government requests to block or take down content. The law foresees that companies would be required to remove content that allegedly violates “personal rights” and the “privacy of personal life” within 48 hours of receiving a court order or face heavy fines. It also includes provisions that would require social media platforms to store users’ data locally, prompting fears that providers would be obliged to transmit those data to the authorities, which experts expect to aggravate the already rampant self-censorship of Turkish social media users.

United Kingdom

In May 2021, the UK government published a draft of its Online Safety Bill, which attempts to tackle illegal and otherwise harmful content online by placing a duty of care on online platforms to protect their users from such content. The new Online Safety Bill also builds upon the government’s earlier proposals to establish a duty of care for online providers laid out in its April 2019 White Paper and its December 2020 response to a consultation. 

The bill is broad in scope, covering not only “user-to-user services” (companies that enable users to generate, upload, and share content with other users), but also search engine providers. The new statutory duty of care will be overseen by the UK Office of Communications (OFCOM), which has the power to issue high fines and block access to sites. Among the core issues that will determine the bill’s impact on freedom of speech is the concept of “harmful content.” The draft bill opts for a broad and vague notion of harmful content: speech that could reasonably, from the perspective of the provider, have a “significant adverse physical or psychological impact” on users. The great subjectivity involved in complying with the duty of care will inevitably lead to the overbroad removal of speech and inconsistent content moderation.

The bill’s “illegal content duties” comprise obligations for platform operators to minimize the presence of so-called “priority illegal content,” to be defined through future regulation, and a requirement to take down any illegal content upon becoming aware of it. The draft bill thus departs from the EU’s e-Commerce Directive (and the proposed Digital Services Act), which abstained from imposing affirmative removal obligations on platforms. When it comes to determining what constitutes illegal content, platforms are put first in line as arbiters of speech: content is deemed illegal if the service provider has “reasonable grounds” to believe that the content in question constitutes a relevant criminal offense.

United States

Under US law, the extent to which online intermediaries may bear liability deriving from the publication of user speech is essentially governed by a threefold approach. First, intermediaries bear full liability for violations of federal criminal law by their users. Second, claims of infringement of federal intellectual property laws are subject to the notice-and-takedown scheme of the Digital Millennium Copyright Act, particularly 17 U.S.C. § 512(c). Pursuant to this scheme, an intermediary that receives notice of infringing material must “act expeditiously to remove, or disable access to, the material.” Third, for substantially all other liability deriving from the publication of user speech, intermediaries are immunized pursuant to “Section 230,” the law most often referred to in comparative law analyses. Section 230 immunizes all types of traditional publication activity, including selecting what user speech to include or exclude, editing (as long as the editing itself does not make the speech actionable), and targeting content at the audiences most likely to be receptive to it. Section 230 also provides statutory immunity for any liability that arises from refusing to provide an account or publish user content. The First Amendment to the U.S. Constitution also protects such editorial decisions.

There has been much discussion in the U.S. Congress over the past several years about modifying Section 230. These efforts broadly take one of the following approaches: (1) carving out additional exceptions for specific legal claims—one previous effort to do so, FOSTA/SESTA, which removed immunity for intermediaries against certain criminal and civil claims related to hosting online content that reflects sex trafficking or prostitution, is subject to an ongoing legal challenge; (2) replacing the immunity with a duty of care that the intermediary would have to meet to be shielded from liability; or (3) conditioning the immunity on the adoption of specific policies and procedures.

In the next, and final, installment of this blog series we’ll outline our conclusions and propose recommendations for platform liability regulations moving forward.  The other blogs in this series can be found here:
