*This post is one section of a more extensive piece on Brazil's platform accountability and regulation debate. Click here to read the entire content.*

PL 2630 establishes special obligations that apply when there is an imminent risk of harm or negligence by an application provider (Articles 12-15). In assessing this section of the bill, it's crucial to recall the 2015 Joint Declaration on crisis situations. Among other recommendations, it highlights that "[s]tates should not respond to crisis situations by adopting additional restrictions on freedom of expression, except as strictly justified by the situation and international human rights law. Administrative measures restricting freedom of expression should be imposed only where they can be justified pursuant to the three-part test for such restrictions."

While this section of the bill purports to act as a legal basis for restricting fundamental freedoms during crisis situations, its current language fails to provide the precision and clarity, as well as the proper checks and balances, needed to substantiate an intervention that is necessary and proportionate.

According to PL 2630, the decision implementing the security protocol will specify, among other things, the impacted providers, the protocol's deadline (up to 30 days, which can be extended), and a list of relevant issues or requirements that providers must address through effective and proportionate mitigation measures during the protocol's period. While the protocol is in force, and for the types of content specified in the implementation decision, the impacted providers are subject to joint and several liability for user-generated content as long as providers have prior knowledge of such content. A simple user notification, using the notice mechanism Article 16 requires internet applications to provide, is enough to constitute such prior knowledge. The bill thus creates an exceptional notice-and-takedown mechanism that applies while the protocol is in effect and relates to certain types of content (as per the protocol's "thematic delimitation").

Notice-and-takedown mechanisms raise many concerns. They can fuel the weaponization of notice systems to censor critical reporting, political criticism, and voices from marginalized groups. They too often lead to over-removals. The Office of the IACHR Special Rapporteur for Freedom of Expression has noted that they create incentives for private censorship, as they put "private intermediaries in the position of having to make decisions about the lawfulness or unlawfulness" of user-generated content. Such "intermediaries are not necessarily going to consider the value of freedom of expression when making decisions about third-party produced content for which they might be held liable." Brazil's own experience in courts shows how tricky the issue can be. InternetLab's research based on rulings involving free expression online, released five years after Marco Civil's approval, indicated that Brazilian courts of appeals denied content removal requests in more than 60% of cases. In the public hearing that Brazil's Supreme Court held to receive inputs on its cases about online intermediary liability, the Brazilian Association of Investigative Journalism (ABRAJI) presented data about takedown requests filed in courts from 2014 to 2022. According to ABRAJI, at some point in the judicial proceedings, judges agreed with content removal requests in around half of the cases, and some of those decisions were later reversed.

Yet, PL 2630's notice-and-takedown mechanism attached to a security protocol seems to play a moderating role amidst an increasing push from the Executive branch and the Supreme Court to expand the exceptions to Marco Civil's general rule on online intermediary liability. The fact that this mechanism would be limited in time and in scope could help address some of the concerns above, as could Article 18's rules, which include users' right to appeal content moderation decisions. However, the overall dynamic of the security protocol still poses serious problems. A paramount concern is ensuring that crisis situations don't become permanent through extensions of duration or repeated invocation of measures that, by definition, are restricted to exceptional circumstances. Clear and effective controls are required so that a legal discipline for crisis situations doesn't turn into the standard regulation.

Here are the main issues and possible mitigations Brazilian lawmakers should consider:

  • Article 12 defines a crisis situation in an extremely broad way. The imminence of risks set in Article 7, which includes a range of issues (e.g., the dissemination of illicit contents listed in Article 11 and risks to freedom of expression, public health, and the democratic State), or the "negligence or insufficiency of a provider's action" is enough to trigger the implementation of the security protocol. The criteria to typify what constitutes such insufficiency or negligence depend on regulation that is yet to exist. However, the provision doesn't relate the application's negligent action to the risks set in Article 7. An insufficiency or negligence of a provider related to any matter, or an imminent risk set in Article 7, is enough to constitute a crisis situation. This also means that even if providers are taking important steps in good faith to address Article 7's imminent risks, they can still be subject to the security protocol's exceptional measures. At a minimum, the provision should combine both requirements, using "and" instead of "or" in its language. But there are still other critical concerns.
  • Previous versions of the bill qualified the protocol's situation of imminent risk. They referred to "imminent risks of harm to the collective dimension of fundamental rights." This is a critical qualifier, especially because Article 7 is still quite broad in the risks it lists. While its checklist may work to guide big providers' impact assessments, it raises concerns about possible abusive interpretations and malicious uses in the context of a security protocol that imposes exceptional obligations on internet applications. Hence, there should be a risk of harm to the collective dimension of fundamental rights before an authority may put this security protocol in place. Furthermore, the bill should be explicit that the authority's assessment must follow strict necessity and proportionality standards when making such a decision.
  • The bill is silent about which authority has the power to declare a crisis situation and establish the security protocol's terms. We address the bill's oversight design in the next section, and the fact that it currently lacks a proper democratic oversight structure is a major concern for the application of a security protocol. The 2015 Joint Declaration states that "[a]dministrative measures which directly limit freedom of expression, including regulatory systems for the media, should always be applied by an independent body. It should also be possible to appeal against the application of administrative measures to an independent court or other adjudicatory body." In this regard, and building on important related safeguards, the security protocol mechanism should include robust checks and balances, including: (i) an independent government entity or oversight structure that assesses the crisis situation based on clear, transparent criteria and determines the implementation, or extension, of the security protocol by a reasoned decision within a public administrative proceeding abiding by due process safeguards; (ii) a referendum or prior consultation of a multistakeholder, participative council as part of the decision proceeding (both for implementing and extending the protocol); (iii) publicity of the resolution itself implementing or extending the security protocol, not only a summary, just like the administrative proceeding; (iv) the right to judicial review; (v) proper ongoing transparency over providers' measures deriving from the security protocol and government-related oversight activities.
  • Finally, Article 16, which establishes the notice mechanism, leaves crucial definitions to further regulation. It should at least clarify that user notices must specifically indicate the location of the allegedly unlawful material and explain why the user deems it unlawful. The bill should also make it explicit that the due process safeguards Article 18 ensures for users who have their content restricted remain applicable in the context of a security protocol, covering the providers and types of content affected, and the entire period the protocol is in effect.