Brazil’s federal elections are set for October 7, 2018, but the fear that “fake news” online might unfairly interfere with the electoral process has loomed over the country for far longer than this year’s campaign. In June, the President of the Superior Electoral Court declared that such interference could lead to the elections' annulment. The same month, the court and representatives of political parties signed a collaboration agreement pledging that they would not spread false content themselves. Companies, lawmakers, and civil society have all made attempts to tackle the issue.

The Brazilian parliament has proposed over 15 bills that attempt to deal with fake news, each posing its own threats to free expression. The vast majority of these bills treat fake news as a criminal offense. One bill proposes penalties of up to eight years in prison for “creating, disclosing or sharing false news that may modify or misrepresent the truth about a natural or legal person that affects the relevant public interest” on the Internet. Another aims to hold social media platforms accountable for "disseminating false, illegal or harmfully incomplete information to the detriment of individuals or companies.” It establishes a R$50 million (about USD 12.8 million) fine for companies that don’t delete such posts within 24 hours and proposes that companies create filters and tools to prevent the spread of fake news. Many of the bills would deeply jeopardize the general rule set in the Marco Civil (Law n. 12.965/2014, art. 19, main section), Brazil’s civil rights framework for the Internet, which states that intermediaries should not be held liable for third-party content unless they fail to comply with a court order requiring a takedown.

Thanks to pressure from civil society and other initiatives, including a statement from the National Council for Human Rights, none of these bills have been approved. But Brazil’s political environment is unpredictable, and there are no guarantees that some of these troubling laws won’t be passed by the end of the year, especially if online misinformation during the federal elections turns against politicians currently in power.

In anticipation of pressures and judgments, platforms take on the role of online speech arbiters.

Online platforms in Brazil are keen to protect themselves from accusations of bias and avoid allegations that they are unduly influencing the federal elections. From the local application of their global policies to specific actions related to the Brazilian elections, companies are taking measures that affect who and what can be found online.   

For example, Google maintains Quality Rater Guidelines to assess the performance of its search engine. The company contracts "search quality raters" who follow these guidelines to rate search results so Google can improve its search algorithms. The guidelines reflect what Google regards as quality content and can affect content visibility in search results. In July, the guidelines were updated globally to consider the content creator’s reputation and whether link titles are shocking or exaggerated compared to the actual content. Even if Google’s intention is to establish objectivity, leaving search quality raters to make judgments about the accuracy of online content is a dangerous practice to apply to political news, especially during elections. These decisions potentially affect what voters will or will not see on Google, which holds almost 97% of the search engine market share in Brazil.

Google, along with Facebook, Twitter, and the First Draft project, is also implementing an initiative in Brazil called Comprova. Comprova brings together press associations and media outlets to cross-check information in order to curb the dissemination of false content across mobile devices and applications. Brazilians are encouraged to submit content to Comprova for verification and can also sign up to follow its fact-checking analyses. Once content is verified, the confirmation or counter-evidence is posted on Comprova’s website and disseminated by the partner platforms and newsrooms. However, the news outlets cooperating with Comprova are, with few exceptions, linked to mainstream media groups, which raises concerns about the possible bias of their assessments.

It’s worth remembering that traditional media, at least in Latin America, have a history of spreading misinformation themselves. And even fact-checking is not immune to controversy in Brazil.

Recently, one of Facebook's fact-checking partner agencies classified as false a story reporting that a person close to the Pope had tried to visit former President Lula in prison to give him a blessed rosary. As a result, the platform notified users who had shared the piece that the story was fake. After two days of buzz, the Vatican updated the official statement on its website and confirmed that the story was actually true, but the damage to the reputation of the website that published the news had already been done.

In another instance, Facebook removed 196 pages and 87 accounts that the company claimed violated its authenticity and spam policies. According to Facebook, the pages and accounts were part of a coordinated network that hid behind fake profiles and misled people in order to sow division and spread misinformation. The network was linked to an ultraconservative Brazilian political group, which classified the takedown as an act of censorship and challenged it in the Supreme Court. The group's public outcry also prompted the Federal Prosecutor’s Office to request that Facebook detail the removed pages and accounts. Although the constitutional claim to restore the content was denied, the case sparked heated discussions about platforms’ policies for policing conduct on their services, given their central role in hosting and facilitating public debate. Tackling fake profiles and automated content dissemination is also among the various measures Twitter is taking in Brazil ahead of the elections.

Even when ostensibly aimed at spam and abuse, such general rules and guidelines can have a negative effect on legitimate conversations. Facebook's real-name policy, for example, may expose activists to attacks and endanger people in vulnerable situations. Similarly, anti-spam measures could hinder coordinated actions that promote important causes. If companies are ramping up their policing of content in light of the elections, such actions require real public monitoring, true transparency, and due process for those affected, as underscored by the Santa Clara Principles.

Aside from content moderation, transparency is important when candidates and political parties target users with paid content. Political interests and other relevant personal data influence the campaign ads and proposals that pop up on the platforms. Providing users with tools that show who is targeting them (and why) helps to avoid deceit and manipulation. Likewise, listing all the paid ads a political campaign is running on a platform provides a clear picture of what candidates disseminate through their networks. Facebook sought to address these concerns by deploying specific tools in anticipation of the election. In Brazil, political parties, candidates, and party coalitions aren’t permitted to run paid political ads online other than boosting their posts as sponsored content on major platforms. This reinforces the role these companies play and how important transparency is for the online dissemination of political advertising and information.

A sample of civil society’s tools

Brazilian civil society organizations and activists are closely following this tricky issue, committed to thwarting threats to free speech, raising public awareness, and engaging in direct dialogue with legislators, judges, and other public authorities to prevent harmful responses to false news. On the transparency front, they bring tools of their own:

Você na Mira, an InternetLab project in collaboration with WhoTargetsMe, allows users to monitor the microtargeting of political ads on Facebook. It’s a browser extension for Mozilla Firefox and Google Chrome that collects data about the sponsored political ads users receive. The information is also shared anonymously with the project team, which analyzes the results. Here’s their latest report.

Fuzzify.me is also an extension for Firefox and Chrome; it creates a timeline where users can see all the sponsored ads that have targeted them on Facebook. In addition to compiling the ads and the reasons a particular user was targeted, the extension allows users to clean their ad categories in order to become “fuzzier” to the algorithm. It was deployed by Coding Rights with the support of the Mozilla Foundation.
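For readers curious about the mechanics, ad-monitoring extensions like Você na Mira and Fuzzify.me broadly work by injecting a content script into the Facebook page, detecting posts marked as sponsored, and reporting them for analysis. Below is a minimal, hypothetical TypeScript sketch of that pattern; the selectors, the "Sponsored" marker text, and the message format are illustrative assumptions, not either project's actual code.

```typescript
// Hypothetical content-script sketch (not the projects' real code).
// A WebExtension manifest would register this script to run on facebook.com.

// The WebExtension runtime provides this global in the browser.
declare const chrome: { runtime: { sendMessage: (msg: unknown) => void } };

// Scan the feed for posts that carry a "Sponsored" label. Both the role
// selector and the marker text are assumptions: a real extension must
// track Facebook's changing markup.
function collectSponsoredPosts(): string[] {
  const posts = Array.from(document.querySelectorAll("div[role='article']"));
  return posts
    .filter((post) => post.textContent?.includes("Sponsored"))
    .map((post) => post.textContent ?? "");
}

// Periodically rescan (the feed loads as the user scrolls) and hand
// anything found to the background script for anonymized storage.
setInterval(() => {
  const ads = collectSponsoredPosts();
  if (ads.length > 0) {
    chrome.runtime.sendMessage({ type: "sponsored-posts", payload: ads });
  }
}, 5000);
```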

Pegabot, by ITS Rio and the Institute of Equity & Technology, is a tool that checks an account's activity to assess the likelihood that the profile is a bot: the higher the score, the higher the chance it is one. For now, the platform is integrated with Twitter, but it will soon support other social media platforms. The project is currently in its testing phase.
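Pegabot's actual scoring model isn't described here, but the general approach behind bot detectors like it can be illustrated with a toy heuristic: combine a few account-level signals into a single 0–100 score. The features, thresholds, and weights below are invented for illustration only.

```typescript
// Toy bot-likelihood heuristic for illustration; NOT Pegabot's model.
interface AccountFeatures {
  accountAgeDays: number;    // days since the account was created
  tweetsPerDay: number;      // average posting volume
  followers: number;
  following: number;
  hasDefaultAvatar: boolean; // never set a custom profile image
}

// Returns a score in [0, 100]; higher means more bot-like.
function botScore(a: AccountFeatures): number {
  let score = 0;
  if (a.accountAgeDays < 30) score += 25; // brand-new account
  if (a.tweetsPerDay > 50) score += 30;   // implausibly high volume
  if (a.following > 0 && a.followers / a.following < 0.1) {
    score += 25;                          // follows far more than it is followed
  }
  if (a.hasDefaultAvatar) score += 20;    // low-effort profile
  return Math.min(score, 100);
}

// Example: a week-old, high-volume account with a default avatar scores 100.
console.log(botScore({
  accountAgeDays: 7,
  tweetsPerDay: 80,
  followers: 12,
  following: 900,
  hasDefaultAvatar: true,
}));
```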

Keeping track of how different players are dealing with misinformation, Intervozes is reporting on the nature and impact of these actions throughout the election period. And as a result of multistakeholder discussions, the Brazilian Internet Steering Committee launched a guide for public officials and users on how to counter misinformation, with practical tips to avoid being deceived by false content.

Amid all the concerns entangled in the electoral process, the issue of fake news has definitely left its mark on the 2018 Brazilian elections. A multitude of initiatives has arisen, and among the perils, gray areas, and interesting ideas, it's clear that easy, hasty, and scattered solutions are hardly the right ones. Any sound solution must consider users’ rights and the censorship and freedom of expression implications it may have.
