Lawless Land: Revista Pesquisa Fapesp

The first records of disinformation influencing political processes date back to ancient Rome (753 BC – AD 476), when Octavian (63 BC – AD 14) used short phrases stamped on coins to discredit his enemies and become the first ruler of the Roman Empire (27 BC – AD 476). But, as Portuguese historian Fernando Catroga of the University of Coimbra recalled in an article published in 2020, the emergence of digital technologies has given the phenomenon new manifestations, one of whose current characteristics is the impulse to go beyond manipulating facts and seek to replace reality itself. Focusing on this issue, a study conducted between April 2020 and June 2021 by researchers at the University of São Paulo Law School (FD-USP), funded by FAPESP, analyzed how Brazilian legal institutions reacted to disinformation spread through digital media during the 2018 election period. The lack of consensus around the concept of disinformation and the difficulty of quantifying its consequences were identified as central obstacles to developing legislation.

The study’s coordinator, jurist Celso Fernandes Campilongo of FD-USP, notes that 15 years ago public opinion was shaped mostly by long, reflective analyses disseminated centrally by the mainstream media. Today, it must contend with an avalanche of short, fragmented information posted by people with a strong social media presence. “In a way, memes and jokes have replaced analytical text,” he says. Noting that access to social networks can be considered more democratic, Campilongo cites the Continuous National Household Sample Survey – Information and Communication Technology (Continuous PNAD – ICT), published by the Brazilian Institute of Geography and Statistics (IBGE) in April 2020. According to its data, three out of four Brazilians used the internet in 2019, and mobile phones were the device most often used for that purpose. The survey also showed that 95.7% of Brazilians with web access use the network to send or receive text, voice, or picture messages through messaging apps.

The jurist notes that since the proclamation of the republic in 1889, elections in the country have been marked by authoritarian practices. As an example, he cites the Credentials Verification Commission. Created during the Empire, the body gained importance during the First Republic, mainly from 1899 onward, through actions promoted by then president Manuel Ferraz de Campos Sales (1841–1913). It allowed the central government, for example, to strip opposition candidates of office even after they had been elected. For Campilongo, the persistence of the so-called halter vote (voto de cabresto), in which voters cast ballots for candidates nominated by political bosses or their electoral agents, and the fact that people who could not read or write gained the right to vote only in 1988, are further examples of this phenomenon.

Defining concepts
The United Nations Educational, Scientific and Cultural Organization (UNESCO) advises against using the term “fake news.” The organization notes that the word “news” refers to information that is verifiable and of public interest; information that does not meet this standard should not be called news. Instead of fake news, it suggests the term “disinformation,” referring to deliberate attempts to confuse or manipulate people through the transmission of false data. The term “misinformation,” in contrast, should be used for misleading content shared without intent to manipulate.

“Given this history, the 2018 elections were marked by the unprecedented role of digital communication platforms, including social networks and private messaging services, which became vehicles for widespread disinformation. We analyzed how this phenomenon reverberated through the legal system,” says jurist Marco Antonio Loschiavo Leme de Barros of Mackenzie Presbyterian University, another author of the study, which is signed by five researchers. In 2015, the Federal Supreme Court (STF) banned companies from funding campaigns and parties, with the aim of reducing the weight of economic power in electoral disputes and leveling the field for representatives of disadvantaged social groups. According to Loschiavo, the measure destabilized the electoral market: financial support migrated to other arenas, and business owners mobilized to pay for the mass dissemination of information on social networks as a way to protect their interests.

Regarding initiatives to regulate the digital environment before 2018, another member of the research team, jurist Lucas Fucci Amato of FD-USP, explains that the Marco Civil da Internet, in force since 2014, was the first such legislation to be passed. It establishes principles, guarantees, rights, and duties for those who use the network, as well as guidelines for state action. Another milestone is the General Data Protection Law (LGPD), enacted in 2018, which governs the processing of personal data in the digital environment. In 2019, electoral law began prohibiting mass messaging via apps.


Regarding the dissemination of false information on political issues, Amato notes that electoral law classifies the offenses of slander, defamation, and injury related to the disclosure of untrue facts. In addition, Law No. 9,504/1997 provides a right of reply in cases involving the publication of incorrect or offensive claims and makes it a crime to post content on the internet that defames the image of candidates, parties, or coalitions. “These laws were created to try to control the conduct of large media companies and address cases of slander, defamation, and injury that occur in a centralized way. With the advent of digital platforms, communication has become faster and more decentralized, and the repressive control provided for in the earlier legislation has ceased to work,” notes Loschiavo.

“Because of these characteristics, we found that the justice system had difficulty dealing with transnational communication flows and regulating the dissemination of disinformation in the digital environment,” Amato says of the study’s results. In mapping recent efforts by public authorities to control disinformation, the researchers note that the judiciary has taken cautious measures to protect the digital environment, while in the legislature, disagreements have repeatedly delayed votes on some bills and postponed the entry into force of others already approved. “To resolve cases involving accusations of spreading false data, judges have resorted to defining general principles rather than setting clear and precise rules, and to hiring experts in technology and digital law, including companies in the sector,” says the jurist.

Regarding the conduct of the Superior Electoral Court (TSE), Amato highlights that until early 2021 its decisions were monocratic, that is, handed down by a single judge rather than by the full bench, which does not help consolidate case law. “These elements show that the TSE acted in favor of freedom of expression, so as to avoid censorship, at the expense of proposing greater control over the dissemination of false content on private messaging services and social networks,” Amato says.

Algorithm against misinformation
Researchers at the Center for Mathematical Sciences Applied to Industry (CeMEAI), based at the Institute of Mathematics and Computer Science of the University of São Paulo (ICMC-USP) in São Carlos and one of the Research, Innovation, and Dissemination Centers (CEPID) funded by FAPESP, have created an algorithm that detects disinformation with an accuracy of 96%. The tool runs on a website and draws on mathematical models trained on more than 100,000 news items published over the past five years. “The algorithm tends to classify texts with an imperative tone or a sense of urgency, for example, as false, but it also analyzes the context of the words before predicting whether the content in question is false,” says statistician Francisco Louzada Neto, director of technology transfer at CeMEAI. According to him, the platform has received more than 4,000 visits since February 2022 and must be constantly updated to keep pace with the changing context in which false information spreads.
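The CeMEAI model itself is not public, but the general idea it describes, scoring a text's wording (urgent, imperative phrasing versus neutral reporting) against labeled examples, can be sketched with a toy naive Bayes classifier. Everything below (the sample texts, labels, and function names) is illustrative and assumed for this sketch; it is not taken from the actual tool:

```python
import math
from collections import Counter

def tokenize(text):
    """Lowercase, whitespace-split tokenizer (a deliberate simplification)."""
    return text.lower().split()

def train(samples):
    """Count word occurrences per class (1 = misleading, 0 = reliable)."""
    counts = {0: Counter(), 1: Counter()}
    docs = Counter()
    for text, label in samples:
        counts[label].update(tokenize(text))
        docs[label] += 1
    return counts, docs

def predict(counts, docs, text):
    """Naive Bayes with Laplace smoothing: return the higher-scoring class."""
    vocab = set(counts[0]) | set(counts[1])
    best_label, best_score = None, -math.inf
    for label in (0, 1):
        total = sum(counts[label].values())
        # Log prior: how common each class is in the training data.
        score = math.log(docs[label] / sum(docs.values()))
        for word in tokenize(text):
            # +1 smoothing so unseen words do not zero out the probability.
            score += math.log((counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical training snippets: urgent, imperative phrasing labeled 1.
samples = [
    ("URGENT share this now before they delete it", 1),
    ("Forward immediately the truth they hide from you", 1),
    ("Act now or lose everything send to all your contacts", 1),
    ("Ministry publishes official vaccination schedule for 2022", 0),
    ("Study in peer-reviewed journal reports new survey results", 0),
    ("Electoral court announces certified results of the vote count", 0),
]
counts, docs = train(samples)

print(predict(counts, docs, "URGENT forward this now"))                    # 1
print(predict(counts, docs, "Official journal publishes survey results"))  # 0
```

A production system would replace the whitespace tokenizer and word counts with richer features that capture word context, which is why the researchers note that tone alone is not enough for the prediction.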

Loschiavo notes that the research also found that the main means of combating disinformation adopted by Brazilian courts was ordering the removal of misleading content, along with requiring platforms to flag information that may be incorrect. The notions of fake news and disinformation remain open, he explains: electoral law contains no definition of them, which poses an interpretive problem for the courts.

Another challenge for the justice system, according to Loschiavo, is the difficulty of demonstrating the potentially harmful use of disinformation and its capacity to interfere with electoral results. “In the study, we found that since 2018 the judiciary has recognized that the most effective way to deal with disinformation is through preventive measures. As a result, the TSE has begun signing agreements with platforms that require them to adopt anti-disinformation programs, moderate content, introduce source-verification systems, limit message forwarding, and block fake accounts,” he explains.

Discussions triggered by the 2018 electoral context led to the drafting of Bill No. 2,630/2020, known as the fake news bill (PL das Fake News), currently under consideration in Congress. The bill requires digital platforms to label advertising content so that audiences can distinguish it from news. It also states that companies must have representatives in Brazil capable of providing explanations to the courts on request. According to Loschiavo, the bill requires technology companies to identify and report inauthentic behavior, that is, accounts that mimic the identity of third parties in order to disseminate, on a large scale, content aimed at destabilizing public debate. He adds that the bill introduces the concept of “regulated self-regulation,” creating a hybrid model in which digital platform companies, government representatives, and civil society jointly develop standards for regulating the digital environment. “However, this mechanism runs the risk of public interests being captured by private ones,” he observes, pointing to weaknesses in the proposal.

Along the same lines, journalist Ivan Paganotti, a researcher at the Methodist University of São Paulo, argues that the lack of a clear definition of disinformation in the bill could threaten freedom of expression, as has happened under the legislation of countries such as Malaysia. “In the Malaysian law, the definition of disinformation is so broad that any information lacking official confirmation can be deemed false. Since the law came into force in 2018, many people have faced unjust penalties.” He points to a similar dynamic in Russia, where anti-disinformation legislation has been used to suppress news critical of the country’s role in the war against Ukraine.
