** This post is one section of a more extensive piece on Brazil’s platform accountability and regulation debate. Click here to read the entire content.

The progression of PL 2630's versions reflected a choice for a process-based approach, rather than a content-focused one, within a regulation initiative aiming to advance platform accountability. However, after amendments earlier this year, the bill now contains a list of illicit practices, connected to illicit content, that internet applications “must act diligently to prevent and mitigate (…) making efforts to improve the fight against the dissemination of illegal content generated by third parties.” This relates to a duty of care obligation that the bill doesn't define but nevertheless operationalizes, mainly in Article 11. The list of illicit practices in Article 11 points to provisions in six different laws that amount to around 40 criminal offenses, each one containing a set of elements that must be present for the conduct to be illegal. Some offenses also include grounds that exclude certain conduct from constituting a crime. For instance, both Brazil’s Antiterrorism Law (Law n. 13.260/2016) and the crimes against the democratic state set out in the Penal Code don't apply to critical political demonstrations based on constitutional rights. Under Article 11 of the bill, it would be up to the internet application to consider all these elements and assess whether conduct or content visible through its platform constitutes criminal activity.

In some cases, it’s even harder to understand what exactly the provider should check, or whether it is something the provider should check at all, despite its inclusion in the list of Article 11 criminal offenses. For example, Article 11 generically refers to the crimes against children and adolescents of Law n. 8.069/1990. Among these criminal offenses is the failure of a doctor, nurse, or the head of a healthcare facility to correctly identify a newborn and its birth mother at the time of delivery (Article 229 of Law n. 8.069/1990). What’s the duty of care expected from internet platforms here? This rule is an example of a provision encompassed by Article 11 that doesn't seem to have any clear relationship to online platforms. Article 11 is also not very clear about how, and by which institution(s), internet applications' compliance with duty of care obligations will be assessed. It states that evaluations will not focus on isolated cases and will include information internet applications provide to authorities on their efforts to prevent and mitigate the listed practices, as well as analysis of platforms’ reports and of how they respond to notices and complaints.

Within the same bill, Article 45 stipulates that “when the provider becomes aware of information that raises suspicion that a crime involving threat to life has occurred or may occur, it must immediately report its suspicion to the competent authorities.” While a crime involving a threat to life is undoubtedly an urgent and dire situation, Article 45 establishes a new policing role for internet applications that, even within this strict scope, may give rise to controversial outcomes, potentially affecting, for example, women in Brazil seeking information online about safe abortion.

Duty of care obligations as set out in PL 2630 rely on a regulatory approach that reinforces digital platforms as points of control over people's online expression and actions. They require internet applications to judge whether acts or content are lawful based on a list of complex criminal offenses, as if it were simple to program content moderation tools and processes to recognize every element that constitutes each offense. On the contrary, these are often close calls that even judges and juries may struggle with. In many cases, users disseminate sensitive content precisely to call out institutional violence, human rights violations, and the perpetration of crimes in conflict situations. Sharing videos on social networks that expose cases of discrimination contributes to holding perpetrators accountable. During the wave of protests in Chile, internet platforms wrongfully restricted content reporting the police’s harsh repression of demonstrations, deeming it violent content. In Brazil, we saw similar concerns, for example, when Instagram censored images of the 2021 massacre in the Jacarezinho community, the most lethal police operation in Rio de Janeiro’s history. In other geographies, the quest to restrict extremist content has already led to the removal of videos documenting human rights violations in conflicts in countries like Syria and Ukraine.

As the Office of the Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights (IACHR) pointed out, as private actors, internet applications "lack the ability to weigh rights and to interpret the law in accordance with freedom of speech and other human rights standards," particularly when the failure to restrict specific content can lead to administrative penalties or legal liability.

It’s not that internet applications shouldn’t make efforts to prevent the prevalence of pernicious content on their platforms, or that we don’t want them to do a better job when dealing with content capable of causing serious collective harms. We agree they can do better, especially by considering local cultures and realities. We also agree that their policies should align with human rights standards and that they should consider the potential impacts of their decisions on human rights, preventing and mitigating possible harms.

However, we should not confuse platform accountability with reinforcing digital platforms as points of control over people's online expression and actions. This is a dangerous path considering the power big platforms already have and the increasing intermediation of digital technologies in everything we do. Article 11's approach is also problematic in that it establishes such control based on a list of potentially unlawful practices that political forces can change and expand at any time, and that can lead to opportunistic or abusive enforcement to restrict access to information and silence criticism or dissident voices.

Rather, platform accountability prioritizes a process-based and systemic approach by which the provider assesses and addresses the negative impacts of its activities on human rights, in order to prevent and mitigate them. This is consistent with the UN Guiding Principles on Business and Human Rights. PL 2630 itself has provisions on systemic risk analysis and mitigation measures related to companies' activities. Brazilian lawmakers should prioritize this approach over the concerning “duty of care” obligations.

Moreover, the concept of duty of care, as we currently see it in the Brazilian debate, carries yet another risk: it allows for interpretations that internet applications should engage in general monitoring of the user content they host. Such interpretations are not explicitly ruled out in the text of PL 2630, as they are, for example, in the EU's Digital Services Act (DSA).
