The role of the Public Defender's Office in combating and mitigating algorithmic discrimination

In May, I had the welcome opportunity to speak on a panel about algorithmic discrimination at the Annual Conference of the Public Defender's Office of Maranhão. I was especially pleased to see up close the role of the Public Defender's Office in supporting society and advocating for a judicial system that protects human rights, as well as in public education about rights, missions that are central at a time when consensus is being built around algorithms and artificial intelligence.

At the opening of the panel, I raised reflections on the elusive character of the very definition of artificial intelligence, a definition permeated by power dynamics. I recalled definitions such as the OECD's, which states that “an AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions that influence real or virtual environments.” In the nexus between goals, intentions, and harms in the development and deployment of algorithmic systems lie normative possibilities that are more accessible to society than a technocentric focus on the code.

The approach of identifying biases in order to combat algorithmic discrimination was explained by Prof. Dr. Ana Paula Cavalcanti. Among the problems presented as sources of distortion in these systems are: difficulty in managing high volumes of data; skewed sampling and pre-existing biases in the databases; disparities in computing power; disparities in sample size; and issues of ethical development, transparency, and accountability.
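To make one of these sources of distortion concrete, the sketch below is a minimal, hypothetical illustration in Python, not an example presented at the panel: the groups, scores, and cutoffs are all invented. It shows how skewed sampling combined with a disparity in sample size can translate into a disparity in error rates, since a single decision threshold fitted to a training base dominated by one group works well for that group and systematically fails the under-represented one.

```python
import random

random.seed(0)

def make_group(n, cutoff):
    """Synthetic group: each case has a score in [0, 1]; the true outcome is 1
    when the score exceeds the group's own cutoff."""
    cases = []
    for _ in range(n):
        score = random.random()
        cases.append((score, 1 if score > cutoff else 0))
    return cases

def error_rate(cases, threshold):
    """Share of cases the single-threshold rule classifies incorrectly."""
    wrong = sum(1 for score, label in cases if (score > threshold) != (label == 1))
    return wrong / len(cases)

def fit_threshold(cases):
    """Pick the threshold with the lowest overall error on the training base."""
    candidates = [i / 100 for i in range(101)]
    return min(candidates, key=lambda t: error_rate(cases, t))

# Hypothetical, skewed training base: group A is heavily over-represented,
# and the score relates to the real outcome differently in each group.
group_a = make_group(5000, cutoff=0.5)
group_b = make_group(100, cutoff=0.3)

threshold = fit_threshold(group_a + group_b)
print(f"threshold learned from the skewed base: {threshold:.2f}")
print(f"error rate for over-represented group A:  {error_rate(group_a, threshold):.1%}")
print(f"error rate for under-represented group B: {error_rate(group_b, threshold):.1%}")
```

The point is not the specific numbers, but that a rule optimized on aggregate data can look accurate overall while concentrating its errors on those least represented in the data.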

The concept of algorithmic discrimination has already been recognized by states around the world, such as the US – home of most big tech organizations – which defined it as follows: “Algorithmic discrimination occurs when automated systems contribute to unjustified differential treatment or unfavorable impacts on people based on their race, color, ethnicity, sex, disability, veteran status, genetic information, or any other category protected by law. Depending on the specific conditions, such algorithmic discrimination may violate legal protections.”

Building on algorithmic discrimination, I presented characteristics of what I call algorithmic racism. Commonly cited problems of algorithmic discrimination, such as feedback loops and the opacity of systems, connect with vulnerabilities imposed on Black populations, such as limited access to rights, the coloniality of the field, and differential visibility. The digital divides that generate difficulties of various kinds in digital access were also addressed, going beyond the concept of “digital exclusion”.
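The feedback loop, in particular, can be illustrated with a minimal, hypothetical simulation, again in Python and again an assumption of mine rather than an example from the panel: when an allocation rule concentrates attention wherever past records are highest, and new records can only be produced where attention is sent, an initial disparity keeps reproducing itself even when the underlying rates are identical.

```python
import random

random.seed(1)

# Two districts with the SAME underlying incident rate, but district 0 starts
# with more recorded incidents (for example, a legacy of heavier past patrolling).
true_rate = {"district 0": 0.10, "district 1": 0.10}
recorded = {"district 0": 30, "district 1": 10}
patrols_per_round = 20

for _ in range(30):
    # The allocation rule concentrates patrols wherever past records are highest...
    target = max(recorded, key=recorded.get)
    # ...and only patrolled places can generate new records, so the data
    # keep "confirming" the original skew: a feedback loop.
    new_records = sum(1 for _ in range(patrols_per_round) if random.random() < true_rate[target])
    recorded[target] += new_records

print("recorded incidents after 30 rounds:", recorded)
# District 0 keeps accumulating records while district 1 stays frozen at 10,
# even though both districts have exactly the same underlying rate.
```

The data end up ratifying the original skew, which is exactly where opacity becomes harmful: those affected have no way to see that the loop, and not the underlying reality, is driving the outcome.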

Thales Dias Pereira, Master of Laws and Public Defender, opened his talk by recalling the context of “subintegration”, in which no one is truly excluded from society and its dynamics. This observation paved the way for the reminder that it is practically impossible to be excluded from algorithmic processes, since citizens’ fundamental rights can be subject to algorithmic decisions even without their knowledge.

The public defender calls attention to the concept of algorithmic hypervulnerability, a condition that converges on individuals and groups who are unable to challenge or even understand the parameters used by algorithmic systems and who also experience intersectional inequalities.

For Dr. Thales Pereira, legal regulation should not undermine the positive aspects of the development of algorithmic systems, but it must take into account the particularities of hypervulnerable individuals and groups.

Echoing the speaker’s point that dialogue is needed between AI regulation and fundamental rights theory, I closed my presentation by recalling that we already have the research and the maturity needed to regulate artificial intelligence.

As for the role of the State, the report “Racial discrimination and emerging digital technologies: a human rights analysis”, by the UN Special Rapporteur and law professor E. Tendayi Achiume, elaborates on the possible impact of commitments such as “making human rights, racial equality and non-discrimination impact assessments a prerequisite for the adoption of systems based on such technologies by public authorities”, and recommends that “states should ensure transparency and accountability in the use of emerging digital technologies by the public sector and enable independent analysis and oversight, including through the use only of systems that are auditable.”

In the private sector, the notion of “Trustworthy AI”, through the Mozilla Foundation’s lens, includes building healthy environments in which professionals can play their part in producing reliable, transparent, and fair technologies. Among the foundation’s suggestions are:

  • There needs to be a major shift in corporate culture so that employees who advocate responsible AI practices feel supported and empowered. Evidence suggests that the actions of internal advocates will have no impact unless their work is aligned with organizational practices.
  • Engineering teams should strive to reflect the diversity of the people who use the technology, along racial, gender, geographic, socioeconomic, and accessibility lines.

Finally, I mentioned some proposals contained in written contributions submitted to the Jurists’ Commission responsible for supporting the drafting of a substitute bill on artificial intelligence in Brazil:

  • Black Jurists emphasized that “it is essential that debates, studies, and collaborations in the field of racial theories be carried out in order to incorporate into the bill provisions that protect the black population, as the social group most vulnerable to such technologies”.
  • The Women in Privacy Network, in an article signed by Eloá Caixeta and Karolyne Utomi, recommended “that practical cases around the world be studied with regard to the damage caused by AI when misused, such as, for example, the banning of the use of facial recognition technologies for police purposes by the government of San Francisco in the United States, where it was understood that the use of facial recognition can exacerbate racial injustice, with the technology presenting little benefit compared to the risks and damage it can generate”.
  • Researchers like Natane Santos have proposed that “the preponderance of mandatory human review of automated decisions presupposes the effective guarantee of the rights: (i) informational self-determination; (ii) non-discrimination and transparency; (iii) right to information about criteria and parameters of decisions, review, explanation and opposition to automated decisions”.

In summary, the multiplication of actors engaged in the challenge of AI regulation is a trend across all three sectors. Public defenders already act, and will increasingly act, in complex disputes over AI and algorithmic systems in defense of citizens. Multisectoral commitments can establish networks of exchange that ensure that emerging digital technologies serve the purposes and potential of human and social life, not just profit and the concentration of power.