How to raise levels of knowledge about AI, risks and governance?
This May, I participated in the meeting Closing Knowledge Gaps, Fostering Participatory AI Governance: The Role of Civil Society, organized in Buenos Aires by the National Endowment for Democracy, the International Forum for Democratic Studies, and Chequeado. Below, I transcribe part of my opening speech for the session on Communicating About AI Governance: Fairness, Accountability.
Responding to one of the opening questions for the speech, I would like to address the question: “What strategies might be effective for raising the baseline level of knowledge and awareness in this area?” A first collective effort, it seems to me, is to recognize that the ways communities relate to emerging digital technologies will differ according to their intersectional characteristics, their problems, and their vulnerabilities.
In a survey we carried out, with the support of Tecla (Ação Educativa – Assessoria, Pesquisa e Informação) and the Mozilla Foundation, we asked more than 100 Black Brazilian experts what their primary concerns were about emerging digital technologies. My hypothesis was that facial recognition and biometric surveillance would be the most mentioned topic. This was not the case: biometric surveillance was the second most common concern. The concern most cited by Black Brazilian experts was “Epistemic Racism and Erasure of Knowledge.” The intentional erasure of critical and antiracist perspectives on technology penalizes racial justice and compounds the lasting impacts of centuries of racism.
Technical, scientific, and humanist experts from minority groups are not heard on algorithmic harm, but it is possible to go even further. The importance of “experiential knowledge,” or “lived experience,” is a cornerstone of Black feminist thought, as in Patricia Hill Collins, who asserts the centrality of “experience as a criterion of meaning” as an epistemological position.
Thus, I believe that maintaining the “Black Box” discourse on digital algorithms is a central problem. This not only limits the practical understanding of problems related to bias, discrimination and racism in algorithms but also undermines the potential of alternative sociotechnical imaginaries.
I prefer the term “algorithmic systems” instead of artificial intelligence for this reason. What we call AI is not intelligent and is not artificial. Using this term can downplay the complexity of human intelligence and erase layers of appropriation of work embedded in algorithmic systems.
The common description of algorithmic systems as a black box between input, model, and output limits the understanding of their effects, impacts, and possibilities. When we explicitly include layers such as Context, Objectives, Beneficiary Actors, and Impacted Communities in the definition, we can bring more people into the debate.
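As a rough illustration of this expanded description, the layers above could be sketched as a simple data structure. This is only a conceptual sketch; the field names are my own hypothetical rendering of the layers mentioned, not an established schema:

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmicSystem:
    # The usual "black box" triad
    inputs: list[str]
    model: str
    outputs: list[str]
    # Additional layers that widen who can join the debate
    context: str = ""
    objectives: list[str] = field(default_factory=list)
    beneficiary_actors: list[str] = field(default_factory=list)
    impacted_communities: list[str] = field(default_factory=list)

# Hypothetical example: describing a biometric surveillance deployment
# in these terms makes explicit who benefits and who bears the impact.
surveillance = AlgorithmicSystem(
    inputs=["CCTV camera feeds"],
    model="face-matching model",
    outputs=["match alerts sent to police"],
    context="public transit stations",
    objectives=["identify persons of interest"],
    beneficiary_actors=["security agencies"],
    impacted_communities=["commuters, disproportionately Black residents"],
)
```

Framed this way, questions such as "who are the beneficiary actors?" and "which communities are impacted?" become part of the system's very description, rather than an afterthought.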
I would therefore recommend five commitments on the subject:
a. Inclusion of impacted communities in the debate. This is not merely a matter of closing a “knowledge gap”; it is a matter of overcoming epistemic racism in all institutions;
b. Commitment that governance mechanisms under construction, such as oversight and algorithmic impact assessment, become public and participatory mechanisms;
c. Building support mechanisms for vulnerable communities, including offering resources that bypass traditional gatekeepers;
d. Inclusion of stakeholders from the field of education that can engage more people, such as high school teachers;
e. Finally, a focus on goals and impacts in the public debate on artificial intelligence. Terms like bias, intent, or ethics are essential but not enough; we need to move towards algorithmic justice. And in some cases, this justice means not implementing the system at all, as with biometric surveillance and predictive policing.