Reference
Mohamed, Shakir, Marie-Therese Png, and William Isaac. 2020. “Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence.” Philosophy & Technology 33 (4): 659–84. https://doi.org/10.1007/s13347-020-00405-8.
Keywords
Colonialism, decolonization, values, ethics, artificial intelligence, critical science
Notes
This paper examines advances in artificial intelligence through the lens of critical science, post-colonial, and decolonial theory. The authors acknowledge the positive potential of AI technologies, but here they highlight its considerable risks, especially for vulnerable populations. They call for AI communities to adopt a proactive approach, drawing on decolonial theories whose historical hindsight identifies patterns of power that remain relevant – perhaps more than ever – amid the rapid advance of AI technologies.
“The years ahead will usher in a wave of new scientific breakthroughs and technologies driven by AI research, making it incumbent upon AI communities to strengthen the social contract through ethical foresight and the multiplicity of intellectual perspectives available to us, ultimately supporting future technologies that enable greater well-being, with the goal of beneficence and justice for all” (p.659).
AI does not consist simply of technology; it is both an object and a subject, comprising artifacts but also systems of networks and human institutions (p.660). The authors ask crucial questions about power and values:
“What values and norms should we aim to uphold when performing research or deployment of systems based on artificial intelligence? In what ways do failures to account for asymmetrical power dynamics undermine our ability to mitigate identified harms from AI? How do unacknowledged and unquestioned systems of values and power inhibit our ability to assess harms and failures in the future?” (p.660).
Contextual values
Science is a product of human values, developed in specific contexts where personal and societal concerns shape how scientific knowledge is produced and used. While science may employ positivistic epistemologies, it is not positivistic at its core, because it remains a product of human values.
Unethical research and applications in AI
The authors cite recent studies of AI use in high-stakes sectors such as autonomous weapons, so-called “predictive policing,” and decisions about who receives what kind of health care. Automated decision-making can affect the life chances of billions of people by limiting access to opportunities and resources, especially for members of already-marginalized communities subjected to a long history of disadvantage and bias. AI can both amplify and obscure power asymmetries, legitimating long-standing biases and harms through automated decisions that are claimed to be objective but are in fact shaped by structural inequities. The authors quote R. Benjamin (2019), who notes that:
“whereas in a previous era, the intention to deepen racial inequities was more explicit, today coded inequity is perpetuated precisely because those who design and adopt such tools are not thinking carefully about systemic racism” (p.662).
Critical science
They call for the field of AI to develop foresight approaches grounded in the critical sciences, defined by Guidotti as “a mode of science in which scientific methods are used to critique the adverse consequences of technological development” (Guidotti, T. L. 1994. “Critical Science and the Critique of Technology.” Public Health Reviews 22 (3–4): 235–50).
The term “critical science” does not represent a singular theoretical approach, but connotes a range of critical theories. Or in the authors’ words:
“The critical science approach represents a loosely associated group of disciplines that seek to uncover the underlying cultural assumptions that dominate a field of study and the broader society. Scholarship in this domain (Winner 1980; Nissenbaum 2001; Greene et al. 2019) aims not only to explain sociotechnical phenomena, but to also examine issues of values, culture and power at play between stakeholders and technological artefacts. We use a necessarily broad scope of critical science theories due to the expansive range of applications of AI, but seek to emphasise particularly the role of post-colonial and decolonial critical theories. While decolonial studies begins from a platform of historical colonialism, it is deeply entangled with the critical theories of race, feminism, law, queerness and science and technology studies” (p.662).
The main focus of this article, given the particular promises and pitfalls of AI, is “post-colonial and decolonial critical theories” (p.662). In service of that objective, the authors summarize the meaning of coloniality and decolonial theory, providing a kind of literature review that describes and defines these terms:
“Decolonisation refers to the intellectual, political, economic and societal work concerned with the restoration of land and life following the end of historical colonial periods (Ashcroft 2006). Territorial appropriation, exploitation of the natural environment and of human labour, and direct control of social structures are the characteristics of historical colonialism. Colonialism’s effects endure in the present, and when these colonial characteristics are identified with present-day activities, we speak of the more general concept of coloniality” (p.663).
The authors further define coloniality:
“Coloniality is what survives colonialism (Ndlovu-Gatsheni 2015). Coloniality therefore seeks to explain the continuation of power dynamics between those advantaged and disadvantaged by ‘the historical processes of dispossession, enslavement, appropriation and extraction […] central to the emergence of the modern world’” (p.663).
The authors are primarily focused on “structural decolonization,” which “seeks to undo colonial mechanisms of power, economics, language, culture and thinking that shapes contemporary life: interrogating the provenance and legitimacy of dominant forms of knowledge, values, norms and assumptions” (p.664). They go into considerable detail about the decolonial landscape.
The relevance of coloniality to AI is made explicit in this article. Territorial coloniality refers to physical spaces expropriated and exploited by colonial powers. In a similar vein, digital spaces have become sites of resource extraction and exploitation, described by other scholars as the biopolitical public domain (Cohen 2018). Data-centric epistemologies and capitalist economic models are expressions of coloniality “in how they impose ways of being, thinking, and feeling that leads to the expulsion of human beings from the social order, denies the existence of alternative worlds and epistemologies, and threatens life on Earth” (Ricaurte 2019; quoted on p.665).
Algorithmic coloniality
The authors build on the term “data colonialism” by using “algorithmic coloniality” as a frame for algorithmic decision-making that shapes labor markets, geopolitical power, and discourses on ethics, fairness, and accountability.
Algorithmic oppression
“Algorithmic oppression extends the unjust subordination of one social group and the privileging of another—maintained by a “complex network of social restrictions” ranging from social norms, laws, institutional rules, implicit biases and stereotypes (Taylor 2016)—through automated, data-driven and predictive systems. The notion of algorithmic or automated forms of oppression has been studied by scholars such as Noble (2018) and Eubanks (2018)” (p.666).
Algorithmic exploitation
As defined by the authors, algorithmic exploitation “considers the ways in which institutional actors and industries that surround algorithmic tools take advantage of (often already marginalised) people by unfair or unethical means, for the asymmetrical benefit of these industries. The following examples examine colonial continuities in labour practices and scientific experimentation in the context of algorithmic industries” (p.667). They provide these examples:
- Ghost workers who label data used by AI systems, typically working remotely for very low pay or, in the case of prison labor, for no pay at all.
- Beta-testing, which happens routinely when companies roll out new software systems and interfaces. This is often done on marginalized populations in nations with weak data protection laws, a practice the authors refer to as ethics dumping. The exploitation by Cambridge Analytica in Kenya and Nigeria is a prime example.
Algorithmic dispossession
This seems like a very useful term:
“Algorithmic dispossession, drawing from Harvey (2004) and Thatcher et al. (2016), describes how, in the growing digital economy, certain regulatory policies result in a centralisation of power, assets, or rights in the hands of a minority and the deprivation of power, assets or rights from a disempowered majority” (p.669).
Tactics for decolonial AI
The authors present a framework of three tactics for designing AI:
- Supporting a “critical technical practice of AI”
- Establishing reciprocal engagements and reverse tutelage
- Renewing affective and political communities
“How we build a critical practice of AI depends on the strength of political communities to shape the ways they will use AI, their inclusion and ownership of advanced technologies, and the mechanisms in place to contest, redress and reverse technological interventions… The decolonial imperative asks for a move from attitudes of technological benevolence and paternalism towards solidarity. This principle enters amongst the core of decolonial tactics and foresight, speaking to the larger goal of decolonising power” (p.676).
The authors make a strong argument for describing current developments in AI, and the perpetuation and exacerbation of asymmetrical power relations, as algorithmic coloniality. They offer a set of terms that detail the specific elements of that coloniality, along with real-life examples that give it flesh.