
Outside the Box: Why Artificial Intelligence Needs Decolonial Studies

AI is deeply entangled with the legacies of colonialism. Analyzing it exposes the hidden human labor that sustains AI systems and reveals how algorithmic biases reflect and reinforce historical patterns of exclusion. These technologies facilitate exclusion by favoring the perspectives and ideologies of developed nations while neglecting indigenous knowledge systems. A truly equitable AI future demands a decolonial approach centered on diverse voices, transparent labor practices and epistemic justice.

May 26, 2025 06:46 EDT

AI’s influence spans the globe. These constellations of technology, as some governments label them, are a dynamic and influential aspect of today’s society. And yet AI systems, and the analysis of such systems, stem from Western and Eurocentric traditions. An inadvertent consequence of this primacy of Western and Eurocentric ideals is that AI has historically reflected colonial epistemologies and revealed the biased underbelly entrenched in modern societies. The esteemed data and AI studies scholar Payal Arora discusses how the development and global impact of AI systems unequally benefit citizens of the Global North, who enjoy more liberal and protective laws than their counterparts in the Global South, where often illiberal legislative provisions constrain users’ behavior and limit the potential benefits of access to AI.

Looking more deeply at how AI performs, we find that the models currently deployed across the world’s diverse cultures have been trained on data sourced from a narrow subset of the global population. Moreover, research by authors such as Lisa Gitelman and Antoinette Rouvroy leads us to the conclusion not only that the concept of “raw” or unbiased data simply does not exist but also that data is always contextualized for certain pragmatic ends.

The Global North has an unfair advantage in the development and implementation of AI systems. Furthermore, the datasets on which these systems rely skew heavily toward European and North American cultures. We need to acknowledge the emergence of a new form of colonialism that, according to Nick Couldry and Ulises Mejias, exploits AI systems and data as “tools for exploiting human life for power and for capital.” They make the specific point that, because of the commercial culture of tech platforms focused on behavioral prediction, data colonialism repeats the traditional colonial framework of extraction for profit, but with data itself as the extractable commodity.

Reacting to the ChatGPT revolution

Since late 2022, AI has taken the world by storm, leading to a global media and regulatory frenzy. It has begun to dominate the economic logic of multiple sectors of industry. But even before the ChatGPT revolution, the question of how AI would influence society, the economy, ethics, human cultures and human identity had been the object of vibrant, contradictory debate. We are only beginning to understand how the explosion of AI-based activity will affect the field of decolonial studies.

Like most facets of modernity since the advent of the Industrial Revolution, AI is not immune to the remnants of colonialist logic and culture. The work of semiotician Walter Mignolo on the theme of decoloniality can offer some guidance. In his discussion of coloniality and decolonization, Mignolo raises pertinent points that may serve as a backdrop to continuing decolonial analysis of AI. We can start by acknowledging the epistemic and lifeworld-shaping agendas behind colonial praxes. Mignolo insists that the assumptions and regulations of Western systems of thought must be challenged if we hope to move past colonial models of thought.

AI as it exists today clearly diminishes what anthropologists and ethnologists recognize as Indigenous Knowledge Systems (IKS): indigenous perceptual and interpretative frameworks for understanding the world. The development and integration of IKS is a high priority of decolonizing practices, yet AI models do not overtly reflect these systems. Consequently, the dominant model of algorithmic intelligence currently available reifies and perpetuates a Eurocentric methodology and unconsciously imposes it on the diversity of human cultures.

Mignolo’s work discusses how coloniality forms the dark undercurrent of modernity; his most striking claim, however, is that coloniality was instrumental to the development of modernity. We may similarly claim that the dark undercurrent of AI technologies is reflected in the global infrastructure driving AI and in the unequal relationships that people from various countries form with it. Just as coloniality was instrumental to modernity, certain infrastructural inequalities were instrumental to these systems. AI systems require vast amounts of training data, and this data is often biased toward white or white-passing individuals. Early AI systems have classified black people as gorillas and failed to recognize darker skin tones on cameras. This misrepresentation of minorities reflects invisible biases that manifest themselves in AI technologies.

The hidden workforce powering AI

Behind every sleek AI model or chatbot lies a global network of human labor that remains largely invisible and underpaid. While AI’s intellectual development is centered in the tech hubs of the Global North, much of the work that makes these systems function is carried out by people in the Global South. This includes the often traumatic job of labeling harmful or explicit content so AI models know what to avoid.

In early 2023, reports emerged that OpenAI had outsourced content moderation tasks for ChatGPT to data workers in Kenya. Their job? Sift through deeply disturbing material — ranging from hate speech and racial slurs to graphic descriptions of sexual violence — to help train the AI not to produce it. These workers reported experiencing psychological distress and burnout, all while earning less than $2 an hour. Similar stories have come out of Asia and Latin America, where data annotators operate far from the public eye — and even further from the legal protections and workplace rights enjoyed by workers in Silicon Valley.

This isn’t a glitch in the system. It’s part of the system. The same logic that once drove colonial extraction of raw materials is now driving the extraction of cognitive and emotional labor from vulnerable populations. Cheap labor, lack of regulation and economic precarity make the Global South an ideal backend for the data-hungry engines of AI.

A growing number of low-profile digital jobs — called “microwork” — involve breaking down massive tasks into tiny, repetitive actions. From image tagging to audio transcription, this work feeds the AI economy but offers little in return. In Venezuela, for instance, even highly educated engineers have turned to microwork after the country’s economic collapse left few other options.

This kind of digital piecework is precarious by design. Workers often don’t know who they’re working for, can be dropped without warning and have little recourse to challenge unfair conditions. Yet their contributions are essential. Without them, the AI models powering search engines, language tools and image generators couldn’t function.

Technology is not neutral

AI systems, like the societies that produce them, are shaped by their histories and biases. The assumption that data is neutral — or that machines can somehow rise above human prejudices — is a dangerous myth. Historically, some of the most egregious algorithmic failures have disproportionately affected people of color. For example, facial recognition software has struggled to identify non-white faces. COMPAS, a risk-assessment algorithm used in US courts, was found to unfairly label black individuals as higher-risk for reoffending.

These aren’t accidents — they’re symptoms of systems built on biased data and narrow perspectives. As AI is trained on past behaviors, texts and imagery, it can unintentionally reinforce stereotypes. Ask a generative AI to produce an image of an “Indian,” and you’re more likely to see clichés like turbans and outdated depictions of traditional clothing. This happens because the data used to train the model often reflects the assumptions and priorities of developers in the Global North.

Even the architectural design of early infrastructure, as scholar Langdon Winner once noted, can encode social bias — like a bridge built too low for buses, effectively excluding poor and minority communities. The same logic holds for AI. The algorithms may be new, but the exclusions they enact along the lines of race, gender and religion are familiar.

Whose AI, and for whom? A call for decolonial AI

As countries around the world race to lead in AI development, national policies are shaping the way the technology is designed, adopted and governed. These strategies are often couched in language about ethics and innovation — but who actually benefits? As Mignolo reminds us, coloniality isn’t just about economics or politics. It’s about control over knowledge, meaning and representation. And AI — built on data, driven by algorithms and shaped by policy — is now one of the most powerful tools for that control.

If AI is to serve a truly global population, we must confront the deep-rooted inequities in how it is built, maintained and deployed. This means recognizing the hidden labor behind it. It means building models that reflect the diversity of human experience, not just those privileged by history. And it means creating new spaces — both intellectual and institutional — for voices from the Global South to join and meaningfully shape the conversation.

Because until we address the colonial legacies embedded in AI, the future it promises will remain unequally distributed.

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
