Artificial Intelligence Colonialism: Environmental Damage, Labor Exploitation, and Human Rights Crises in the Global South
The artificial intelligence (AI) industry, expected to reach $1.81 trillion by 2030, is revolutionizing sectors and economies worldwide. Its growth, however, intensifies global disparities and contributes to human rights abuses. This study explores two key questions: How does AI development lead to human rights violations, particularly through labor exploitation and environmental harm in the Global South? And in what ways do these practices intensify systemic inequalities? This article demonstrates that AI functions as a form of digital colonialism, concentrating wealth among a global elite primarily in the Global North, while the Global South suffers dehumanizing working conditions and environmental consequences. Laborers in the Global South endure unstable employment for low pay, sustaining AI advancements while remaining unseen in the industry's narrative of progress. Concurrently, resource extraction and power-hungry AI data centers in the Global South damage ecosystems already impacted by climate change. This article emphasizes the critical need for responsible AI governance that prioritizes workers' rights, ecological preservation, and balanced global progress.
Introduction
The global artificial intelligence (AI) market, valued at $136.55 billion in 2022, is projected to grow at a compound annual growth rate (CAGR) of 37.3%, reaching $1.81 trillion by 2030. AI is expected to contribute $15.7 trillion to the global economy by 2030, surpassing the current combined output of China and India.1 For instance, AI expenditure in India increased by 109.6% to reach $665 million in 2018 and is projected to grow at a CAGR of 39% to reach $11.7 billion by 2025, potentially contributing $500 billion to the nation's GDP. By 2030, AI is expected to boost China's GDP by 26% and North America's by 14.5%, collectively accounting for nearly 70% of the global economic impact. AI technologies may enhance labor productivity by up to 40% across 16 industries, including manufacturing, potentially adding $3.8 trillion in gross value added by 2035. The rapid advancement of AI is transforming industries and reshaping the global economy, according to PricewaterhouseCoopers.2
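As a back-of-the-envelope check on the compounding arithmetic (assuming the cited 2022 base of $136.55 billion and a constant 37.3 percent CAGR over the eight years to 2030; the published $1.81 trillion figure likely rests on a slightly different base year or rate), a short calculation reproduces the order of magnitude of the projection:

```python
# Illustrative sanity check of the cited market projection, not a forecast:
# $136.55B (2022) compounded at a 37.3% CAGR for the 8 years to 2030.
base_2022 = 136.55e9        # global AI market value in 2022, USD (cited)
cagr = 0.373                # cited compound annual growth rate
years = 2030 - 2022         # 8 compounding periods

projected_2030 = base_2022 * (1 + cagr) ** years
print(f"Projected 2030 market: ${projected_2030 / 1e12:.2f} trillion")
# Comes out at roughly $1.7 trillion, the same order of magnitude as the
# published $1.81 trillion projection.
```

The small gap between this naive compounding and the published figure is expected, since market forecasts typically revise the base value and rate year to year.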
As highly industrialized countries race ahead in AI development, two questions arise: What human rights violations are associated with AI development in the Global South, particularly in the contexts of labor exploitation and environmental degradation? And how do these practices reinforce global inequalities? The rise and expansion of AI technologies in the contemporary global capitalist system signifies a new form of transnational colonialism, which I and other scholars have labeled 'AI colonialism.'3,4,5,6 This phenomenon urges us to examine the urgent ethical and political issues of AI, as it worsens global socioeconomic inequalities. In addition, the wealth generated by this new technology disproportionately enriches a transnational super-rich class, while the labor-heavy populations of the Global South endure dehumanizing conditions to sustain this technological progress.7,8,9 Simultaneously, the environmental costs of AI—ranging from resource extraction to carbon emissions—are systematically outsourced to vulnerable and undervalued regions, ensuring that the Global North's elite remains shielded from the devastating ecological and human consequences. AI colonialism thus represents a new form of digital and environmental exploitation, in which the benefits accrue to the Global North's super-rich class while the Global South disproportionately suffers the externalities.10,11,12,13,14,15,16
Neoliberal capitalism drives technological innovation for profit and efficiency, often neglecting labor rights, environmental protections, and socioeconomic justice.17,18 Neoliberal policies promote deregulation, capital accumulation, and labor outsourcing to regions with the lowest wages and weakest social safety nets. In the AI sector, these dynamics exacerbate Global North-South disparities. Major tech companies, largely in the Global North, outsource tasks like data labeling and content moderation to countries such as India, Kenya, and the Philippines.19,20,21,22,23,24,25 Workers in these regions, earning as little as $1.50 per hour, face precarious conditions with minimal social safety protections. This exploitation is a deliberate strategy to maximize profits while minimizing costs, at the expense of vulnerable populations.26
Indeed, countries in the Global North dominate AI readiness rankings, with the United States, the United Kingdom, and Germany consistently ranking at the top, while many Global South states lag far behind.27 This disparity in AI readiness reflects not just a technological gap but also a systemic arrangement in which the Global South serves primarily as a source of undervalued human labor and cheap natural resource inputs rather than as an equal partner in the development and deployment of AI technologies. The human rights implications are profound: workers in these countries often perform monotonous, repetitive tasks that are essential for training AI systems, such as tagging images, reviewing content, and processing data. Yet, these workers are often rendered invisible in the global narrative of AI innovation, existing as "ghost laborers" whose contributions are exploited while they remain outside the frame of AI's celebrated progress.
The environmental cost of AI is equally concerning.28,29,30,31 The training of large AI models, such as OpenAI's GPT-3, requires massive amounts of computational power. A study from the University of Massachusetts Amherst found that training a single AI model can emit over 284,000 kilograms of CO2, equivalent to the lifetime carbon footprint of five cars. Energy demands are often offloaded to data centers in regions with weak environmental regulations, particularly in the Global South. Amazon, Google, and Microsoft have constructed data centers in Africa and Southeast Asia, attracted by cheap electricity and limited governance oversight. These centers worsen local environmental degradation and natural resource depletion in areas already heavily impacted by climate change. In Kenya, where droughts and environmental crises have worsened in recent years, energy-intensive data centers built by global tech companies raise significant concerns about the long-term sustainability of AI development in these regions.32,33,34,35
AI Colonialism: Necrostratification and Necroexportation
The relationship between AI, human societies, and ecosystems in the Global South reflects a deeply neocolonial dynamic, shaped by two interrelated processes: 1) necroexportation and 2) necrostratification. Necroexportation denotes the systematic transfer of AI's detrimental impacts—environmental harm, labor exploitation, and resource depletion—to the Global South, while the Global North reaps life-enhancing benefits. This structural condition involves exporting lethal consequences to marginalized human societies and ecosystems, reflecting necropolitical global inequalities. It highlights the deliberate process by which powerful entities shift harm to what they deem as expendable human populations. Necrostratification, on the other hand, captures the stratification and differentiation of life-and-death prospects across socially and historically constructed hierarchies, determining whose lives are valued and whose are rendered disposable. These two processes exemplify the dual nature of AI colonialism, highlighting the stark socioeconomic hierarchies of humanity and ecosystems supporting the expansion of the global AI economy.
Building on Achille Mbembe's concept of necropolitics, which extends Foucault's analysis of biopolitics, necrostratification pertains to the hierarchical ordering of life and death prospects within the global system.36,37,38 Biopolitics, as conceptualized by Foucault, refers to the governance of populations through the regulation of life processes such as health, reproduction, and labor. The concept highlights how power operates not only through coercion but also by managing life to optimize productivity and societal order. Mbembe later expands this concept with necropolitics, emphasizing the power of sovereigns to determine not just how life is managed but also who is allowed to live and who is consigned to death. In the context of AI colonialism, necrostratification reveals how the benefits of AI development are stratified along global socioeconomic hierarchies, with the Global North enjoying the technological benefits and wealth accumulation of AI while the Global South is subjected to conditions of exploitation, environmental degradation, and premature death. AI colonialism exercises necropolitical logic by determining which human populations and environmental ecosystems are valuable and which are disposable. The invisibilized yet essential workers of the Global South are exploited as disposable tools for AI advancement, while economic profits concentrate in the Global North. Tech corporations capitalize on global disparities, outsourcing AI tasks to regions with weak labor protections, such as Southeast Asia and sub-Saharan Africa, allowing affluent countries to benefit without bearing the costs. Innovation narratives obscure these workers' contributions, framing AI as autonomous rather than a product of exploited labor. This system yields disproportionate profits for socioeconomic elites, while marginalized laborers are denied recognition, rights, and fair compensation, rendered invisible and exposed to premature death by the precarity of their lives.
Foucault's theories on power, surveillance, and capitalism shed light on AI colonialism's mechanisms.39,40,41,42,43 His concept of disciplinary power explains how AI workers, especially in the Global South, are under constant surveillance and control to ensure productivity. Shoshana Zuboff's surveillance capitalism analysis shows how human behavior is commodified into data for profit. In the AI economy, Global South workers are exploited for both their labor and data, which refine AI systems or predict behaviors, thus commodifying their lives twice.44
My concept of necroexportation in AI builds on Lessenich's social theory of externalization by showing how AI benefits are concentrated in affluent regions, while the Global South suffers environmental degradation, labor exploitation, and eroded futures.45 This process creates global life-and-death hierarchies, assigning populations and ecosystems varying degrees of importance and disposability in order to sustain others' privileges. In AI, necroexportation appears through exploitative labor in the Global South, where workers face hazardous conditions and extractive industries devastate ecosystems to support AI infrastructure. It also undermines intergenerational equity, as AI's energy demands worsen climate change, trapping marginalized populations in cycles of vulnerability and inequality. Necroexportation involves the institutionalization of sacrifice across global divides, while necrostratification examines the societal hierarchies determining life-and-death prospects. Necroexportation externalizes costs from the Global North to the Global South, whereas necrostratification differentiates human populations within and across societies based on race, class, gender, abilities, geography, and enduring historical inequalities. Together, these concepts highlight different dimensions of inequality: necroexportation addresses the global offloading and transfer of harm, and necrostratification focuses on the societal structures that perpetuate varying levels of disposability.
AI development's environmental impact highlights its necrostratified nature. Large-scale machine learning models demand significant computational power, supported by data centers consuming substantial electricity, often situated in the Global South due to lax environmental regulations and low energy costs. Additionally, the extraction of rare minerals for AI hardware exacerbates the environmental burden on the Global South. This mirrors historical Western colonial exploitation for resources and its modern neocolonial practices, with AI development contributing to climate change and resource depletion, disproportionately affecting the Global South. The Global North's historical carbon emissions from industrialization have caused the current climate crisis, leaving the Global South to face climate disasters with constrained resources, insufficient international aid, domestic governance challenges, inequitable trade agreements, and a compromised local elite, perpetuating unequal resource allocation.46,47,48,49,50,51,52 AI's environmental impact from data centers and mining exacerbates existing inequalities, worsening climate impacts for vulnerable countries. Nations responsible for past emissions benefit from AI while shifting environmental costs to regions already facing rising sea levels, unpredictable weather, and resource scarcity.
AI and Labor Exploitation: The Present Humanity
The rapid development of AI is often heralded as a technological revolution, promising innovation, efficiency, and economic growth.53,54,55 Behind the sleek facade of AI, a deeply exploitative global labor structure underpins its development—one that remains largely invisible to the public. Far from being a fully autonomous technology, AI is built on the backs of many workers in the Global South who perform tedious and poorly compensated tasks like data labeling and content moderation. These tasks are indispensable to training AI systems, yet they are outsourced to low-wage workers in countries like India, Kenya, and the Philippines. Many AI systems rely on a vulnerable workforce performing repetitive tasks for low wages, often as little as $1.46 per hour after tax.56 Multinational corporations outsource these tasks to countries like Venezuela, Bulgaria, India, Kenya, and the Philippines, where workers label data for AI systems.57 Language barriers arise because instructions are typically in English, and misunderstandings can result in the termination of labor contracts. Workers endure precarious conditions, heavy surveillance, and punishment for deviations. Content moderators, essential for online platform safety, encounter traumatic material without adequate mental health support, leading to anxiety, depression, and PTSD. These workers often spend a minimum of eight hours every working day watching graphic content such as murder, child abuse, and pornography.58
AI's dependence on human labor is most evident in data labeling processes, where vast datasets must be manually categorized to train machine learning algorithms. For instance, AI systems used for image recognition rely on humans to label millions of images so the algorithms can learn to distinguish between objects. Similarly, content moderation on platforms like Facebook and YouTube depends on workers who review and flag inappropriate or harmful material.59,60 AI, despite its advanced capabilities, depends on human input, often outsourced to low-wage regions due to significant international wage disparities.
There are many examples of precarity and dehumanizing conditions in the Global South resulting from the global expansion of the AI industry. India has emerged as a major hub for AI-related labor, particularly in data processing and labeling.61 Millions of Indian workers are employed in the digital labor market, performing the tedious tasks required to build AI systems. However, these jobs come with significant downsides. Data labelers in the Global South earn an average of just $1.50 per hour, a wage that barely covers basic living expenses.62 Moreover, these workers often lack job security, as many are hired on short-term contracts or as freelancers through platforms like Amazon Mechanical Turk or Appen. These digital platforms also subject workers to intense surveillance and performance monitoring, creating a high-pressure environment where failure to meet strict productivity targets can result in the immediate loss of work. Kenya, too, has become a key destination for outsourced AI labor, particularly in the area of content moderation.63 Tech giants like Facebook and Google have outsourced content review tasks to Kenyan workers, who are responsible for monitoring and filtering user-generated content to ensure compliance with community standards. However, this work comes with significant ethical concerns.64 Kenyan content moderators are often exposed to graphic, violent, or disturbing material, which can lead to severe psychological trauma. Despite the emotional toll of the job, these workers are paid as little as two dollars per hour, a stark contrast to Kenya's reported average wage of approximately ten dollars per hour.65 They are expected to sift through thousands of pieces of content every day, with little access to mental health support or counseling.
Kenyan Facebook content moderators, many of whom reported symptoms of post-traumatic stress disorder (PTSD) after repeatedly viewing traumatic content, characterized their working conditions as dehumanizing.66 Yet, because these workers are employed through subcontractors, they are often denied the benefits and protections that full-time employees typically receive. The Philippines, with its large, English-speaking workforce, has also become a significant player in the AI labor market, particularly in data labeling. Filipino workers are often hired through global digital platforms, competing with thousands of others for micro-tasks that pay very little. A study by the Oxford Internet Institute found that workers on these platforms earn an average of two to three dollars per hour, with some workers earning even less.67 Despite the crucial role these workers play in the functioning of AI systems, they are relegated to the status of "ghost workers"—invisible to the companies and consumers who benefit from their labor. Like their counterparts in India and Kenya, Filipino workers face precarious working conditions, with little job security or access to social welfare benefits. Many are employed as independent contractors rather than full-time employees, meaning they lack access to healthcare, paid leave, or pension benefits.68,69
The case of Oskarina Vero Fuentes further illustrates these exploitative conditions. Her story as a Venezuelan content moderator based in Colombia captures the "digital sweatshop" of the AI tech ecosystem.70 Fuentes performs data labeling tasks on platforms such as Appen, earning between 2.2 and 50 cents per task, with typical earnings of just one dollar for an hour and a half of work. On rare occasions when tasks are plentiful, she can earn up to $280 per month, barely reaching Colombia's minimum wage of $285; but such months are uncommon, and on slow days she makes as little as one to two dollars. Fuentes works over eighteen hours daily, starting at 2 a.m. to secure unpredictable tasks, a common practice in the Global South. In East Africa, Venezuela, India, the Philippines, and refugee camps in Kenya and Lebanon, workers engage in microtasks or short-term contracts in data centers like Sama's Nairobi office, which employs 3,000 people under conditions criticized by Time as exploitative for content moderators.71
The increasing demand for inexpensive labor in the AI training industry has led to the exploitation of underage workers, depriving them of rights and dignity while perpetuating cycles of inequality and harm.72 Platforms like Clickworker and Toloka have minimal age verification standards, merely asking workers to state that they are over eighteen, while others like Remotasks use face recognition, which can be bypassed, as one worker did by using his grandmother's face. In some Venezuelan homes, children as young as thirteen are involved in data labeling tasks, sharing accounts within family units and working in shifts to maximize productivity.73 This setup often results in physical strain, as described by a family whose members, including children, work long hours, causing back pain. Additionally, the lack of oversight allows situations in which workers do not retain their full income, as in the case of a Clickworker account holder in India who takes half the earnings of those working under his account. These cases illustrate an interrelated problem of not only labor rights but also the rights and dignity of children.74,75
The labor exploitation that characterizes AI development in the Global South is not just a matter of low wages—it is also a violation of basic human rights. Article 23 of the Universal Declaration of Human Rights guarantees the right to just and favorable conditions of work, including fair wages that provide a decent standard of living.76 Yet, AI workers in the Global South are systematically denied these rights. They are paid poverty wages, forced to work under high-pressure conditions, and given little to no job security. Content moderators, in particular, are subjected to severe psychological harm due to the nature of the material they are required to review, yet they are offered no meaningful support or compensation for the risks associated with their work.
The gig-based nature of AI labor also compounds the precariousness of these workers' lives.77,78,79,80,81 In India and the Philippines, digital workers such as data labelers are typically employed on short-term contracts or as freelancers, lacking job security. This results in an unstable work environment where workers must accept low wages and poor conditions for potential future employment. The digital platforms employing these workers operate in a legal gray area with weak labor protections, offering limited recourse for mistreatment or unfair dismissal, thus perpetuating exploitation in AI labor outsourcing. The human toll is severe: Kenyan content moderators suffer lasting psychological harm from exposure to violent material while earning much less than their counterparts in the Global North. In early 2024, nearly 100 Kenyan data labelers and AI workers, employed by companies like Facebook, Scale AI, and OpenAI, wrote an open letter to US President Joe Biden, describing their working conditions as akin to "modern-day slavery."82 Indian data labelers face long, monotonous hours for subsistence wages, and Filipino workers compete for insecure micro-tasks, trapping them in poverty and precarious employment. Despite being crucial to AI systems, their labor is devalued, contributions ignored, and rights violated.
Venture capitalists and wealthy investors who sit at the top of the global socioeconomic hierarchy benefit the most from AI development. Below them are top executives and elite engineers with high salaries and influence over AI innovation. At the bottom, Global South workers perform essential but undervalued labor supporting AI systems. This hierarchy illustrates necrostratification, where life prospects and benefits are unevenly distributed. Investors and executives enjoy immense financial rewards and security, far removed from AI labor burdens. Conversely, low-wage data labelers and content moderators in the Global South endure exploitative conditions with minimal recognition, reinforcing a system that devalues their labor and stratifies human life and work, disproportionately impacting the most marginalized. Even within the underpaid category of regular moderators and AI workers, a distinct wage disparity exists: AI data labelers in Venezuela earn between ninety cents and two dollars per hour, while their counterparts in the United States receive hourly wages of ten to twenty-five dollars.83 In contrast, AI development is enriching billionaires by significantly increasing company valuations and personal fortunes. For example, Nvidia's involvement in AI has helped push individual investors' net worths past $1 billion. Investments from giants like Microsoft and OpenAI have driven a $2.6 billion valuation for AI startups, benefiting stakeholders. This surge in AI has created new billionaires and expanded the existing fortunes of a transnational super-rich class.84
This labor-reward stratification embodies the necropolitical logic of AI colonialism, where lower-tier workers are undervalued while the elite reap significant rewards. For the super-rich and top executives, AI yields immense financial and technological advantages; for Global South workers, it results in exploitation and insecurity. Necrostratification illustrates these structural inequalities within the global AI political economy, benefiting the wealthiest while marginalizing laborers. Global South workers are essential yet remain invisible, their humanity reduced to mere utility. AI development perpetuates global inequalities, reinforcing a necrostratified system with unequal life prospects. Super-rich investors and executives thrive on AI's success, while Global South workers bear the costs and their contributions are ignored. This skewed distribution of gains from AI development mirrors a deeper necropolitical order that urgently needs addressing to combat the entrenched inequalities of AI colonialism.
AI and Environmental Degradation: The Present and the Future of Humanity
The environmental impacts of AI include direct and indirect effects, both of which significantly affect local living conditions and environments. Direct impacts arise from the lifecycle of AI computing resources—production, transport, operations, and end-of-life stages—with operations being the most energy-intensive, causing substantial greenhouse gas (GHG) emissions and resource consumption.85 Indirect impacts come from AI applications in sectors like mining and manufacturing, where efficiency gains may paradoxically increase net GHG emissions. These diffusion effects are challenging to quantify, as AI-specific impacts are embedded within broader ICT activities. AI's potential for sustainability is countered by its energy and material demands, exacerbating environmental issues and necessitating robust mitigation frameworks and policies. Efficiency gains have so far tempered this growth: a 550% increase in data center computing capacity from 2010 to 2018 resulted in only a 6% rise in energy consumption.86 Yet the exponential growth in AI's energy demands underscores the pressing need to tackle its environmental impact as global adoption accelerates: Google reported that 60% of its AI-related electricity use in 2022 stemmed from inference, and research suggests that a large language model (LLM) assistant for Google searches could require as much energy as Ireland consumes annually.87 Notably, the real emissions from the in-house data centers of major tech companies like Google, Microsoft, Meta, and Apple from 2020 to 2022 were about 662% higher than officially reported.88
One of the most significant environmental challenges posed by AI is its enormous energy consumption. AI is notably more energy-intensive than typical cloud-based applications, with a ChatGPT query requiring nearly ten times as much electricity to process as a Google search. This increased demand is expected to cause data center power demand to grow by 160% by 2030.89 Artificial intelligence requires significant computational power from numerous data center servers, consuming substantial electricity.90 Northern Virginia has become a key hub for this expanding industry; by 2030, its data centers will require energy equivalent to powering six million households.91 According to another study, national electricity demand has surged so much that plans to close several coal plants have been delayed. This rising power need has widespread impacts on energy consumption and infrastructure planning. As noted above, training a single large-scale machine learning model, such as OpenAI's GPT-3, can emit over 284,000 kilograms of carbon dioxide, the equivalent of the emissions from five cars over their entire lifetimes.92 The five billion YouTube views of Despacito in 2018 consumed energy equivalent to heating 40,000 U.S. homes for a year, underscoring the technology sector's massive environmental footprint.93 LLMs like ChatGPT are also highly energy-intensive, with approximately 700,000 liters of water used to cool the machines training GPT-3 at Microsoft's facilities.94 The tech industry, despite advocating for sustainability, often disregards its significant environmental impact. Its hesitation to reveal data on energy use and emissions indicates an intent to avoid public scrutiny. The extensive water consumption associated with ChatGPT is particularly controversial, given the global scarcity of safe drinking water for many households, especially in the Global South.
In addition to the energy costs associated with AI, the development of AI hardware also relies on the extraction of rare minerals, a practice that has devastating environmental and social consequences. The Democratic Republic of Congo (DRC), for example, is the source of more than 70% of the world's cobalt, a mineral essential for the batteries that power AI technologies.95 Artisanal and small-scale mining (ASM) for critical minerals often causes environmental damage such as deforestation, soil erosion, and water contamination from toxic chemicals. Human rights abuses are common, with workers facing unsafe conditions, lack of protective gear, and exploitation, including child labor. Large-scale operators typically avoid ASM to evade these risks and costs, leaving problems unresolved. In the DRC, cobalt mining has resulted in widespread deforestation, soil degradation, and water contamination, with toxic chemicals polluting local water supplies, harming human health and biodiversity. At least 25,000 children in Congo are among the artisanal miners working in hazardous conditions with minimal protective equipment.96 This endangers lives and devastates the environment. Cobalt extraction's ecological footprint is significant, as the forests destroyed during the process are vital for regulating the global climate by absorbing carbon dioxide.97 Increasing demand for AI hardware will exacerbate environmental damage from cobalt mining, with severe consequences for local ecosystems and the global environment.
Cobalt and nickel mining for AI hardware comes with severe environmental costs, including deforestation, toxic air pollution, and water contamination.98 In the Philippines, seventeen nickel mines were shut down due to environmental issues, while in Norilsk, Russia, a nickel factory released 350,000 tons of sulfur dioxide yearly, heavily polluting the city.99 Furthermore, communities near mining sites like Cerro Matoso in Colombia experience increased deformities and respiratory illnesses due to toxic exposure from mining and smelting. These effects underscore the urgent need for sustainable methods in sourcing critical materials for AI technologies.
The environmental degradation caused by AI development is not limited to the destruction of ecosystems or the depletion of natural resources; it also contributes directly to the global climate crisis.100,101 Data centers and the mining of rare minerals both generate substantial greenhouse gas emissions, accelerating the pace of climate change. This is particularly concerning for the Global South, where the impacts of climate change are already being felt with increasing severity. Countries in sub-Saharan Africa, Southeast Asia, and Latin America are experiencing rising sea levels, prolonged droughts, and extreme weather events that threaten food security, displace populations, and exacerbate existing inequalities. For example, in East Africa, where food insecurity is a growing concern, climate change has led to increasingly unpredictable rainfall patterns, worsening agricultural productivity.102 The expansion of data centers in this region only adds to the environmental burden, as they consume large amounts of electricity and water, further straining local resources and contributing to ecological collapse.
AI development causes extensive and inequitable environmental degradation. Through necroexportation, the Global North, which reaps the benefits of [End Page 85] AI, transfers its environmental costs to the Global South, where resource extraction and energy consumption damage ecosystems and communities. This mirrors historical colonialism: the Global South provides labor and resources for the Global North's advancement while bearing the massive environmental and social consequences.
Conclusion
This paper explored how the development and use of AI perpetuate global inequalities through systemic labor exploitation, environmental degradation, and privacy violations that primarily harm the Global South. While AI is often portrayed as autonomous and transformative, it relies heavily on underpaid human labor in the Global South, where workers perform crucial but undervalued tasks such as data labeling and content moderation. This system reinforces global socio-economic hierarchies, concentrating the material benefits of AI in the Global North while the Global South bears the burdens of labor exploitation and environmental harm. At the core of this analysis is the concept of necrostratification, whereby the rewards of AI development are unequally distributed, favoring investors and executives at the top while marginalizing workers at the bottom.
The immediate human rights violations experienced by these laborers are a critical issue, but the long-term consequences of AI extend further. Future generations will face severe environmental and privacy challenges because AI's energy-intensive operations and resource extraction contribute significantly to carbon emissions, deforestation, and loss of biodiversity. These practices exacerbate climate change, disproportionately affecting the Global South, where vulnerable populations will bear the brunt of the ecological crises that AI accelerates. AI's growing surveillance capabilities likewise pose serious risks to privacy, not only for current generations but also for future societies. Without robust governance frameworks regulating AI's data collection and surveillance, future populations may face heightened monitoring and control, deepening global inequalities while undermining human dignity and rights. AI development must therefore be analyzed through the lens of intergenerational justice, which considers both current and future generations' rights to a world free from environmental degradation and invasive surveillance. Global AI governance should accordingly address labor exploitation, privacy protection, and ecosystem preservation to ensure sustainable and equitable development. Because AI's environmental impact, from energy consumption to resource extraction, threatens biodiversity and planetary health, environmental sustainability must be integrated into AI governance itself. A rights-based approach to [End Page 86] AI should encompass the protection of every individual's dignity and the preservation of ecosystems for future generations.
The promise of AI technologies depends on the intended purposes and actual uses of their producers and users. The relationship between AI and human rights is complex and significant.103,104,105 AI can advance human rights by saving lives and protecting the planet through early disease detection, natural disaster prediction, and aid to vulnerable communities. Responsible AI governance, however, is crucial to prevent AI from undermining peace and human rights.106 International efforts frame AI as an engine of progress toward the Sustainable Development Goals that must not compromise human rights. AI offers profound opportunities but also poses existential challenges, requiring careful governance to prevent misuse, protect human rights, and address labor exploitation, privacy concerns, and environmental impact. To ensure that AI promotes a fair and sustainable future, a comprehensive system safeguarding the dignity of current and future generations and the planet's well-being is essential; this framework should underpin responsible AI development and implementation. As with countless technological innovations within a global capitalist system steeped in socioeconomic stratification, the promise of improving the human condition cannot rest on technology alone. Global transformation toward emancipatory and just politics demands a profound reimagining of the global order, one where human dignity and sustainable development are paramount and the logics of oppression are rendered relics of the past. Only then might AI evolve from a tool of colonialism into a potent instrument for just social change.
Dr. Salvador Santino F. Regilme is an Associate Professor of International Relations at the Institute for History at Leiden University.
Notes
1. Maheshwari, Rashi. "Top AI Statistics And Trends." Forbes. February 6, 2024. https://www.forbes.com/advisor/in/business/ai-statistics/.
2. PwC. "Sizing the Prize: What's the Real Value of AI for Your Business and How Can You Capitalise?" PwC. October 1, 2020. https://www.pwc.com.au/government/pwc-ai-analysis-sizing-the-prize-report.pdf.
3. Adams, Rachel. "Can Artificial Intelligence Be Decolonized?" Interdisciplinary Science Reviews 46, no. 1–2 (2021): 176–97. doi:10.1080/03080188.2020.1840225.
4. Muldoon, James, and Boxi A Wu. "Artificial Intelligence in the Colonial Matrix of Power." Philosophy & Technology 36, no. 4 (2023): 80. doi:10.1007/s13347-023-00687-8.
5. Arora, Payal. "Creative Data Justice: A Decolonial and Indigenous Framework to Assess Creativity and Artificial Intelligence." Information, Communication & Society, ahead-of-print (2024): 1–17. doi:10.1080/1369118x.2024.2420041.
6. Arora, A., M. Barrett, E. Lee, E. Oborn, and K. Prince. "Risk and the Future of AI: Algorithmic Bias, Data Colonialism, and Marginalization." Information and Organization 33, no. 3 (2023): 100478. doi:10.1016/j.infoandorg.2023.100478.
7. Regilme, Salvador Santino. "Europe's Super-Rich: Towards Oligarchic Constitutional Order." JCMS: Journal of Common Market Studies. 2024. doi:10.1111/jcms.13702.
8. Regilme, Salvador Santino. "Constitutional Order in Oligarchic Democracies: Neoliberal Rights versus Socio-Economic Rights." Law, Culture and the Humanities 19 no. 1 (2023): 126–43. doi:10.1177/1743872119854142.
9. Pijl, Kees van der. Transnational Classes and International Relations. Routledge, 1998.
10. Jones, Elsabet, and Baylee Easterday. "Artificial Intelligence's Environmental Costs and Promise." Council on Foreign Relations. June 28, 2022. https://www.cfr.org/blog/artificial-intelligences-environmental-costs-and-promise.
11. Heikkilä, Melissa. "Making an Image with Generative AI Uses as Much Energy as Charging Your Phone." MIT Technology Review. December 1, 2023. https://www.technologyreview.com/2023/12/01/1084189/making-an-image-with-generative-ai-uses-as-much-energy-as-charging-your-phone/.
12. Bolle, Monica De. "AI's Carbon Footprint Appears Likely to Be Alarming." Peterson Institute for International Economics. February 29, 2024. https://www.piie.com/blogs/realtime-economics/2024/ais-carbon-footprint-appears-likely-be-alarming.
13. Heikkilä, Melissa. "AI's Carbon Footprint Is Bigger than You Think." MIT Technology Review. December 5, 2023. https://www.technologyreview.com/2023/12/05/1084417/ais-carbon-footprint-is-bigger-than-you-think/.
14. Cho, Renee. "AI's Growing Carbon Footprint." State of the Planet - News from the Columbia Climate School. June 9, 2023. https://news.climate.columbia.edu/2023/06/09/ais-growing-carbon-footprint/.
15. Tomlinson, Bill, Rebecca W. Black, Donald J. Patterson, and Andrew W. Torrance. "The Carbon Emissions of Writing and Illustrating Are Lower for AI than for Humans." Scientific Reports 14 no. 1 (2024): 3732. doi:10.1038/s41598-024-54271-x.
16. Muldoon, James, Callum Cant, Mark Graham, and Funda Ustek Spilda. "The Poverty of Ethical AI: Impact Sourcing and AI Supply Chains." AI & SOCIETY, 2023, 1–15. doi:10.1007/s00146-023-01824-9.
17. Regilme, Salvador Santino. "Constitutional Order in Oligarchic Democracies: Neoliberal Rights versus Socio-Economic Rights."
18. Regilme, Salvador. "Bringing the Global Political Economy Back In: Neoliberalism, Globalization, and Democratic Consolidation." International Studies Perspectives 15 no. 3 (2014): 277–96. doi:10.1111/insp.12020.
19. Chisnall, Mick. "Digital Slavery, Time for Abolition?" Policy Studies 41 no. 5 (2020): 488–506. doi:10.1080/01442872.2020.1724926.
20. Haskins, Caroline. "The Low-Paid Humans Behind AI's Smarts Ask Biden to Free Them From 'Modern Day Slavery.'" Wired. May 22, 2024. https://www.wired.com/story/low-paid-humans-ai-biden-modern-day-slavery/.
21. Rowe, Niamh. "Millions of Workers Are Training AI Models for Pennies." Wired. October 18, 2023. https://www.wired.com/story/millions-of-workers-are-training-ai-models-for-pennies/.
22. Rowe, Niamh. "Millions of Workers Are Training AI Models for Pennies."
23. Rowe, Niamh. "Underage Workers Are Training AI." Wired. November 15, 2023. https://www.wired.com/story/artificial-intelligence-data-labeling-children/.
24. Rowe, Niamh. "'It's Destroyed Me Completely': Kenyan Moderators Decry Toll of Training of AI Models." The Guardian. August 2, 2023. https://www.theguardian.com/technology/2023/aug/02/ai-chatbot-training-human-toll-content-moderator-meta-openai.
25. Williams, Adrienne, Milagros Miceli, and Timnit Gebru. "The Exploited Labor Behind Artificial Intelligence." Noemamag. October 13, 2022. https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/.
26. Haskins, Caroline. "The Low-Paid Humans Behind AI's Smarts Ask Biden to Free Them From 'Modern Day Slavery.'"
27. Oxford Insights. "Government AI Readiness Index 2023." Oxford Insights. December 2, 2023. https://oxfordinsights.com/wp-content/uploads/2023/12/2023-Government-AI-Readiness-Index-2.pdf.
28. Holden, Kerry, and Matthew Harsh. "On Pipelines, Readiness and Annotative Labour: Political Geographies of AI and Data Infrastructures in Africa." Political Geography 113 (2024): 103150. doi:10.1016/j.polgeo.2024.103150.
29. Coleman, Jude. "AI's Climate Impact Goes beyond Its Emissions." Scientific American. December 7, 2023. https://www.scientificamerican.com/article/ais-climate-impact-goes-beyond-its-emissions/.
30. Nordgren, Anders. "Artificial Intelligence and Climate Change: Ethical Issues." Journal of Information, Communication and Ethics in Society 21, no. 1 (2022): 1–15. doi:10.1108/jices-11-2021-0106.
31. Chen, Siqi, Shuyunfan Zhang, Qihua Zeng, Jiaxuan Ao, Xiaohua Chen, and Shizhao Zhang. "Can Artificial Intelligence Achieve Carbon Neutrality? Evidence from a Quasi-Natural Experiment." Frontiers in Ecology and Evolution 11 (2023): 1151017. doi:10.3389/fevo.2023.1151017.
32. Muldoon, James, Callum Cant, Mark Graham, and Funda Ustek Spilda. "The Poverty of Ethical AI: Impact Sourcing and AI Supply Chains."
33. Chisnall, Mick. "Digital Slavery, Time for Abolition?"
34. Rowe, Niamh. "'It's Destroyed Me Completely': Kenyan Moderators Decry Toll of Training of AI Models."
35. Rowe, Niamh. "Underage Workers Are Training AI."
36. Mbembe, Achille. Necropolitics. Durham, NC: Duke University Press, 2019. doi:10.1515/9781478007227-001.
37. Mbembe, Achille. "Necropolitics." Public Culture 15 (January 2003): 11–40.
38. Regilme, Salvador Santino. "Crisis Politics of Dehumanisation during COVID-19: A Framework for Mapping the Social Processes through Which Dehumanisation Undermines Human Dignity." The British Journal of Politics and International Relations 25 no. 3 (2023): 555–73. doi:10.1177/13691481231178247.
39. Foucault, Michel. Security, Territory, Population. Palgrave Macmillan, 2004.
40. Foucault, Michel. The Government of Self and Others. Translated by Graham Burchell. Palgrave Macmillan, 2008. https://books.google.com.ph/books?id=G7rOQHX0K1kC.
41. Lichtenstein, Eli B. "Foucault's Analytics of Sovereignty." Critical Horizons 22 no. 3 (2021): 287–305. doi:10.1080/14409917.2021.1953750.
42. Humphreys, Ashlee. "The Consumer as Foucauldian 'Object of Knowledge.'" Social Science Computer Review 24 no. 3 (2006): 296–309. doi:10.1177/0894439306287975.
43. Smith, Carole. "The Sovereign State v Foucault: Law and Disciplinary Power." The Sociological Review 48 no. 2 (2000): 283–306. doi:10.1111/1467-954x.00216.
44. Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs, 2018.
45. Lessenich, Stephan. Living Well at Others' Expense. London: Polity Press, 2019.
46. Reibold, Kerstin. "Settler Colonialism, Decolonization, and Climate Change." Journal of Applied Philosophy 40 no. 4 (2023): 624–41. doi:10.1111/japp.12573.
47. Mercer, Harriet, and Thomas Simpson. "Imperialism, Colonialism, and Climate Change Science." Wiley Interdisciplinary Reviews: Climate Change 14 no. 6 (2023). doi:10.1002/wcc.851.
48. Islam, Faisal Bin, Lindsay Naylor, James Edward Bryan, and Dennis J. Coker. "Climate Coloniality and Settler Colonialism: Adaptation and Indigenous Futurities." Political Geography 114 (2024): 103164. doi:10.1016/j.polgeo.2024.103164.
49. Hoz, Nelsa De la, Diego Silva-Garzón, Nathalia Hernández Vidal, Laura Gutierrez Escobar, Martina Hasenfratz, and Benno Fladvad. "Unraveling the Colonialities of Climate Change and Action." Journal of Political Ecology 31 no. 1 (2024). doi:10.2458/jpe.6365.
50. Kratzer, Sebastian. "Climate Change and Colonialism in the Green Economy." Alternautas 2 no. 2 (2015). doi:10.31273/alternautas.v2i2.1020.
51. Ryneveld, Tara Nair van, and Mine Islar. "Coloniality as a Barrier to Climate Action: Hierarchies of Power in a Coal-Based Economy." Antipode 55 no. 3 (2023): 958–81. doi:10.1111/anti.12907.
52. Sultana, Farhana. "The Unbearable Heaviness of Climate Coloniality." Political Geography 99 (2022): 102638. doi:10.1016/j.polgeo.2022.102638.
53. Muldoon, James, and Boxi A Wu. "Artificial Intelligence in the Colonial Matrix of Power."
54. Roberts, Huw, Emmie Hine, Mariarosaria Taddeo, and Luciano Floridi. "Global AI Governance: Barriers and Pathways Forward." International Affairs 100 no. 3 (2024): 1275–86. doi:10.1093/ia/iiae073.
55. Tallberg, Jonas, Eva Erman, Markus Furendal, Johannes Geith, Mark Klamberg, and Magnus Lundgren. "The Global Governance of Artificial Intelligence: Next Steps for Empirical and Normative Research." International Studies Review 25 no. 3 (2023): viad040. doi:10.1093/isr/viad040.
56. Williams, Adrienne, Milagros Miceli, and Timnit Gebru. "The Exploited Labor Behind Artificial Intelligence."
57. Pogrebna, Ganna. "AI Underpinned by Developing World Tech Worker 'Slavery.'" Asia Times. October 9, 2024. https://asiatimes.com/2024/10/ai-underpinned-by-developing-world-tech-worker-slavery/.
58. Haskins, Caroline. "The Low-Paid Humans Behind AI's Smarts Ask Biden to Free Them From 'Modern Day Slavery.'"
59. Arora, Payal. "Creative Data Justice: A Decolonial and Indigenous Framework to Assess Creativity and Artificial Intelligence."
60. Al-Sibai, Noor. "That AI You're Using Was Trained By Slave Labor, Basically." The Byte. October 20, 2023. https://futurism.com/the-byte/ai-gig-slave-labor.
61. Rowe, Niamh. "Millions of Workers Are Training AI Models for Pennies."
62. FairWork. "Fairwork Cloudwork Ratings 2023: Work in the Planetary Labour Market." Fairwork. November 1, 2023. https://fair.work/wp-content/uploads/sites/17/2023/07/Fairwork-Cloudwork-Ratings-2023-Red.pdf.
63. Rowe, Niamh. "'It's Destroyed Me Completely': Kenyan Moderators Decry Toll of Training of AI Models."
64. Muldoon, James, and Boxi A Wu. "Artificial Intelligence in the Colonial Matrix of Power."
65. Destination Scanner. "Average Salary in Kenya." Last accessed November 20, 2024. https://destinationscanner.com/average-salary-in-kenya/.
66. Africa News. "Ex-Facebook Content Moderator in Kenya Sues Meta over Poor Working Conditions." Africa News. May 10, 2022. https://www.africanews.com/2022/05/10/ex-facebook-content-moderator-in-kenya-sues-meta-over-poor-working-conditions/.
67. FairWork. "Fairwork Cloudwork Ratings 2023: Work in the Planetary Labour Market."
68. Williams, Adrienne, Milagros Miceli, and Timnit Gebru. "The Exploited Labor Behind Artificial Intelligence."
69. Ajaiyeoba, Ifeyimika O. "Diversity and Emotional Labor in the Gig Economy." Equality, Diversity and Inclusion: An International Journal ahead-of-print (2024). doi:10.1108/edi-11-2023-0394.
70. Rowe, Niamh. "Millions of Workers Are Training AI Models for Pennies."
71. Rowe, Niamh. "Millions of Workers Are Training AI Models for Pennies."
72. Rowe, Niamh. "Underage Workers Are Training AI."
73. Rowe, Niamh, "Underage Workers Are Training AI."
74. Regilme, Salvador. "Introduction: Rethinking the Crisis of Children's Rights: Multidisciplinary and Transnational Perspectives." In Children's Rights in Crisis, edited by Salvador Regilme, 1–20. Manchester, England: Manchester University Press, 2024. doi:10.7765/9781526170149.00006.
75. Regilme, Salvador. "Conclusions: Advancing Children's rights amid a Global Order under Siege." In Children's Rights in Crisis, edited by Salvador Regilme, 237–242. Manchester, England: Manchester University Press, 2024. doi:10.7765/9781526170149.00019.
76. United Nations. Universal Declaration of Human Rights. United Nations, 1948. https://www.un.org/en/about-us/universal-declaration-of-human-rights
77. Rowe, Niamh. "Millions of Workers Are Training AI Models for Pennies."
78. Rowe, Niamh. "Underage Workers Are Training AI."
79. Rowe, Niamh. "'It's Destroyed Me Completely': Kenyan Moderators Decry Toll of Training of AI Models."
80. Haskins, Caroline. "The Low-Paid Humans Behind AI's Smarts Ask Biden to Free Them From 'Modern Day Slavery.'"
81. Williams, Adrienne, Milagros Miceli, and Timnit Gebru. "The Exploited Labor Behind Artificial Intelligence."
82. Pogrebna, Ganna. "AI Underpinned by Developing World Tech Worker 'Slavery.'"
83. Pogrebna, Ganna. "AI Underpinned by Developing World Tech Worker 'Slavery.'"
84. Liu, Phoebe. "The Billionaires Getting Rich From AI 2024." Forbes. April 2, 2024. https://www.forbes.com/sites/phoebeliu/2024/04/02/the-billionaires-getting-rich-from-ai-2024/.
85. Organisation for Economic Co-operation and Development. "Measuring the Environmental Impacts of Artificial Intelligence Compute and Applications." November 1, 2022. https://www.oecd.org/en/publications/measuring-the-environmental-impacts-of-artificial-intelligence-compute-and-applications_7babf571-en.html.
86. Jacobs, Julian, and Francesco Tasin. "How the Global South May Pay the Cost of AI Development." OMFIF. July 1, 2024. https://www.omfif.org/2024/07/how-the-global-south-may-pay-the-cost-of-ai-development/.
87. Jacobs, Julian, and Francesco Tasin. "How the Global South May Pay the Cost of AI Development."
88. O'Brien, Isabel. "Data Center Emissions Probably 662% Higher than Big Tech Claims. Can It Keep up the Ruse?" The Guardian. September 15, 2024. https://www.theguardian.com/technology/2024/sep/15/data-center-gas-emissions-tech.
89. O'Brien, Isabel. "Data Center Emissions Probably 662% Higher than Big Tech Claims. Can It Keep up the Ruse?"
90. Kerr, Dara. "Google and Microsoft Report Growing Emissions as They Double-down on AI." NPR. July 12, 2024. https://www.npr.org/2024/07/12/g-s1-9545/ai-brings-soaring-emissions-for-google-and-microsoft-a-major-contributor-to-climate-change.
91. Kerr, Dara. "Google and Microsoft Report Growing Emissions as They Double-down on AI."
92. Hao, Karen. "Training a Single AI Model Can Emit as Much Carbon as Five Cars in Their Lifetimes." MIT Technology Review. June 6, 2019. https://www.technologyreview.com/2019/06/06/239031/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/.
93. Hao, Karen. "Training a Single AI Model Can Emit as Much Carbon as Five Cars in Their Lifetimes."
94. Hao, Karen. "Training a Single AI Model Can Emit as Much Carbon as Five Cars in Their Lifetimes."
95. Posner, Michael, and Dorothee Baumann-Pauly. "Teaching Case: Digging into the Ethics of Cobalt Mining." Financial Times. September 10, 2023. https://www.ft.com/content/81ab1d09-68b0-43c1-9155-5f1394ec73c5.
96. Posner, Michael, and Dorothee Baumann-Pauly. "Teaching Case: Digging into the Ethics of Cobalt Mining."
97. Opray, Max. "Nickel Mining: The Hidden Environmental Cost of Electric Cars." The Guardian. August 24, 2017. https://www.theguardian.com/sustainable-business/2017/aug/24/nickel-mining-hidden-environmental-cost-electric-cars-batteries.
98. Fabro, Keith. "Indigenous Filipinos Fight to Protect Biodiverse Mountains from Mining." Mongabay. March 28, 2024. https://news.mongabay.com/2024/03/indigenous-filipinos-fight-to-protect-biodiverse-mountains-from-mining/.
99. Fabro, Keith. "Indigenous Filipinos Fight to Protect Biodiverse Mountains from Mining."
100. Nordgren, Anders. "Artificial Intelligence and Climate Change: Ethical Issues." Journal of Information, Communication and Ethics in Society 21 no. 1 (2022): 1–15. doi:10.1108/jices-11-2021-0106.
101. Zhong, Junhao, Yilin Zhong, Minghui Han, Tianjian Yang, and Qinghua Zhang. "The Impact of AI on Carbon Emissions: Evidence from 66 Countries." Applied Economics 56 no. 25 (2024): 2975–89. doi:10.1080/00036846.2023.2203461.
102. Mubenga-Tshitaka, Jean-Luc, Johane Dikgang, John W. Muteba Mwamba, and Dambala Gelo. "Climate Variability Impacts on Agricultural Output in East Africa." Cogent Economics & Finance 11 no. 1 (2023): 2181281. doi:10.1080/23322039.2023.2181281.
103. United Nations. "General Assembly Adopts Landmark Resolution on Steering Artificial Intelligence towards Global Good, Faster Realization of Sustainable Development." United Nations Meetings Coverage and Press Releases. March 21, 2024. https://press.un.org/en/2024/ga12588.doc.htm.
104. Donahoe, Eileen, and Megan MacDuffee Metzger. "Artificial Intelligence and Human Rights." Journal of Democracy 30 no. 2 (2019): 115–26. doi:10.1353/jod.2019.0029.
105. Humble, Kristian. "Artificial Intelligence, Social Harms and Human Rights." In Critical Criminological Perspectives, 57–76. 2023. doi:10.1007/978-3-031-19149-7_3.
106. United Nations. "General Assembly Adopts Landmark Resolution on Steering Artificial Intelligence towards Global Good, Faster Realization of Sustainable Development."