The Unavoidable Politics of AI: A Call for Ethical Transformation
For some time, I have avoided discussing the political dimensions of artificial intelligence (AI), focusing instead on its practical applications in education and psychology. However, I believe that AI is the most disruptive technology in history—surpassing even the printing press, the personal computer, and the internet. Its rapid integration into every facet of society demands our collective attention, and as helping professionals, we must engage in these discussions.
I was particularly moved by Dr. Pratyusha Ria Kalluri’s dissertation, Inside AI: How Artificial Intelligence Is Concentrating Power. In this work, Kalluri challenges the common perception of AI as a neutral “black box” and instead argues that AI is deeply embedded in global hierarchies and colonial legacies. The dissertation calls for understanding, resistance, and transformation, insisting that we prioritize social justice and environmental sustainability in AI development. The following are some of my takeaways from Kalluri’s work:
AI is not just a technology; it functions within a global hierarchy that mirrors historical systems of power and control. Historically, colonial empires extracted resources, wealth, and labor from various regions to benefit a select few. Today, AI development follows a similar model, where data, labor, and raw materials are extracted from the Global South and economically disadvantaged communities to sustain the AI ecosystem controlled by a handful of powerful corporations and nations.
In colonial history, European empires envisioned themselves as the “brains” of the world, governing the “extremities” of colonized regions. This ideology persists in AI, where tech companies in the U.S. and Europe claim to possess the “best and biggest new brains” in the form of AI models. These models are trained on vast datasets sourced from across the globe, often without proper consent, fair compensation, or ethical considerations.
The extraction is not only digital but also material. The demand for minerals such as cobalt and lithium—essential for AI hardware—has fueled exploitative mining practices in countries like the Democratic Republic of the Congo, where labor conditions are inhumane, and local communities bear the environmental burden. The control over AI development also represents a claim to the future, where those who hold the technology shape the narratives, policies, and economic landscapes of tomorrow. However, this vision often excludes marginalized voices, reinforcing systemic inequities rather than addressing them.
AI’s Extractive Practices
AI’s impact extends far beyond algorithms; it relies on an expansive infrastructure that extracts resources from the planet and its people. Some of the key extractive practices include:
Energy Consumption: The operation of AI models requires immense computing power, leading to increased energy consumption. Some data centers have even revived fossil fuel power plants to meet demand, worsening environmental degradation.
Water Usage: Cooling AI data centers requires vast amounts of water, exacerbating shortages in regions already struggling with water scarcity.
Data Labor Exploitation: AI systems are trained using massive datasets labeled by an underpaid and underrecognized global workforce, often in precarious conditions.
Earth Exploitation: The mining of essential minerals, such as lithium and cobalt, contributes to environmental destruction, forced labor, and violent conflicts.
Calls to Action: Ethical Responsibility in AI Development
Kalluri’s dissertation makes an urgent plea for AI researchers and developers to recognize their role in perpetuating these hierarchies and to take responsibility for ethical transformation. As helping professionals, we, too, have a role to play in shaping the trajectory of AI. We must:
Recognize that AI is not neutral: AI systems reflect the biases, assumptions, and values of their creators. We must question who benefits from AI and who is harmed.
Advocate for transparency: Policymakers and consumers must demand greater transparency from AI companies regarding their environmental, labor, and data practices.
Support ethical AI initiatives: Whether through research, professional advocacy, or purchasing choices, we must support companies that prioritize fair labor practices, sustainability, and responsible AI development.
Listen to affected communities: AI should not be designed in isolation. Developers must engage with communities disproportionately impacted by AI’s extractive processes and systemic biases.
Challenge AI’s political narratives: AI is often presented as an inevitable, apolitical force. We must resist this framing and push for a democratic and inclusive approach to its development and deployment.
A Consumer’s Role in AI Ethics
Although Kalluri’s dissertation primarily addresses AI developers and policymakers, it also implies an important role for us as consumers. Those of us who use AI-enabled devices daily should be aware of the vast and often harmful processes that fuel these technologies. As consumers, we can:
Educate ourselves about AI’s environmental and labor impacts.
Question AI’s role in society and challenge its framing as an inevitable, neutral force.
Demand corporate accountability by advocating for responsible AI practices.
Support companies that prioritize sustainability and fair labor practices, and boycott those that do not.
Push for policies that regulate AI’s environmental impact and protect workers in the AI supply chain.
A Collective Duty to Shape AI’s Future
Artificial intelligence is shaping our world at an unprecedented pace, but it is not an inevitable force beyond our control. AI is a human-made system, and as such, we have the power—and the responsibility—to ensure that its development prioritizes equity, justice, and sustainability. Kalluri’s work underscores that AI’s political dimensions cannot be ignored, and as professionals dedicated to helping others, we must participate in shaping an AI future that serves all of humanity, not just the privileged few. I highly recommend reading the full dissertation.