Kate Crawford's New Book Examines the Power, Politics, and Planetary Costs of AI
Publication Date: April 30, 2021

By invoking an atlas, I'm suggesting that we need new ways to understand the empires of artificial intelligence. … A topographical approach offers different perspectives and scales, beyond the abstract promises of artificial intelligence or the latest machine learning models. The aim is to understand AI in a wider context by walking through the many different landscapes of computation and seeing how they connect.
- Kate Crawford in “Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence”
In her new book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Professor Kate Crawford reveals how artificial intelligence (AI) is a technology of extraction: from the minerals drawn from the earth, to the labor pulled from low-wage information workers, to the data taken from every action and expression. Rather than taking a narrow focus on code and algorithms, Professor Crawford offers a material and political perspective on what it takes to make AI and how it centralizes power.
In an interview with MIT’s Technology Review, Professor Crawford emphasizes the need to acknowledge both the politics and the physical impact that AI has on the planet. In the below excerpt from “Stop talking about AI ethics. It’s time to talk about power.” (MIT Technology Review, April 23, 2021), Professor Crawford explains the need to engage in broad democratic conversations around how AI systems are impacting the planet and people.
If there’s been a real trap in the tech sector for the last decade, it’s that the theory of change has always centered engineering. It’s always been, “If there’s a problem, there’s a tech fix for it.” And only recently are we starting to see that broaden out to “Oh, well, if there’s a problem, then regulation can fix it. Policymakers have a role.”
But I think we need to broaden that out even further. We have to say also: Where are the civil society groups, where are the activists, where are the advocates who are addressing issues of climate justice, labor rights, data protection? How do we include them in these discussions? How do we include affected communities?
Professor Crawford discusses key points from her research and the Atlas of AI with USC Annenberg’s Critical Conversations. In the below excerpt from “Kate Crawford maps a world of extraction and exploitation in ‘Atlas of AI’,” she explains the problem with large data sets collected without our knowledge.
If there’s an original sin of the field, it’s this moment when the idea of just harvesting the entire internet — taking people’s photos, taking people’s texts, taking their responses to each other — and seeing it as an aggregate infrastructure that had no specific histories or stories or intimacies or vulnerabilities contained within it. To strip it of all that, and say, this is just “raw material” — with very much scare quotes around that — to drive large-scale systems of prediction and optimization. That has brought us to this point, where I think we should be asking much harder questions of those data sets, not only because of their origins, but because of the way in which they bring, smuggling in, a worldview that is so rarely questioned — and is producing some very serious harms.
Below are a few excerpts from the introduction of Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford (Yale University Press, April 6, 2021):
In this book, I'll explore how artificial intelligence is made, in the widest sense, and the economic, political, cultural, and historical forces that shape it. Once we connect AI within these broader structures and social systems, we can escape the notion that artificial intelligence is a purely technical domain. At a fundamental level, AI is technical and social practices, institutions and infrastructures, politics and culture. Computational reason and embodied work are deeply interlinked: AI systems both reflect and produce social relations and understandings of the world.
Inside the Atlas of AI:
In chapter 1, we begin in the lithium mines of Nevada, one of the many sites of mineral extraction needed to power contemporary computation. Mining is where we see the extractive politics of AI at their most literal. On the software side, building models for natural language processing and computer vision is enormously energy hungry, and the competition to produce faster and more efficient models has driven computationally greedy methods that expand AI's carbon footprint. We trace the environmental and human birthplaces of planetary computation networks and see how they continue to terraform the planet.
Chapter 2 shows how artificial intelligence is made of human labor. And we'll hear from the workers who are protesting against the way that AI systems are increasing surveillance and control for their bosses. … Labor is also a story about time. Coordinating time demands increasingly detailed information about what people are doing and how and when they do it.
Chapter 3 focuses on the role of data. All publicly accessible digital material, including data that is personal or potentially damaging, is open to being harvested for training datasets that are used to produce AI models. Beyond the serious issues of privacy and ongoing surveillance capitalism, the current practices of working with data in AI raise profound ethical, methodological, and epistemological concerns.
And how is this data used? In chapter 4, we look at the practices of classification in artificial intelligence systems. We see how contemporary systems use labels to predict human identity, commonly using binary gender, essentialized racial categories, and problematic assessments of character and creditworthiness. By looking at how classifications are made, we see how technical schemas enforce hierarchies and magnify inequity.
Chapter 5 considers the claim of the psychologist Paul Ekman that there is a small set of universal emotional states that can be read directly from the face. Despite this unstable premise, these tools are being rapidly implemented in hiring, education, and policing systems.
In chapter 6 we look at the ways in which AI systems are used as a tool of state power. The military past and present of artificial intelligence have shaped the practices of surveillance, data extraction, and risk assessment we see today. The military logics that have shaped AI systems are now part of the workings of municipal government, and they are further skewing the relation between states and subjects.
The concluding chapter assesses how artificial intelligence functions as a structure of power that combines infrastructure, capital, and labor. AI systems are built with the logics of capital, policing, and militarization, and this combination further widens the existing asymmetries of power.
But these logics can be challenged, just as systems that perpetuate oppression can be rejected. As conditions on Earth change, calls for data protection, labor rights, climate justice, and racial equity should be heard together. When these interconnected movements for justice inform how we understand artificial intelligence, different conceptions of planetary politics become possible.
Read more:
- Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press, April 6, 2021)
- “Stop talking about AI ethics. It’s time to talk about power.” (MIT Technology Review, April 23, 2021)
- “Kate Crawford maps a world of extraction and exploitation in ‘Atlas of AI’” (USC Annenberg Critical Conversations, April 14, 2021)
Additionally, watch “A Conversation with Kate Crawford, Author of 'Atlas of AI',” in which Professor Josh Kun talks with Professor Crawford about her new book. (USC Annenberg School for Communication and Journalism, April 14, 2021)
About Kate Crawford
Kate Crawford is a Research Professor of Communication and Science and Technology Studies at USC’s Annenberg School for Communication and Journalism and a Senior Principal Researcher at Microsoft Research in New York. In 2021, she will be the Miegunyah Distinguished Visiting Fellow at the University of Melbourne. Professor Crawford is a leading scholar of the social and political implications of artificial intelligence. Over her 20-year career, her work has focused on understanding large-scale data systems, machine learning, and AI in the wider contexts of history, politics, labor, and the environment.