Kate Crawford Discusses Discrimination Concerns with Developing AI Language Model Systems
Publication Date: August 29, 2022

Dr. Kate Crawford, from an interview with France 24:

I'm much more concerned about the types of harms these [artificial intelligence] systems are already doing to the world in terms of discrimination and biases in things like the criminal justice system or education or healthcare. I think we've got, shall we say, much more real-term concerns that we should be focused on rather than this fear that we might be creating something like HAL in "2001." It's really a long way off, if we ever get there.
Earlier this summer, a Washington Post article, “Google fired engineer who said its AI was sentient,” prompted a debate about advances in AI (artificial intelligence), public misunderstanding of how these machine-learning systems work, and corporate responsibility.
France 24 reached out to Dr. Kate Crawford, Research Professor at USC Annenberg and Senior Principal Researcher at Microsoft Research, NYC, asking her to share insights from more than 20 years of work on large-scale data systems, machine learning, and AI in the contexts of labor, politics, and the environment.
Below are a few excerpts from this interview, "Google's 'sentient' AI system LaMDA 'is really just a very large chatbot'" (France 24, June 15, 2022). A link to the full interview appears at the end of this article.
Google’s LaMDA Is a Machine-Learning Language Model for Dialogue Applications
I think the reality is these systems [machine-learning language models such as Google’s LaMDA], in terms of the way that they engage, are really not much more complicated than very large statistical analysis at scale. There’s nothing specifically about the way the model is working that we should be worried about. Instead perhaps, we might be more worried about the types of biases or stereotypes that are very commonly built into these models. They are famous for producing forms of speech that are very hateful, or full of misinformation or dehumanizing language. And actually that’s a much harder problem for tech companies to solve.
I would suggest, in fact (and this is why I wrote a book called Atlas of AI on this topic), that these models are neither artificial nor intelligent. They are in fact built from huge amounts of dialogue extracted from the internet, and they are basically just producing different sorts of responses based on things you say.
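To make the "very large statistical analysis" description concrete, here is a minimal sketch of the underlying idea: a toy bigram model that counts which word follows which in a body of text and samples continuations from those counts. This is an illustration only, not how LaMDA or any production system works; the corpus, names, and scale are invented for the example, and real models are vastly larger and architecturally different. The point is simply that generation here is counting and sampling, with no understanding involved, which is the sense in which Crawford calls these systems statistical analysis at scale.

```python
# Toy bigram "language model": count which word follows which,
# then sample continuations from those counts. Everything here is
# invented for illustration.
import random
from collections import Counter, defaultdict

corpus = "the model predicts the next word and the next word follows from the last word"

words = corpus.split()
counts = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1  # how often `nxt` follows `prev`

def generate(start: str, length: int = 8) -> str:
    """Sample a continuation by repeatedly choosing a statistically likely next word."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # no observed continuation for this word
        choices, weights = zip(*followers.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the next word and the next word follows from"
```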
What Are Important Legislative Actions to Consider as AI Systems Develop?
Personally, I think there's a lot that can be done that is low-hanging fruit in the legislative domain. Right now we don't have individual rights of action against a system that might prevent you from getting a job or getting into a school. And we've seen lots of instances where there have been protests against AI systems, ones that came back with incorrect grades for students in the UK, for example, or that produced discriminatory results for refugees trying to gain access to particular countries across Europe. These are the sorts of test cases that we're seeing. Legislation really needs to respond, first of all, by creating these rights of action. Secondly, I would say, by creating more systems of transparency so that we can see how these systems are working. For example, trade secrecy law is commonly used as a way to prevent researchers and auditors from looking at how an AI system is working to see how it might be producing forms of harm. There are certainly ways we can address that with law, in ways that would help us figure out both the short- and medium-term effects of these kinds of systems.
And there is something important going on: in order to create these very large models that we're starting to see (for example, the one that Google is using, called LaMDA), you need vast machine-learning models that cost an enormous amount of money to make and burn a huge amount of energy to run. If you look at the world, there are really only a few companies that can afford to build things at this scale, and certainly very few universities can compete. So what we're seeing is a system where more and more power is being put into fewer and fewer hands. It's now an extremely concentrated industry. The sorts of questions I think we need to be asking are: what are the democratic implications when just a few companies get to create the systems that are remapping the world, and telling us how things should look and how they're named? I think this is something that we should be spending a lot more time thinking about. And that is ultimately a question for the public.
Watch the full interview: “Google's 'sentient' AI system LaMDA 'is really just a very large chatbot'” (France 24, June 15, 2022).
Kate Crawford is a leading scholar of the social and political implications of artificial intelligence. Her latest book is Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press, 2021). Dr. Crawford is a Research Professor of Communication and Science and Technology Studies at USC's Annenberg School for Communication and Journalism and a Senior Principal Researcher at Microsoft Research in New York. She also holds the inaugural Visiting Chair for AI and Justice at the École Normale Supérieure in Paris.
Dr. Crawford’s work also includes collaborative projects and visual investigations. Her project Anatomy of an AI System with Vladan Joler — which maps the full lifecycle of the Amazon Echo — won the Beazley Design of the Year Award in 2019, and is in the permanent collection of the Museum of Modern Art in New York. Her collaboration with the artist Trevor Paglen, "Excavating AI," won the Ayrton Prize from the British Society for the History of Science.