AI’s Prediction Power and the Anti-Discrimination Opportunity
Publication Date: October 15, 2022

"The uncomfortable truth about discrimination is that, as power shifts, winners and losers are generated. Thus, resistance to adopting AIs is likely to be higher precisely when AIs have the potential to engender new systems that eliminate many aspects of discrimination."
from "Power and Prediction: The Anti-Discrimination Opportunity" by Ajay Agrawal, Joshua Gans, and Avi Goldfarb
In a recent article, Rotman School of Management economists Joshua Gans, Ajay Agrawal, and Avi Goldfarb explain why "AI [artificial intelligence] systems solutions have the potential to reduce discrimination across domains, from education to healthcare and banking."
"Power and Prediction: The Anti-Discrimination Opportunity" outlines a system mindset shift that can remove bias from artificial intelligence technologies, whether the bias is intentionally or unintentionally introduced into algorithmic processes. Below are a few excerpts from "Power and Prediction: The Anti-Discrimination Opportunity" (the Rotman Magazine, Fall 2022).
The Anti-Discrimination Opportunity
To address discrimination ... You need to detect the discrimination and you need to fix it. This is true of both human and machine predictions. In other words, eliminating discrimination requires a system.
Detecting Discrimination
Measuring discrimination in people is hard, requiring careful control over the context. But measuring discrimination by machines is more straightforward: Feed the machines the right data and see what comes out. The researcher can go to the AI and say, what if the person is like this? What if the person is like that? It is possible to try thousands of what-ifs. That is not possible with humans.
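To make the "thousands of what-ifs" idea concrete, here is a minimal sketch in Python: it trains a model on deliberately biased synthetic data, then asks the same question twice for each applicant, changing only a stand-in protected attribute. The data, attribute names, and model below are illustrative assumptions, not the authors' system or any real deployment.

```python
# Illustrative sketch only: synthetic data and a simple model used to show
# "what-if" probing of an AI's predictions for discrimination.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic historical hiring data: [years_experience, test_score, group],
# where "group" stands in for a protected attribute (0 or 1).
n = 5000
experience = rng.normal(5, 2, n)
score = rng.normal(70, 10, n)
group = rng.integers(0, 2, n)
# Biased historical labels: group 1 was hired less often at the same quality.
hired = (0.4 * experience + 0.1 * score - 2.0 * group + rng.normal(0, 1, n)) > 9

X = np.column_stack([experience, score, group])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# What-if probing: hold experience and score fixed, flip only the group flag,
# and measure the gap in predicted hiring probability across many applicants.
candidates = np.column_stack([rng.normal(5, 2, 1000), rng.normal(70, 10, 1000)])
as_group0 = model.predict_proba(np.column_stack([candidates, np.zeros(1000)]))[:, 1]
as_group1 = model.predict_proba(np.column_stack([candidates, np.ones(1000)]))[:, 1]

print(f"Mean predicted hiring probability, group 0: {as_group0.mean():.3f}")
print(f"Mean predicted hiring probability, group 1: {as_group1.mean():.3f}")
print(f"Average what-if gap: {(as_group0 - as_group1).mean():.3f}")
```

Because the model is software, the same pair of queries can be repeated for any number of hypothetical applicants, which is exactly the kind of controlled comparison that is impractical with human decision-makers.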
Fixing Discrimination
We do not mean to leave the impression that fixing discrimination is easy. First, it requires humans who want to fix the bias. If the humans who manage the AI want to deploy a tool that discriminates, they will have little difficulty doing so. And because the AI is software, their discrimination can happen at scale. However, it is easier to catch a deliberately discriminatory AI than a deliberately discriminatory human, because the AI leaves an audit trail: A well-funded regulator with well-trained auditors who can access the AI can run simulations to look for discrimination. Unfortunately, our current legal and regulatory systems struggle with these challenges as they were designed for a world of human decision-makers.
Second, even when deployed by well-intentioned humans who want to reduce biases, details matter — and focusing on details is time-consuming and expensive. There are many ways bias can seep into an AI’s predictions. Fixing bias requires understanding its source, which requires investments in storing data about past decisions. It also requires investments in simulating potential sources of bias to see how the AI holds up. And the first attempt might not work. New data might need to be collected and new processes required.
Third, an AI that reduces bias can change who holds decision-making power in an organization. Without AI, it might have been individual managers making decisions on who to hire. Even with the best intentions, these managers might hire through their social connections in a way that leads to unintended bias. With an AI designed to reduce bias, hiring through social connections will be harder. A more senior executive would set the threshold for which résumés should be considered. That executive might recognize that if all the company’s managers were hired through their social connections, a diverse workforce would be impossible. The AI reduces discrimination, but it also reduces the discretion that individual managers have in hiring relative to the objectives set by the executive suite. As a result, those managers might resist a system-level change that would reduce their power.
It Takes a System
In 2014, Amazon developed an AI system to assist with its recruiting.
A year later, the system was scrapped and never made it to the field. Why? Because it was found not to be evaluating candidates for technical jobs in a gender-neutral way. The reason was a familiar one: Amazon's AI was trained on past data that was overwhelmingly male. When Amazon looked under the hood, it found the AI was explicitly down-weighting references to women, including mentions of women's colleges. Simple tweaks could not restore neutrality.
You might read stories like this and think AI is hopelessly biased. But there is another way to read it: the AI was biased, it was judged to be biased, and so it was not deployed. … This experience has taught AI developers that training on past data is often not good enough. New sources of data are required, and these take time to develop. But in the end, the resulting AI can be evaluated. What's more, it can be continually monitored for performance.
AI Bias Can Be Detected and Addressed
The good news is that new AI system solutions across domains, from education to healthcare and from banking to policing, can be designed and implemented to reduce discrimination. And these systems can be monitored, both continuously and retrospectively, to confirm that they keep discrimination out.
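As one illustration of what such monitoring could look like in practice, the sketch below flags a batch of logged decisions whenever one group's selection rate falls below a chosen fraction of the highest group's rate. The field names, the assumption that decisions and group labels are logged for auditing, and the "four-fifths" threshold are illustrative choices, not prescriptions from the article.

```python
# Illustrative monitoring sketch: periodically compare selection rates across
# groups and flag the system for review when the gap exceeds a tolerance.
from dataclasses import dataclass

@dataclass
class Decision:
    group: str      # protected-attribute value recorded for auditing (assumed logged)
    selected: bool  # the AI-assisted outcome, e.g. a resume passed screening

def selection_rates(decisions: list[Decision]) -> dict[str, float]:
    """Fraction of positive outcomes per group."""
    rates: dict[str, float] = {}
    for g in {d.group for d in decisions}:
        members = [d for d in decisions if d.group == g]
        rates[g] = sum(d.selected for d in members) / len(members)
    return rates

def disparate_impact_alert(decisions: list[Decision], min_ratio: float = 0.8) -> bool:
    """Flag the batch if any group's selection rate falls below min_ratio
    times the highest group's rate (the familiar 'four-fifths' heuristic)."""
    rates = selection_rates(decisions)
    highest = max(rates.values())
    return any(rate < min_ratio * highest for rate in rates.values())

# Example batch: group B is selected at half the rate of group A,
# so the monitor raises a flag and the decisions go back for investigation.
batch = [Decision("A", True)] * 40 + [Decision("A", False)] * 60 \
      + [Decision("B", True)] * 20 + [Decision("B", False)] * 80
print(disparate_impact_alert(batch))  # True
```

A check like this could run after every batch of decisions, and the same calculation can be applied retrospectively to archived decisions, which is the audit-trail advantage the excerpt describes.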
Read the full article: "Power and Prediction: The Anti-Discrimination Opportunity" by Ajay Agrawal, Joshua Gans, and Avi Goldfarb (Rotman Magazine, Fall 2022).
Read More
Professors Gans, Goldfarb, and Agrawal have collaborated on two books that are being released next month.
"Prediction Machines: The Simple Economics of Artificial Intelligence" by Ajay Agrawal, Joshua Gans, and Avi Goldfarb. This is an updated and expanded version from the book published in 2018 by the same name. (Harvard Business Review, November 15, 2022)
The impact AI will have is profound, but the economic framework for understanding it is surprisingly simple. In "Prediction Machines," three eminent economists recast the rise of AI as a drop in the cost of prediction. With this masterful stroke, they lift the curtain on the AI-is-magic hype and provide economic clarity about the AI revolution as well as a basis for action by executives, policy makers, investors, and entrepreneurs. In this new, updated edition, the authors illustrate how, when AI is framed as cheap prediction, its extraordinary potential becomes clear: (1) Prediction is at the heart of making decisions amid uncertainty. Our businesses and personal lives are riddled with such decisions. (2) Prediction tools increase productivity: operating machines, handling documents, communicating with customers. (3) Uncertainty constrains strategy. Better prediction creates opportunities for new business strategies to compete. Also in new material, the authors explain how prediction fits into decision-making processes and how foundational technologies such as quantum computing will impact business choices.
"Power and Prediction: The Disruptive Economics of Artificial Intelligence" by Ajay Agrawal, Joshua Gans, and Avi Goldfarb (Harvard Business Review, November 15, 2022)
Artificial intelligence (AI) has impacted many industries around the world: banking and finance, pharmaceuticals, automotive, medical technology, manufacturing, and retail. But it has only just begun its odyssey toward cheaper, better, and faster predictions that drive strategic business decisions. When prediction is taken to the max, industries transform, and with such transformation comes disruption. What is at the root of this? In their bestselling first book, "Prediction Machines," eminent economists Ajay Agrawal, Joshua Gans, and Avi Goldfarb explained the simple yet game-changing economics of AI. Now, in "Power and Prediction," they go deeper, examining the most basic unit of analysis: the decision. The authors explain that the two key decision-making ingredients are prediction and judgment, and we perform both together in our minds, often without realizing it. The rise of AI is shifting prediction from humans to machines, relieving people from this cognitive load while increasing the speed and accuracy of decisions. This sets the stage for a flourishing of new decisions and has profound implications for system-level innovation. Redesigning systems of interdependent decisions takes time (many industries are in the quiet before the storm), but when these new systems emerge, they can be disruptive on a global scale. Decision-making confers power. In industry, power confers profits; in society, power confers control. This process will have winners and losers, and the authors show how businesses can leverage opportunities, as well as protect their positions.
About the Authors
Joshua Gans is a Professor of Strategic Management and holder of the Jeffrey S. Skoll Chair of Technical Innovation and Entrepreneurship at the Rotman School of Management, University of Toronto, with a cross-appointment in the Department of Economics. He is also Chief Economist of the University of Toronto's Creative Destruction Lab. Professor Gans’ research focuses primarily on the economic drivers of innovation and scientific progress, with core interests in digital strategy and antitrust policy.
Ajay Agrawal is the Geoffrey Taber Chair in Entrepreneurship and Innovation and Professor of Strategic Management at the University of Toronto’s Rotman School of Management. His research is focused on the economics of machine intelligence.
Avi Goldfarb is the Rotman Chair in Artificial Intelligence and Healthcare and a professor of marketing at the Rotman School of Management, University of Toronto. His research focuses on the opportunities and challenges of the digital economy.
About Joshua Gans
Joshua Gans is the Jeffrey S. Skoll Chair of Technical Innovation and Entrepreneurship, as well as Professor of Strategic Management, at the University of Toronto Rotman School of Management. Previously he was a professor at the Melbourne Business School, University of Melbourne, and at the School of Economics, University of New South Wales.