Kate Crawford Discusses the Very Real Biases in AI

By TAP Staff Blogger

Posted on October 30, 2017

In a recent Wall Street Journal article, AI Now co-founder Kate Crawford stresses that “digital brains can be just as error-prone and biased as ours.”

Below are a few excerpts from “Artificial Intelligence—With Very Real Biases.”

As someone who researches the social implications of AI, I tend to think of something far more banal: a municipal water system, part of the substrate of our everyday lives. We expect these systems to work—to quench our thirst, water our plants and bathe our children. And we assume that the water flowing into our homes and offices is safe. Only when disaster strikes—as it did in Flint, Mich.—do we realize the critical importance of safe and reliable infrastructure.

Artificial intelligence is quickly becoming part of the information infrastructure we rely on every day. Early-stage AI technologies are filtering into everything from driving directions to job and loan applications. But unlike our water systems, there are no established methods to test AI for safety, fairness or effectiveness. Error-prone or biased artificial-intelligence systems have the potential to taint our social ecosystem in ways that are initially hard to detect, harmful in the long term and expensive—or even impossible—to reverse. And unlike public infrastructure, AI systems are largely developed by private companies and governed by proprietary, black-box algorithms.

These systems “learn” from social data that reflects human history, with all its biases and prejudices intact. Algorithms can unintentionally boost those biases, as many computer scientists have shown. Last year, a ProPublica exposé on “Machine Bias” showed how algorithmic risk-assessment systems are spreading bias within our criminal-justice system. So-called predictive policing systems are suffering from a lack of strong predeployment bias testing and monitoring. As one RAND study showed, Chicago’s algorithmic “heat list” system for identifying at-risk individuals failed to significantly reduce violent crime and also increased police harassment complaints by the very populations it was meant to protect. We have a long way to go before these systems can come close to the nuance of human decision making and even further until they can offer real accountability.

As the organizational theorist Peter Drucker once wrote, we can’t manage what we can’t measure. As AI becomes the new infrastructure, flowing invisibly through our daily lives like the water in our faucets, we must understand its short- and long-term effects and know that it is safe for all to use. This is a critical moment for positive interventions, which will require new tests and methodologies drawn from diverse disciplines to help us understand AI in the context of complex social systems. Only by developing a deeper understanding of AI systems as they act in the world can we ensure that this new infrastructure never turns toxic.

Read the full article: “Artificial Intelligence—With Very Real Biases.”

Kate Crawford is a Distinguished Research Professor at New York University, a Principal Researcher at Microsoft Research New York, and a Visiting Professor at the MIT Media Lab. With Meredith Whittaker, Professor Crawford is the co-founder and co-director of the AI Now Research Institute, an interdisciplinary research center dedicated to studying the social impacts of artificial intelligence. In July 2016, she co-chaired the Obama White House symposium on the near-term impacts of AI, which addressed artificial intelligence across four domains: labor, health, social inequality, and ethics. Professor Crawford’s recent publications address data bias and fairness, the social impacts of artificial intelligence, predictive analytics and due process, and algorithmic accountability and transparency.