The Challenges with Designing Fair Algorithms

By TAP Staff Blogger

Posted on November 21, 2018



Algorithmic technologies powered by artificial intelligence (AI) promise to generate tremendous benefits for society (e.g., advanced decision-making tools that address difficult medical challenges, route traffic with near-optimal efficiency, and curate media feeds to stunning degrees of personalization). However, they also raise concerns about unintentional discrimination, bias, and harmful consequences (e.g., digital advertising campaigns that promote STEM-related work to men more frequently than women, the use of machine learning to spread disinformation through social media, and facial recognition algorithms that inadvertently discriminate).


The Ethical Machine, a new initiative funded by the Shorenstein Center at the Harvard Kennedy School, explores the challenges of designing fair algorithms. The project’s website provides essays written by policy experts and academics who share their research and work about AI ethics.


Three TAP scholars have work featured on The Ethical Machine. Below are the introductory paragraphs with links to their full essays.


Algorithmic Bias or Fairness: The Importance of the Economic Context
By Catherine Tucker, Sloan Distinguished Professor of Management and Professor of Marketing at Massachusetts Institute of Technology Sloan School of Management


As a society, we have shifted from a world where policy fears are focused on the ubiquity of digital data, to one where those concerns now center on the potential harm caused by the automated processing of this data. Given this, I find it useful as an economist to investigate what leads algorithms to reach apparently biased results—and whether there are causes grounded in economics.


AI Marketing as a Trojan Horse
By Joseph Turow, Robert Lewis Shayon Professor of Communication at the Annenberg School for Communication at the University of Pennsylvania
 

One can imagine amazing things when contemporary marketing meets sophisticated AI. While you are at work, your knowledgeable digital helper purchases plane tickets for your upcoming trip using criteria it developed from analyzing dozens of flights you’ve taken. As you travel home, the same virtual assistant orders a week’s worth of groceries with the help of another virtual assistant that conveys your family’s culinary interests. That evening it suggests movies and TV shows for various members of your family based on their past interests, current social situations, and real-time emotional states. These activities can’t happen for most of us in quite these ways right now, but artificial intelligence technologists will tell you they are right around the corner as marketers apply artificial intelligence for collecting, evaluating, and acting on data about us in unprecedented ways. Look a bit deeper into these developments, though, and you might see a less optimistic reality. Marketing technology driven by artificial intelligence can also be a Trojan Horse—a surreptitious channel through which marketers discriminate against many of the people who welcome it and by which governments create new paths to impose their will on citizens who just wanted better entertainment and shopping.


Don’t Believe Every AI You See
By danah boyd, founder and president, Data & Society Research Institute; principal researcher, Microsoft Research; and visiting professor at New York University, and M.C. Elish, research lead, Data & Society Research Institute


At a recent machine learning conference, Ali Rahimi—a leading research scientist at Google—sparked a controversy. During his acceptance speech after receiving the “Test of Time” award honoring a lasting contribution to the field, Rahimi provocatively proposed that “machine learning has become alchemy.” He argued that even though alchemy “worked,” it was based on unverifiable theories, many of which turned out to be false, such as curing illness with leeches or transmuting metal into gold. The parallel is that many of today’s machine learning models, especially those that involve the use of neural networks or deep learning, are poorly understood and under-theorized. They seem to work, and that is enough. While this may not matter in every instance, Rahimi emphasized, it is profoundly consequential when it comes to systems that serve important social functions, in fields such as healthcare and criminal justice, and in tasks like determining creditworthiness and curating news. He concluded: “I would like to live in a world whose systems are built on rigorous, reliable, verifiable knowledge, and not on alchemy.”
