BLOG POST

Ifeoma Ajunwa Proposes A Veil of Ignorance for Automated Decision-Making

Publication Date: June 28, 2021 | 7 minute read
Written By: TAP Staff Blogger
Featuring: Ifeoma Ajunwa, TAP Scholar
Topics: Artificial Intelligence and Machine Learning
“You can only fix a problem once you know it exists. I do think automated hiring systems have a role to play in anti-discriminatory law.” - Ifeoma Ajunwa, from her talk at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society

Last month at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, University of North Carolina School of Law professor Ifeoma Ajunwa delivered a keynote address on unintended bias in automated decision-making. Titled “A Veil of Ignorance for Automated Decision-Making,” Professor Ajunwa’s talk examined the paradox of automated decision-making: “although it’s meant to evade human bias, in actuality, it not only replicates that same bias, it can also amplify it at scale.”

Below are a few excerpts from Professor Ajunwa’s keynote talk, “A Veil of Ignorance for Automated Decision-Making”:

About Professor Ajunwa

I am a legal scholar and also a sociologist. I am also what I would term an accidental tech scholar. I stumbled into studying tech as a graduate student at Columbia University, where I was completing my doctoral research on the reentry of the formerly incarcerated. To paraphrase Dostoevsky, a society should be judged by how it treats its prisoners. Which is, to put it another way, a society should be judged by how it treats its least advantaged.
During my doctoral research, I met individuals who had served time in prison and who were eager, desperate even, to return to work and rejoin society. And yet, no amount of rehabilitation, no measure of social and cultural capital brokerage, could save them from the swift dismissal of automated hiring systems. I wrote about this in my article, “The Paradox of Automation as Anti-Bias Intervention.”

The Paradox of Automated Decision-Making

We are at a point in our society where we have finally accepted the mountain of social-scientific evidence of human bias. With this recognition of human bias also come attempts to fix it. And that is why many have turned to automated decision-making. The paradox of automated decision-making, however, is that although it’s meant to evade human bias, in actuality, it not only replicates that same bias, it can also amplify it at scale. And because of socio-technical phenomena like algorithmic exceptionalism, automated decision-making can even serve to obfuscate bias, making it even more difficult to detect.
Yet decision makers have overlooked this paradox and have increasingly been delegating sensitive and important decision-making to machine-learning algorithms, a socio-technical phenomenon that I term algorithmic capture.
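
To make the replication mechanism concrete, consider a minimal synthetic sketch in Python (our illustration, not from Professor Ajunwa’s talk; the data, feature names, and numbers are all hypothetical). A model is trained on historical hiring decisions that penalized one group; even with the protected attribute excluded from the features, the model recovers the disparity through a correlated proxy:

    # Hypothetical sketch: a model trained on biased historical hiring decisions
    # reproduces the bias, even without seeing the protected attribute directly.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)          # synthetic 0/1 group membership
    skill = rng.normal(0, 1, n)            # job-relevant signal, identical across groups
    proxy = group + rng.normal(0, 0.3, n)  # e.g., a zip-code-like feature correlated with group

    # Historical decisions: the same skill bar, plus a biased penalty against group 1.
    hired = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

    X = np.column_stack([skill, proxy])    # protected attribute deliberately excluded
    model = LogisticRegression().fit(X, hired)
    predicted = model.predict(X)

    for g in (0, 1):
        print(f"group {g}: historical hire rate {hired[group == g].mean():.2f}, "
              f"model hire rate {predicted[group == g].mean():.2f}")

Run at scale, such a model does not merely inherit the historical gap; every future screening decision automates it, which is the amplification Professor Ajunwa describes.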

Impediments to Addressing Bias in Automated Decision-Making

Recently, slogans such as “greed is good” and “move fast and break things” were commonplace directives deployed to justify a rapacious business approach that privileged shareholder primacy, paid little regard to environmental protections, and devalued workers’ rights. But we’ve now come to question the societal good of this approach. Still, the impediments to addressing bias in automated decision-making are one-word slogans like efficiency, optimization, and even fairness. As we debate what amount of bias is allowed for efficiency, or quibble over technical meanings of fairness, real people, like formerly incarcerated people who are shut out by automated hiring systems, suffer.
Indeed, as a coauthor and I found, automated hiring systems are optimized to automate rejection. Their efficiency in specifications of fit is expressly designed to clone the best workers already in an organization. Such a system can only serve to reproduce existing and historical race, gender, ableist, or other biases in past hiring decisions.
In 2016, the AI scholar Kate Crawford wrote about AI’s “white guy problem.” This references the idea that, since most AI or automated decision-making systems are created by white men, some of the recent foibles of automated decision-making could be attributed to the ignorance of their creators. That is, these automated decision-making systems were merely reflecting back their creators’ blind spots with regard to race, gender, and so on.

A Rawlsian Veil of Ignorance

To truly fulfill the social contract, the political philosopher John Rawls proposed that decision makers should imagine that they sit behind a veil of ignorance that keeps them from knowing their position in society, and thus keeps them from both identifying and privileging their own specific individual circumstances. Rawls believed that it is only by being ignorant of our circumstances that we can work toward the common good rather than self-interest.
For Rawls, the veil of ignorance works hand in hand with the ‘difference principle,’ under which he argues that, for the social contract to guarantee everyone equal opportunity for success, decision makers must focus their choices on helping the worst off in society. A Rawlsian veil of ignorance is therefore necessary because decision makers who adopt it will have no choice but to privilege the worst off in society, since they cannot know whether they themselves are in this category.
Now, it is important to underscore here what the Rawlsian veil of ignorance is not. It is not a color-blindness ideology that exhorts us to ignore the social fact of race and racism. It is not a bias-ameliorating exercise that seeks to find the optimal amount of bias while maximizing efficiency. Rather, it is a deliberate practice that takes clear-eyed stock of society’s inequalities, one that looks at the faces at the bottom of the well and asks, ‘How can I lift these people up?’
Thus, as the first step to enacting a Rawlsian veil of ignorance for the creation and implementation of automated systems, we should start with these questions:
  • What if we stopped designing algorithms for efficiency?
  • What if we tried instead to maximize for inclusion?
  • What if a paramount concern before deploying any automated decision-making system became testing whether such systems could help or hurt the least advantaged in our society?
  • Finally, what if the tech industry decided to take up new slogans such as: ‘move slow and audit things’?
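
As one concrete reading of ‘move slow and audit things,’ here is a minimal Python sketch (our illustration, not a method from the talk; the groups and counts are hypothetical) of a first-pass disparate-impact check based on the EEOC’s four-fifths rule, which compares each group’s selection rate to that of the most-selected group:

    from collections import defaultdict

    def adverse_impact_ratios(decisions):
        """decisions: iterable of (group, was_selected) pairs from a hiring system."""
        selected = defaultdict(int)
        total = defaultdict(int)
        for group, was_selected in decisions:
            total[group] += 1
            selected[group] += int(was_selected)
        rates = {g: selected[g] / total[g] for g in total}
        top = max(rates.values())
        # A ratio below 0.8 is the conventional four-fifths-rule red flag.
        return {g: rate / top for g, rate in rates.items()}

    # Hypothetical outcomes: group A selected 60/100, group B selected 30/100.
    sample = ([("A", True)] * 60 + [("A", False)] * 40 +
              [("B", True)] * 30 + [("B", False)] * 70)
    print(adverse_impact_ratios(sample))  # {'A': 1.0, 'B': 0.5} -> group B is flagged

An audit of the kind Professor Ajunwa’s ‘auditing imperative’ contemplates would go much further, examining training data, proxy features, and the job-relatedness of selection criteria, but even this simple check surfaces disparities before deployment rather than after.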

Following Professor Ajunwa’s prepared talk, there was a robust Q & A that delved into topics such as:

  • What can white male software developers do to identify their blind spots?
  • How could the veil of ignorance be implemented?
  • Are there development and audit practices in other industries that you think AI developers might emulate?
  • How could the current regulatory environment be changed to incentivize ‘move slow and audit things’?
  • Do you see any promise for marginalized communities from automated hiring systems?

Watch the full keynote speech and Q & A with Professor Ifeoma Ajunwa: “A Veil of Ignorance for Automated Decision-Making”.

Read More:
  • “The Paradox of Automation as Anti-Bias Intervention” by Professor Ifeoma Ajunwa (Cardozo Law Review, Vol. 41, p. 1671, 2020)
  • “Ifeoma Ajunwa on the Paradox of Automation as Anti-Bias Intervention” (Yale Law School News, March 28, 2019)
  • “The Auditing Imperative for Automated Hiring” by Professor Ifeoma Ajunwa (Harvard Journal of Law & Technology, Vol. 34, forthcoming 2021)
    • TAP summary of the article, “The Auditing Imperative for Automated Hiring”

Ifeoma Ajunwa is an Associate Professor at the University of North Carolina School of Law. She is also the Founding Director of UNC’s Artificial Intelligence Decision-Making Research Program. Professor Ajunwa is also a faculty associate at the Berkman Klein Center for Internet & Society at Harvard University. Her research interests are at the intersection of law and technology, with a particular focus on the ethical governance of workplace technologies, and also on diversity and inclusion in the labor market and the workplace.

Tags
  • Discrimination
  • Bias

