The Paradox of Automation as Anti-Bias Intervention

Article Source: Cardozo Law Review, Vol. 41, No. 5, pp. 1671-1742, 2020
Automated decision-making systems may facilitate bias. Job seekers lack access to the data and algorithms used by automated hiring systems, hindering plaintiffs’ efforts to prove “disparate impact” discrimination under Title VII.


Policy Relevance:

New legal doctrines are needed to support equal opportunity in employment.


Key Takeaways:
  • Some expect that removing humans from decision-making processes and replacing them with automated decision-making systems will eliminate bias; however, sometimes, automated systems amplify bias.
  • A case study of algorithmic systems used in the hiring process reveals problematic features of these systems at odds with the principle of equal opportunity in employment.
    • Automated background checks of social media incorporate unwarranted assumptions that an applicant’s private behavior (like swearing) is relevant to their professional behavior.
    • Systems that analyze facial expressions struggle to read the expressions of those with darker skin.
    • Checks reveal information that employers are not supposed to consider, such as pregnancy status.
  • Bias is introduced into the hiring process by the legal system's deference to employers, who use nebulous criteria such as "cultural fit"; some firms now prefer to focus more on “values fit.”
  • Legal frameworks that ensure accountability for technological hiring tools are lacking, making bias difficult to detect.
  • New legal approaches could support the liability of employers and makers of algorithmic hiring systems; for example, hiring platforms could serve as “information fiduciaries” of job applicants.
  • A new doctrine of discrimination per se should be created, modeled on the idea of negligence per se.
    • An employer's failure to audit and correct automated hiring systems that have a disparate impact should serve as prima facie evidence of discriminatory intent.
    • The employer could rebut this evidence by showing business necessity.
  • Legal protections should be established for consumers, modeled on the Fair Credit Reporting Act, so that consumers may access the information collected and used by automated hiring systems.


About Ifeoma Ajunwa

Ifeoma Ajunwa is the AI.Humanity Professor of Law and Ethics and the Founding Director of the AI and the Law Program at Emory Law. Starting January 2024, she will also be the Associate Dean for Projects and Partnerships. Additionally, Professor Ajunwa has been a Faculty Associate at the Berkman Klein Center at Harvard University since 2017. Her research interests are at the intersection of law and technology with a particular focus on the ethical governance of workplace technologies, and also on diversity and inclusion in the labor market and the workplace.