The Quantified Worker: Professor Ifeoma Ajunwa’s Research on Workplace Surveillance and Automated Hiring Systems
Publication Date: June 29, 2023
Professor Ifeoma Ajunwa discussing her book, “The Quantified Worker.”
Emory Law Professor Ifeoma Ajunwa’s new book, The Quantified Worker: Law and Technology in the Modern Workplace, explores how the workforce science of today goes far beyond increasing efficiency and threatens to erase individual personhood.
Ifeoma Ajunwa is the AI.Humanity Professor of Law and Ethics and the Founding Director of the AI and the Law Program at Emory Law. Prior to joining Emory Law in 2023, she was a tenured law professor at the University of North Carolina School of Law, where she founded the Artificial Intelligence Decision-Making Research (AI-DR) Program. Her research sits at the intersection of law and technology, with a particular focus on the ethical governance of workplace technologies and on diversity and inclusion in the labor market and the workplace.
In a recent talk about The Quantified Worker, Professor Ajunwa explains that productivity tracking of workers developed on plantations. Before large plantations, independent artisans and traders controlled their own shops. With the advent of plantation slavery came a centralized system with a large core of workers, involuntary and unpaid enslaved laborers, and a management desire to document productivity and prevent inefficiency or malingering. While the practice of slavery has been abolished, the ethos that a manager has the right to track workers’ productivity at all times continues.
Unintended Consequences of Productivity and Surveillance Tools
In today’s workplace, big data and artificial intelligence (AI) technologies are used to surveil workers in order to quantify efficiency and incentivize productivity. However, these tools can have the undesirable consequence of limiting productivity. One reason is the psychological effect on the worker. Since everything is documented, employees may believe it is not acceptable to go above and beyond their assigned tasks because they don’t want to be seen doing something outside the purview of their job. Additionally, productivity apps and surveillance tools can stymie creativity. When corporations allow workers to be creative, workers are very good at figuring out innovations, streamlining processes, and improving efficiency and safety.
But when you have over-surveillance, when you have over-quantification of productivity such that workers are petrified of making a mistake and petrified of not meeting whatever hours have been set for them, then you are impeding on creativity. And that is going to be a net loss for that organization because you are going to have less innovation. You're just going to have rote workers.
The Reach of Workplace Monitoring and Automated Hiring
The data-driven workplace goes beyond quantifying individual tasks. Workplace wellness programs appraise employees’ health; personality job tests calibrate individuals’ mental states; and monitoring of social media and surveillance of the workplace measure social behavior.
In hiring, employers use algorithms to review resumes, often with unintended discrimination coded in. AI is also used to conduct video interviews with job candidates, evaluating their movements, eye contact, and voice.
An example Professor Ajunwa discussed was HireVue, a company that advertises an automated hiring platform with structured video interviews, skill assessments, and job search assistance that it claims is “data-driven” and “science-backed.” Professor Ajunwa said, “HireVue was claiming to do emotion analysis on the candidates. It was grading the interviews for trustworthiness, confidence, and veracity.” She countered these claims with the research of psychologists and social psychologists: “Human emotion can be expressed very differently depending on culture.” An American looking happy is very different from a Russian looking happy: the American smiles widely, which may appear duplicitous to a Russian, while the American may assume the Russian is unhappy because the Russian is not smiling broadly. “The science is actually not there to firmly say that any AI system can have access to the universal expression of emotion.”
The Ethics of AI Tools in the Workplace
“It’s impossible to hold back the tide of technological progress,” said Professor Ajunwa when discussing the negative consequences of the automation and surveillance tools currently used in the workplace. Technology will continue to progress, and companies will capitalize on it. However, she said, “I think we owe an ethical responsibility to think about how a product will affect society.”
Professor Ajunwa advocates for including “society in the loop” when developing new products. Considering the impact on society as a whole means exploring how a new tool could be used for nefarious purposes even though it is being developed for good, and thinking about safeguards that would prevent such unintended uses. Developers should also consider lobbying for laws to govern the use of these technologies. Professor Ajunwa shared that New York City was one of the first jurisdictions to set standards for automated hiring, and noted that some corporations supported the move.
She posed an important question to ask when developing AI technologies: “Is this an appropriate job for an AI technology, or is this a job that is still more appropriate for a human being to handle?” Consider the premise of the product or tool. In the HireVue example, the premise that AI technology can assess human emotion was flawed.
Audits as a Critical Feedback Loop
“I don’t think you can claim to have responsible AI if you are not looking under the hood,” said Professor Ajunwa when asked about audits. She went on to say that audits have to be continual and have to take into account changing circumstances. She believes internal audits are good while a product is in development, and external audits can help show how a product is performing “in the wild.”
She stressed that audits are very important, particularly for getting past automation bias, the tendency to trust an automated system’s output simply because it is automated. Professor Ajunwa shared a story from Amazon’s project to create an automated hiring system, which was secretly disbanded shortly after deployment because the system turned out to be biased against women. That, of course, was not the goal; Amazon was trying to diversify its workforce by creating the system. The premise was good. The problem was the training data: the system took the company’s top performers as training data, but in a corporation whose demographics skewed male, the automated hiring system learned that male candidates were preferable.
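To make that mechanism concrete, here is a deliberately simplified sketch in Python, not Amazon’s actual system, of how a resume scorer trained on a skewed set of “top performers” can absorb a gendered proxy term. All resumes, terms, and counts below are invented for illustration.

```python
# Toy illustration (NOT Amazon's actual system) of how skewed training data
# can teach a resume scorer to prefer proxies for the majority group.
from collections import Counter

# Invented "top performer" resumes: the historical workforce skews male,
# so a gendered proxy term ("mens_chess_club") dominates the data.
training_resumes = [
    ["python", "mens_chess_club", "sales"],
    ["java", "mens_chess_club", "ops"],
    ["python", "mens_chess_club", "finance"],
    ["sql", "womens_coding_society", "sales"],
]

# "Training": weight each term by how often it appears among top performers.
weights = Counter(term for resume in training_resumes for term in resume)

def score(resume: list[str]) -> int:
    """Sum the learned weights of a candidate's resume terms."""
    return sum(weights[term] for term in resume)

# Two equally qualified candidates differ only in a gendered proxy term...
candidate_m = ["python", "sales", "mens_chess_club"]
candidate_f = ["python", "sales", "womens_coding_society"]
print(score(candidate_m), score(candidate_f))  # 7 vs. 5: the proxy tips it
```

Because the proxy term appears more often among the historical top performers, the otherwise identical candidate who lists it outscores the one who does not. That is exactly the dynamic an audit is meant to surface.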
Professor Ajunwa emphasized that the right audits of automated decision-making systems can serve as a mirror, showing what is going on in the company in terms of existing bias and how that bias has impacted the workforce.
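As one illustration of what “looking under the hood” can involve, here is a minimal sketch of the EEOC’s four-fifths rule, a common first-pass disparate-impact check on hiring outcomes. The group labels and counts are hypothetical and not drawn from Professor Ajunwa’s examples.

```python
# Minimal, hypothetical sketch of a disparate-impact ("four-fifths rule")
# audit on the outcomes of an automated hiring system. Numbers are invented.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants that the system advanced."""
    return selected / applicants

# Invented outcome counts per demographic group: (advanced, total applicants)
outcomes = {
    "group_a": (48, 120),  # 40% selection rate
    "group_b": (18, 90),   # 20% selection rate
}

rates = {g: selection_rate(s, n) for g, (s, n) in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    # The four-fifths rule flags a group whose selection rate falls below
    # 80% of the most-favored group's rate as evidence of adverse impact.
    ratio = rate / highest
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.0%}, ratio to top group={ratio:.2f} -> {flag}")
```

A real audit would go much further, but even this simple ratio, computed continually as Professor Ajunwa recommends, can catch the kind of skew the Amazon system exhibited.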
Working with Policymakers on Responsible AI
Professor Ajunwa is a Founding Board Member of the Labor Tech Research Network, an international group of scholars committed to researching the ethics of AI used in the workplace and for labor. She has also served as a board member for the Institute for Africa Development (IAD) and for the Cornell Prison Education Program (CPEP). Additionally, she has testified before the U.S. Congress (Committee on Education and Labor) and has been invited to speak with governmental agencies such as the Consumer Financial Protection Bureau (CFPB) and the Equal Employment Opportunity Commission (EEOC).
Read More:
- “The Auditing Imperative for Automated Hiring” by Ifeoma Ajunwa (Harvard Journal of Law & Technology, 2021)
- “Race, Labor, and the Future of Work” by Ifeoma Ajunwa (Oxford Handbook of Race and Law, 2020)
- “The Paradox of Automation as Anti-Bias Intervention” by Ifeoma Ajunwa (Cardozo Law Review, 2020)
- “Protecting Workers' Civil Rights in the Digital Age” by Ifeoma Ajunwa (North Carolina Journal of Law & Technology, 2020)
- “Platforms at Work: Automated Hiring Platforms and Other New Intermediaries in the Organization of the Workplace” by Ifeoma Ajunwa and Daniel Greene (Work and Labor in the Digital Age, 2019)
- “Algorithms at Work: Productivity Monitoring Applications and Wearable Technology as the New Data-Centric Research Agenda for Employment and Labor Law” by Ifeoma Ajunwa (St. Louis University Law Journal, 2018)
- “Age Discrimination by Platforms” by Ifeoma Ajunwa (Berkeley Journal of Employment & Labor Law, 2019)
- “Limitless Worker Surveillance” by Ifeoma Ajunwa, Kate Crawford, and Jason Schultz (California Law Review, 2017)
About Ifeoma Ajunwa
Ifeoma Ajunwa is the AI.Humanity Professor of Law and Ethics and the Founding Director of the AI and the Law Program at Emory Law. Starting January 2024, she will also serve as the Associate Dean for Projects and Partnerships. Additionally, Professor Ajunwa has been a Faculty Associate at the Berkman Klein Center at Harvard University since 2017. Her research interests lie at the intersection of law and technology, with a particular focus on the ethical governance of workplace technologies and on diversity and inclusion in the labor market and the workplace.