Recent Papers from TAP Scholars Examine AI’s Impact on the Future of Work and Privacy

By TAP Staff Blogger

Posted on September 13, 2021



Adoption of artificial intelligence (AI) technologies available today is transforming society and changing processes in business and government. In the private sector, AI systems can diagnose disease, improve the accuracy of home price predictions, provide simple legal services, and run self-driving cars. In the public sector, AI can identify surveillance images, control autonomous weapons, or guide the sentencing of criminal defendants. As with any technology, AI poses challenges that need to be addressed by policy makers and all stakeholders involved in its development and deployment.

 

TAP scholars examine AI’s effects on labor, business, policing, privacy, racial and gender equity, and the economy. Below are a few of the academic papers that TAP scholars have written recently on the technology policy issues relating to artificial intelligence.

 

AI’s Impact on the Future of Work

 

The Future of Work in the Age of AI: Displacement or Risk-Shifting?
By Karen Levy and Pegah Moradi
The Oxford Handbook of Ethics of AI, Markus D. Dubber, Frank Pasquale, and Sunit Das, eds., pp. 27-52, 2020

 

In “The Future of Work in the Age of AI: Displacement or Risk-Shifting?,” law professor Karen Levy and Pegah Moradi, both of Cornell University, examine how artificial intelligence-based systems are altering the conditions and quality of work. The authors point out that employers use AI to shift risks onto low-wage workers, imposing irregular schedules or systems that force workers to maintain a strenuous pace. While some policy proposals seek to address the problem of workers displaced by AI-based systems, policymakers should also consider protections for workers who keep their jobs, such as laws requiring employers to announce schedules in advance.

 

Read the TAP article summary for “The Future of Work in the Age of AI: Displacement or Risk-Shifting?”

 

Two recent papers by University of North Carolina law professor Ifeoma Ajunwa delve into how emerging AI systems may worsen labor inequalities and how automated decision-making systems may facilitate bias.

 

Race, Labor, and the Future of Work
By Ifeoma Ajunwa
Oxford Handbook of Race and Law, Eds. Emily Houh, Khiara Bridges, Devon Carbado, 2020

 

In “Race, Labor, and the Future of Work,” Professor Ajunwa points out that as automation and globalization transform the labor market, the need for labor protections for racial minorities remains a concern. For example, gig work platforms enable tech companies to hire workers anywhere in the world to monitor platform content; however, this could create a global class of marginalized workers with no job security or benefits. Additionally, many Black and Hispanic women hold jobs considered the most likely to be automated; thus, labor automation could create large unemployment disparities. Professor Ajunwa proposes that policymakers develop stronger legal protections to ensure that workers are safeguarded and adequately compensated for their labor.

 

Read the TAP article summary for “Race, Labor, and the Future of Work.”

 

The Paradox of Automation as Anti-Bias Intervention
By Ifeoma Ajunwa
Cardozo Law Review, Vol. 41, No. 5, pp. 1671-1742, 2020

 

“The Paradox of Automation as Anti-Bias Intervention” exposes the potential for discrimination in automated decision-making systems. Replacing human decision-makers with automated systems was expected to eliminate bias; however, in this article, Professor Ajunwa explains how automated systems can sometimes amplify it. She shares a case study of algorithmic systems used in hiring that reveals problematic features at odds with the principle of equal opportunity in employment: automated background checks of social media rest on the unwarranted assumption that an applicant’s private behavior (like swearing) predicts their professional behavior; systems that analyze facial expressions struggle to read the expressions of people with darker skin; and such checks reveal information that employers are not supposed to consider, such as pregnancy status. Professor Ajunwa argues that new legal doctrines are needed to support equal opportunity in employment.

 

Read the TAP article summary for “The Paradox of Automation as Anti-Bias Intervention.”

 

AI and Privacy

 

Protecting Workers' Civil Rights in the Digital Age
By Ifeoma Ajunwa
North Carolina Journal of Law & Technology, Vol. 21, Issue 4, pp. 1-26, 2020

 

In another article, “Protecting Workers' Civil Rights in the Digital Age,” Professor Ajunwa outlines how automated hiring, workplace wellness programs, and electronic workplace surveillance raise concerns about employment discrimination and privacy. For example, automated hiring systems may deliberately or inadvertently enable discrimination based on race, gender, age, or other characteristics in several ways: design features allow “culling” of applicants with certain traits without leaving a record; facially neutral variables can serve as proxies for race or gender; and intellectual property law keeps features of automated hiring systems secret. Professor Ajunwa suggests guidelines for legislators to enact that would prevent automated discrimination and protect workers’ privacy.

 

Read the TAP article summary for “Protecting Workers' Civil Rights in the Digital Age.”

 

Journalism and the Voice Intelligence Industry
By Joseph Turow
Digital Journalism, November 2020 (online)

 

Joseph Turow, Professor of Media Systems & Industries at the Annenberg School for Communication, examines how artificial intelligence systems that analyze human voices have important implications for the creation and marketing of news. In “Journalism and the Voice Intelligence Industry,” Professor Turow surveys a variety of technologies that incorporate AI-based analytics, such as voice-intelligent speakers, vehicle information systems, customer-service voice calls, and interconnected household devices. He explains how call centers use AI-based systems to analyze customers while they talk in order to mollify them or make a sale. He also describes how ad-supported news outlets could use voice analysis to alter the arrangement of news articles, advertising, and discounts for different people to increase their interest in the journalistic brand; stories could even be altered to match an audience member's emotions in real time. Professor Turow cautions that the use of voice intelligence to create and shape more profitable news will raise new conflicts of interest and risks of discrimination.

 

Read the TAP article summary for “Journalism and the Voice Intelligence Industry.”

 

Automation and Accountability

 

The Automated Administrative State: A Crisis of Legitimacy
By M. Ryan Calo and Danielle Citron
Emory Law Journal (forthcoming)

 

Law professors Ryan Calo, University of Washington, and Danielle Citron, University of Virginia, delve into the use of automation and algorithms within federal and state agencies. Examples include determining eligibility for public benefits, evaluating public school teachers, assessing unemployment benefits, and evaluating the risk posed by criminal defendants. In “The Automated Administrative State: A Crisis of Legitimacy,” Professors Calo and Citron show how, in some instances, automated tools result in a loss of due process and accountability, while other agencies effectively incorporate automated systems to meet their delegated responsibilities. The authors discuss how effective automation furthers values such as access, quality, and self-assessment; such systems make the administrative state fairer and more effective, not merely cheaper.

 

Read the TAP article summary for “The Automated Administrative State: A Crisis of Legitimacy.”

 

To read more from TAP scholars on these topics, peruse TAP’s issue-focused pages on artificial intelligence and privacy.