Recent Papers from TAP Scholars Delve into AI’s Impact on Tech-policy Issues
Publication Date: January 20, 2023
TAP scholars examine the impact of artificial intelligence (AI) technologies on gender equity, algorithmic decision-making, cost and rating collusion, free speech, and trade secrecy. Below are a few of the academic papers that TAP scholars have written recently on technology policy issues relating to artificial intelligence.
AI’s Impact on Gender Equity
The Gender Panopticon: Artificial Intelligence, Gender, and Design Justice
By Jessica Jung and Sonia Katyal
UCLA Law Review, Vol. 68, pp. 692-785, 2021
Artificial intelligence (AI) surveillance systems often rely on binary male/female gender classifications, failing to recognize the complexity of LGBTQ+ identity formation. Researchers developing automated biometric gender recognition make harmful assumptions: that gender is binary, limited to male or female; that gender is immutable; and that gender can be identified from physical characteristics. As a result, nonbinary persons are almost always misgendered. Law and technology design should support gender self-determination.
AI and Algorithms
Seeing Like an Algorithmic Error: What Are Algorithmic Mistakes, Why Do They Matter, How Might They Be Public Problems?
By Mike Ananny
Yale Journal of Law & Technology, Vol. 24, pp. 1-21, 2022
The errors made by machine learning-based systems such as remote proctoring tools can reveal deeper economic and policy issues, such as bias against students of color or low-income students. Algorithmic errors are made, not found; they are a product of people, perspectives, experiences, and assumptions. Some algorithmic errors make good public problems, revealing systemic failures and how they can be remedied; policymakers would benefit from classifying errors more precisely.
Algorithms and Decision-Making in the Public Sector
By Kyla Chasalow, Karen Levy, and Sarah Riley
Annual Review of Law and Social Science, Vol. 17, pp. 319-334, 2021
This article offers a road map for studying the algorithmic systems used by local governments in the United States, including issues relating to procurement, bias, transparency, and regulation. "Algorithm" here refers to technologies that use machine learning or programmed rules to inform or execute actions. Local governments increasingly use such systems to make decisions relating to criminal justice, benefits, and education. The design of algorithms used by government should support accountability.
Understanding AI Collusion and Compliance
By Justin Johnson and Daniel Sokol
Chapter in The Cambridge Handbook of Compliance, D. Daniel Sokol & Benjamin van Rooij, eds., Cambridge University Press, 2021
Artificial intelligence (AI) allows firms to adopt new types of anti-competitive behavior, but it may also aid in detecting such behavior. AI collusion could include non-price elements, such as product reviews and ratings. Collusion is said to occur when prices are higher than they would be if the players interacted with one another only in the short run rather than on a long-run basis. Firms and regulators should consider the possibility of different types of AI collusion: algorithms directed to maximize profits might learn to collude without the direct involvement of humans, and humans could intentionally design algorithms to collude.
AI-Generated Speech
The First Amendment Does Not Protect Replicants
By Lawrence Lessig
Chapter in Social Media, Freedom of Speech, and the Future of our Democracy, Lee Bollinger and Geoffrey Stone, eds., Oxford University Press, 2022
Often, the Constitution protects speech from censorship even if the speaker is not human. Nonetheless, AI-generated political speech could harm democracy, and neither the framers of the Constitution nor we today fully understand the ramifications of unchecked machine speech. While courts should generally follow settled constitutional doctrine, they should make exceptions when technology has fundamentally changed the world. AI-based content generators need not be banned, but they are not entitled to full First Amendment protection.
AI Ethics
Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence
By Kate Crawford
Yale University Press, 2021
Artificial intelligence (AI) relies on natural resources, low-cost labor, and data. The production of AI technology harms the environment, and AI systems depend on low-wage workers. AI ethics should therefore consider the labor conditions of miners, contractors, and crowdworkers; the experiences of those harmed by AI; and the systems' carbon footprint.
Read the related TAP blog: “Kate Crawford's New Book Examines the Power, Politics, and Planetary Costs of AI.”
AI and Trade Secrecy
From Trade Secrecy to Seclusion
By Charles Tait Graves and Sonia Katyal
Georgetown Law Journal, Vol. 109, Issue 6, pp. 1337-1420, 2021
Traditionally, trade secret law protected innovations from misappropriation by departing employees. Now, however, trade secret claims are often used to conceal information of public concern. In particular, the use of machine learning algorithms to automate governmental decision-making threatens constitutional due process rights and shields decisionmakers from accountability for poor results. Courts should reject nontraditional trade secret claims. Legislators should protect public access to certain records.
Read the related TAP blog: “Sonia Katyal and Charles Graves Explore the Use of Trade Secrecy to Conceal Algorithmic Decisionmaking.”
Special Note: “Graves and Katyal win for their article ‘From Trade Secrecy to Seclusion’” – “Providing an insightful and alarming analysis on the evolving use of trade secrecy laws to conceal vital information from the public, Charles Graves and Sonia Katyal have been selected as the 2022 Law Science and Innovation/Intellectual Property Program Prize Contest recipients by the Center for Law Science and Innovation (LSI) and the Intellectual Property (IP) Law program at the Sandra Day O’Connor College of Law.”
Trademark Search, Artificial Intelligence and the Role of the Private Sector
By Sonia Katyal and Aniket Kesari
Berkeley Technology Law Journal, Vol. 35, pp. 501-586, 2021
Worldwide, trademark offices and private firms use artificial intelligence (AI)-based systems to identify distinct trademarks. Observers expect AI to streamline the trademark application process, improving trademark quality but also making marks harder to obtain. AI could also allow firms to calculate the litigation risk of different trademarks, detect trademark infringement, and design subtle counterfeits to deceive consumers. Trademark registration should be treated as an adversarial machine learning problem because, over time, AI search tools and the USPTO will continually adapt to one another's decisions. AI will transform trademark business and legal processes.
To read more from TAP scholars exploring AI and technology policy, peruse TAP's issue-focused page on artificial intelligence.