Artificial Intelligence and Machine Learning

Artificial intelligence (AI) refers to technologies that perform learning and reasoning in ways intended to simulate human cognitive abilities. Specific techniques and technologies include machine learning, neural networks, and large language models. Tasks that AI technologies can perform include speech recognition, computer vision, and translation between languages. AI development is at an inflection point: over the past decade, the vast amounts of data available for training, combined with the cloud’s powerful yet affordable computing and storage, have enabled great advances in algorithms and machine learning.

The promise of AI is that knowledge gained from applying analytics to the wealth of data available today will add intelligence to decision-making processes, leading to better outcomes. The intense global interest in AI is due to its potential to boost economic growth: recent research by McKinsey estimated that AI could add $13 trillion to the world economy by 2030, realized primarily through innovation in products and services and through labor automation. Other factors, such as a country’s pace of AI adoption, global connectedness, labor-market structure, and industry structure, may also affect the size of the impact. The World Economic Forum’s Future of Jobs Report 2020 projects that while 85 million jobs may be displaced by 2025 by a shift in the division of labor between humans and machines, 97 million new roles could emerge that are better adapted to the new division of labor between humans, machines, and algorithms.

Adoption of the AI technologies available today is transforming society and changing processes in business and government. In the private sector, AI systems can diagnose disease, improve the accuracy of home price predictions, provide simple legal services, run self-driving cars, and manage therapeutic robots in the home. In the public sector, AI can analyze surveillance images, control autonomous weapons, or guide the sentencing of criminal defendants. As with any technology, AI poses challenges that need to be addressed by policymakers and all stakeholders involved in its development and deployment.

Policy Issues

The following are select issues that often arise in discussions of artificial intelligence:

  • What engineering practices, governance processes, principles, guidance, laws, regulations, and policies will promote the development and deployment of responsible AI technologies without stifling innovation?
     
  • AI will be deployed in systems that make consequential decisions, including decisions about access to credit, education, and jobs, and about criminal sentencing. How can policymakers work with stakeholders to raise awareness of the potential for bias against disadvantaged groups and the need for mitigating solutions? What best practices have been implemented in different sectors? What harms are already addressed by existing regulations, and what regulations are needed to address new harms?
     
  • Can transparency (that is, disclosure of the algorithms, rules, and data sets used by AI systems) address concerns about privacy or fairness? If so, how should disclosures be designed?
     
  • AI systems collect and use large amounts of consumer data, whether they are controlled by the public sector (as with smart cities) or the private sector (as with home healthcare robots). How are concerns about privacy and cybersecurity being addressed? Will AI further entrench the dominance of a few firms, especially if large firms collect the most crucial training data? How will AI change the business strategies of firms that supply it and firms that adopt it, and how will it create new business models and markets?
     
  • How rapidly will the use of AI spread throughout the economy? How does the pace of change affect productivity growth and society’s ability to adapt to AI-related issues, e.g., retraining? What data is needed to inform the development of policy frameworks to mitigate these issues?
     
  • How will AI affect the future of work and the future of jobs, including traditional white-collar jobs such as those in law and medicine? Will AI displace human workers on a large scale, leading to widespread unemployment? Relatedly, should policymakers adopt a “universal basic income,” expand safety nets, or otherwise adapt tax and labor policies to ameliorate effects on displaced workers?
     
  • What will be the societal implications of large language models such as ChatGPT? How will large language models affect the environment (through their energy consumption), the future of work, and the future of education?
     
  • AI systems and robots can create content such as news articles and videos. How will courts apply First Amendment free-speech rights and copyright law to AI authors?
     
  • AI systems can create recordings of someone doing or saying something she did not really do or say. These “deepfakes” can be used for nefarious purposes such as manipulating elections or blackmail. How can policymakers and other stakeholders work together to address the issues deepfakes raise?
     
  • Does sufficiently advanced AI pose a threat to mankind? Do military AI and autonomous weapons pose a particular danger?
     
  • How can nations address AI’s potential for use in new forms of warfare (“hybrid warfare”), such as cyberattacks on banks and utilities and divisive “fake news” campaigns?

TAP scholars researching AI include:

Daron Acemoglu of the Massachusetts Institute of Technology studies the effects of AI on the future of work.

“AI can be used to increase human productivity, create jobs and shared prosperity, and protect and bolster democratic freedoms—but only if we modify our approach.” Quoted in Boston Review, May 20, 2021

Ifeoma Ajunwa of the University of North Carolina School of Law focuses on the ethical governance of workplace technologies and on diversity and inclusion in the labor market.

Erik Brynjolfsson of Stanford writes about AI’s effects on productivity.

“[Stanford’s] Erik Brynjolfsson identified another threat. The world risked being flooded with bot-generated emails, posts and tweets peddling disinformation on a massive scale, and [he] warned there was a need for a control mechanism to separate the true from the false.” Quoted in The Guardian, January 20, 2023

Ryan Calo of the University of Washington has written a primer on AI public policy issues and writes extensively about the policy and legal implications of robotics.

“Citizens and residents, not industry or law enforcement, should decide what sort of roles robots can play in local communities. The alternative is to sleepwalk into science fiction.” Quoted in Forbes, January 24, 2023

Théodore Christakis of the Université Grenoble Alpes researches international security law, international protection of human rights, and artificial intelligence.

Danielle Citron of the University of Virginia writes about sexual privacy; privacy and national security challenges of deep fakes; and the automated administrative state.

“Deepfake videos and audios could undermine the democratic process by tipping an election.” Quoted in Wired, November 16, 2020

Kate Crawford of USC’s Annenberg School for Communication and Microsoft Research writes about the social and political implications of artificial intelligence.

“We've had chatbots like this for many decades. And while they can seem intelligent, if you're conducting a conversation with them, they are in fact just trained on large amounts of text that's drawn primarily from the internet.” Quoted in France 24, June 15, 2022

Joshua Gans of the University of Toronto Rotman School of Management studies the substitution of AI systems for human judgment.

Mary L. Gray of Microsoft Research and Indiana University at Bloomington studies the emerging field of AI and ethics, particularly research methods at the intersections of computer and social sciences.

Margot Kaminski of the University of Colorado examines autonomous systems, including AI, robots, and drones.

Sonia Katyal of Berkeley Law focuses on artificial intelligence and intellectual property, and the intersection between the right to information and human rights.

“The irony is that this [Waymo hiding safety-related information] has happened at the very same time that the opaque nature of algorithmic decision-making, coupled with the new interplay between government agencies and private technologies, has created a crisis regarding access to information by journalists, regulators and others working in the public interest.” Quoted in the Los Angeles Times, January 28, 2022

Ian Kerr of the University of Ottawa wrote about privacy, robots and big data.

Karen Levy of Cornell University analyzes the uses of monitoring for social control in various contexts, and researches how data collection, and the algorithms trained on it, uniquely impact, and are contested by, marginalized populations.

Frank Pasquale of Brooklyn Law School works on algorithmic accountability, bringing the demands of social justice movements to AI law and policy.

Rob Seamans of New York University focuses on the economic consequences of AI, robotics and other advanced technologies.

Evan Selinger of the Rochester Institute of Technology addresses ethical issues concerning technology, including artificial intelligence, science, and the law.

Understanding AI Issues

These sources are a good place to start in understanding artificial intelligence issues. 

Daron Acemoglu and Pascual Restrepo examine AI displacement of human workers in “Artificial Intelligence, Automation, and Work.” 

Ifeoma Ajunwa examines how emerging artificial intelligence systems may worsen labor inequalities in “Race, Labor, and the Future of Work.”

In “The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence,” Erik Brynjolfsson looks at how human-like artificial intelligence could lead to a realignment of economic and political power. 

Mary L. Gray and Siddharth Suri report findings from a study of the digital workers who support AI systems in Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass.

In “GDPR and the Importance of Data to AI Startups,” James Bessen, Stephen Michael Impink, Lydia Reichensperger, and Rob Seamans explore the impact of the EU’s General Data Protection Regulation (GDPR) and data regulation on AI startups.

In "New Laws of Robotics: Defending Human Expertise in the Age of AI,“ Frank Pasquale analyzes the law and policy influencing the adoption of AI in varied professional fields. 

Joshua Gans, Avi Goldfarb, and Ajay Agrawal describe the limits of AI-based predictions, showing that human judgment will still be needed, in “Prediction, Judgment, and Complexity: A Theory of Decision Making and Artificial Intelligence.” 

Ryan Calo and Danielle Citron advocate for federal and state agencies to use automation to enhance fairness and effectiveness, not simply to cut costs, in “The Automated Administrative State: A Crisis of Legitimacy.”

In “The Right to Contest AI,” Margot Kaminski and Jennifer Urban argue for an individual right to contest AI decisions, modeled on due process but adapted for the digital age.

In “Algorithms and Decision-Making in the Public Sector,” Kyla Chasalow, Karen Levy, and Sarah Riley provide a road map for the study of algorithmic systems used by local governments in the United States, including issues relating to procurement, bias, transparency, and regulation.

In “The Gender Panopticon: Artificial Intelligence, Gender, and Design Justice,” Jessica Jung and Sonia Katyal show how AI surveillance systems often rely on binary male/female gender classifications, failing to recognize the complexity of LGBTQ+ identity formation.

Robert Chesney and Danielle Citron explain how “deep-fake” technologies will exacerbate social divisions, manipulate election results, and erode trust in public institutions in “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security.”

In Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Kate Crawford reveals how the global networks underpinning AI technology are damaging the environment, entrenching inequality, and fueling a shift toward undemocratic governance.

Media Contact

For media inquiries on a range of TAP topics, or for assistance facilitating interviews between reporters and academics, contact TAP@techpolicy.com.