FACT SHEET

October 15, 2019


Artificial Intelligence

Artificial intelligence (AI) refers to technologies that perform learning and reasoning in ways that simulate human cognitive abilities. Its development is at an inflection point: the vast amount of data available for training, combined with the cloud’s powerful yet affordable computing power and storage, has enabled great advances in innovative algorithms and machine learning over the past decade. 

The promise of AI is that the knowledge gained from applying analytics to the wealth of data available today will enhance decision-making, leading to better outcomes. The intense global interest in AI is due to its potential to boost economic growth. Recent research by McKinsey estimated that AI could add an additional $13 trillion to the world economy by 2030, realized primarily through innovation in products and services and through labor automation. Other factors, such as a country’s pace of AI adoption, global connectedness, labor-market structure, and industry structure, will also shape the size of the impact. 

Adoption of AI technologies available today is transforming society and changing processes in business and government. In the private sector, AI systems can diagnose disease, improve the accuracy of home price predictions, provide simple legal services, run self-driving cars, and manage therapeutic robots in the home. In the public sector, AI can analyze surveillance images, control autonomous weapons, or guide the sentencing of criminal defendants. As with any technology, AI poses challenges that need to be addressed by policymakers and all stakeholders involved in its development and deployment.

Overview

The following are select issues that often arise in discussions of artificial intelligence:
  • What engineering practices, governance processes, principles, guidance, laws, regulations and policies will promote the development and deployment of responsible AI technologies without stifling innovation?
 
  • AI will be deployed in systems that make consequential decisions, including decisions about access to credit, education, and jobs, and decisions in criminal sentencing. How can policymakers work together with stakeholders to raise awareness of the potential for bias against disadvantaged groups and the need for mitigating solutions? What are some of the best practices that have been implemented in different sectors? What harms are already addressed by existing regulations, and what regulations are needed to address new harms?

  • Can transparency, that is, disclosure of the algorithms, rules, and data sets used by AI systems, address concerns about privacy or fairness? If so, how should such disclosures be designed?

  • AI systems collect and use large amounts of consumer data, whether they are controlled by the public sector (such as smart cities) or the private sector (such as medical robots in the home). How are concerns about privacy and cybersecurity being addressed? Will AI further exacerbate the dominance of a few firms, especially if large firms collect the most crucial training data? How will AI change the business strategies of firms that supply AI and firms that adopt it, and how will it create new business models and markets?

  • How rapidly will the use of AI spread throughout the economy? How does the pace of change affect productivity growth and society’s ability to adapt to AI-related issues, e.g., retraining? What data is needed to inform the development of policy frameworks to mitigate these issues?

  • How will AI affect the future of work and the future of jobs, including traditional white-collar jobs such as those in law, medicine, and economics? Will AI displace human workers on a large scale, leading to widespread unemployment? Relatedly, should policymakers adopt a “universal basic income,” expand safety nets, or otherwise adapt tax and labor policies to ameliorate the effects on displaced workers?

  • AI systems and robots can create content such as news articles and videos. How will courts apply First Amendment rights of free speech and copyright laws to AI authors?

  • AI systems can create recordings of someone doing or saying something she did not really do or say. These “deepfakes” can be used for nefarious purposes such as manipulating elections or blackmail. How can policymakers and other stakeholders work together to address the issues raised by deepfakes?

  • Does sufficiently advanced AI pose a threat to mankind? Do military AI and autonomous weapons pose a particular danger?

  • How can nations address AI’s potential for use in new forms of warfare (“hybrid warfare”), such as cyberattacks on banks and utilities and divisive “fake news” campaigns?

TAP Academics researching artificial intelligence include:
  • Daron Acemoglu of the Massachusetts Institute of Technology studies the effects of AI on the future of work.

  • Erik Brynjolfsson of the Massachusetts Institute of Technology writes about AI’s effects on productivity.    
 

The first-order effect [of AI] is tremendous growth in the economic pie, better health, ability to solve so many of our societal problems. If we handle this right, the next 10 years, the next 20 years, should be, could be, the best couple of decades that humanity has ever seen. . . . We need to be proactive about thinking about how we make this shared prosperity. . . . The challenge isn’t so much massive job loss, it’s more a matter of poor-quality jobs and uneven distribution. Quoted in The Mercury News, 3/18/2019.

 
  • M. Ryan Calo of the University of Washington has written a primer on AI public policy issues and writes extensively about the policy and legal implications of robotics.

  • Kate Crawford of Microsoft Research and the AI Now Institute at New York University writes about AI and political values. 
 

The use of AI systems for the classification, detection, and prediction of race and gender is in urgent need of re-evaluation. Quoted in The Guardian, 4/16/2019.


  • Edward Felten of Princeton University considers the fairness of AI-based systems.

  • Joshua Gans of the University of Toronto Rotman School of Management studies the substitution of AI systems for human judgment.

  • Mary L. Gray of Indiana University at Bloomington studies the conditions of human workers (“ghost workers”) whose judgment supports AI-based services.

  • Margot Kaminski of the University of Colorado considers how constitutional rights of privacy and free speech should apply in cases involving AI.

  • Ian Kerr of the University of Ottawa writes about privacy, robots and big data.    

  • Deirdre Mulligan of the University of California, Berkeley writes about marketing and designing AI to protect privacy.

  • Frank Pasquale of the University of Maryland writes about how smart cities, algorithms, and automated processes should be regulated to ensure fairness.    

These sources are a good place to start in understanding artificial intelligence issues. Daron Acemoglu and Pascual Restrepo examine AI displacement of human workers in “Artificial Intelligence, Automation, and Work.” Erik Brynjolfsson, Daniel Rock, and Chad Syverson consider the speed at which the benefits of AI will diffuse throughout the economy in “Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics.” Joshua Gans, Avi Goldfarb, and Ajay Agrawal describe the limits of AI-based predictions, showing that human judgment will still be needed, in “Prediction, Judgment, and Complexity: A Theory of Decision Making and Artificial Intelligence.” Mary L. Gray and Siddharth Suri report findings from a study of the digital workers who support AI systems in Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Margot Kaminski, Toni Massaro, and Helen Norton assess the application of constitutional rights of free speech in “Siri-ously 2.0: What Artificial Intelligence Reveals about the First Amendment.” 

 

Media Contact

For media inquiries on a range of TAP topics, or for assistance facilitating interviews between reporters and academics, contact TAP@techpolicy.com.
