Erik Brynjolfsson and Kate Crawford Share Thoughts on President Trump’s “American AI Initiative”

By TAP Staff Blogger

Posted on February 21, 2019


Last week, President Trump signed an Executive Order directing federal agencies to prioritize research and development in artificial intelligence (AI). The American AI Initiative has five “key pillars”:

  • Research and Development – to prioritize AI investments.
  • Infrastructure – to help researchers access federal data, algorithms, and computing resources.
  • Governance – to draft general guidelines for governing AI and ensuring its safe and ethical use.
  • Workforce – to look for ways to continue workers’ education.
  • International Engagement – to collaborate on AI with other countries while not compromising U.S. interests.

MIT’s Erik Brynjolfsson and the AI Now Institute’s Kate Crawford shared their thoughts on the President’s AI initiative in an AP News article. Below are a few excerpts from “Trump Calls for Investment in Artificial Intelligence.”


The White House plan that Trump signed Monday doesn’t include any funding details. The administration says it’s up to Congress to appropriate money. That lack of specifics is troubling to AI experts such as Erik Brynjolfsson, a management professor at the Massachusetts Institute of Technology.


“The good news is America’s research infrastructure in artificial intelligence is leading the world,” Brynjolfsson said. “But other countries are making much more aggressive investments and rapidly closing the gap, especially China.”


Kate Crawford, a co-director of New York University’s AI Now Institute for studying the social implications of artificial intelligence, said the directive takes some steps in the right direction but is too light on details.


“AI policy isn’t an autonomous vehicle,” Crawford said. “You basically need a detailed plan or it’s going to run off the road.”


Crawford said she welcomed the Trump administration’s intention to accelerate research and regulate AI across different industrial sectors. But she said the administration also must ensure that AI’s potential ethical challenges are taken seriously.


Brynjolfsson said it’s important for U.S. policymakers to not only push the AI technology frontier, but also think hard about values and how the technology is implemented.


“China in many ways has very different values than we have in the West about things like surveillance, privacy, democracy, property rights,” he said. “If we want Western values to thrive, we need to play a role in maintaining and even extending the technological strength we’ve long had.”


More Information About AI Scholars Kate Crawford and Erik Brynjolfsson


Kate Crawford
Kate Crawford is a leading researcher, academic, and author who has spent the last decade studying the social implications of data systems, machine learning, and artificial intelligence. She is a Distinguished Research Professor at New York University, a Principal Researcher at Microsoft Research New York, and a Visiting Professor at the MIT Media Lab. Along with Meredith Whittaker, Ms. Crawford is the co-founder and co-director of the AI Now Institute, an interdisciplinary research center dedicated to studying the social impacts of artificial intelligence.


Recent Work from Kate Crawford


Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice
By Kate Crawford, Rashida Richardson, and Jason Schultz
New York University Law Review Online, Forthcoming



Law enforcement agencies are increasingly using algorithmic predictive policing systems to forecast criminal activity and allocate police resources. Yet in numerous jurisdictions, these systems are built on data produced within the context of flawed, racially fraught, and sometimes unlawful practices (‘dirty policing’). This can include systemic data manipulation, falsifying police reports, unlawful use of force, planted evidence, and unconstitutional searches. These policing practices shape the environment and the methodology by which data is created, which leads to inaccuracies, skews, and forms of systemic bias embedded in the data (‘dirty data’). Predictive policing systems informed by such data cannot escape the legacy of the unlawful or biased policing practices that they are built on. Nor do claims by predictive policing vendors that these systems provide greater objectivity, transparency, or accountability hold up. While some systems offer the ability to see the algorithms used and even occasionally access to the data itself, there is no evidence to suggest that vendors independently or adequately assess the impact that unlawful and biased policing practices have on their systems, or otherwise assess how broader societal biases may affect their systems.


In our research, we examine the implications of using dirty data with predictive policing, and look at jurisdictions that (1) have utilized predictive policing systems and (2) have done so while under government commission investigations or federal court monitored settlements, consent decrees, or memoranda of agreement stemming from corrupt, racially biased, or otherwise illegal policing practices. In particular, we examine the link between unlawful and biased police practices and the data used to train or implement these systems across thirteen case studies. We highlight three of these: (1) Chicago, an example of where dirty data was ingested directly into the city’s predictive system; (2) New Orleans, an example where the extensive evidence of dirty policing practices suggests an extremely high risk that dirty data was or will be used in any predictive policing application; and (3) Maricopa County, where, despite extensive evidence of dirty policing practices, a lack of transparency and public accountability surrounding predictive policing inhibits the public from assessing the risks of dirty data within such systems. The implications of these findings have widespread ramifications for predictive policing writ large. Deploying predictive policing systems in jurisdictions with extensive histories of unlawful police practices presents elevated risks that dirty data will lead to flawed, biased, and unlawful predictions, which in turn risk perpetuating additional harm via feedback loops throughout the criminal justice system. Thus, for any jurisdiction where police have been found to engage in such practices, the use of predictive policing in any context must be treated with skepticism, and mechanisms for the public to examine and reject such systems are imperative.


Erik Brynjolfsson
Erik Brynjolfsson’s research examines the effects of information technologies on business strategy, productivity and performance, digital commerce, and intangible assets. He is Director of the MIT Initiative on the Digital Economy, Professor at MIT Sloan School, and Research Associate at NBER.


Recent Work from Erik Brynjolfsson


Machine, Platform, Crowd: Harnessing Our Digital Future
By Erik Brynjolfsson and Andrew McAfee
W.W. Norton & Company, Inc., September 2018



In The Second Machine Age, Andrew McAfee and Erik Brynjolfsson predicted some of the far-reaching effects of digital technologies on our lives and businesses. Now they’ve written a guide to help readers make the most of our collective future. Machine | Platform | Crowd outlines the opportunities and challenges inherent in the science fiction technologies that have come to life in recent years, like self-driving cars and 3D printers, online platforms for renting outfits and scheduling workouts, or crowd-sourced medical research and financial instruments.


From the First Chapter, The Triple Revolution:

Consider these three examples:

  • AlphaGo’s triumph over the best human Go players;
  • the success of new companies like Facebook and Airbnb that have none of the traditional assets of their industries;
  • and GE’s use of an online crowd to help it design and market a product that was well within its expertise.

These examples illustrate three great trends that are reshaping the business world.


The first trend consists of the rapidly increasing and expanding capabilities of machines, as exemplified by AlphaGo’s unexpected emergence as the world’s best Go player.


The second is captured by [strategist Tom] Goodwin’s observations about the recent appearance of large and influential young companies that bear little resemblance to the established incumbents in their industries, yet are deeply disrupting them. These upstarts are platforms, and they are fearsome competitors.


The third trend, epitomized by GE’s unconventional development process for its Opal ice maker, is the emergence of the crowd, our term for the startlingly large amount of human knowledge, expertise, and enthusiasm distributed all over the world and now available, and able to be focused, online.


From the rise of billion-dollar Silicon Valley unicorns to the demise or transformation of Fortune 500 stalwarts, the turbulence and transformation in the economy can seem chaotic and random. But the three lenses of machine, platform, and crowd are based on sound principles of economics and other disciplines. The application of these principles isn’t always easy, but with the right lenses, chaos gives way to order, and complexity becomes simpler. Our goal in this book is to provide these lenses.