Daron Acemoglu: The Direction of the Future of AI Is in Our Hands

By TAP Staff Blogger

Posted on July 19, 2021


Current AI research is too narrowly focused on making advances in a limited set of domains and pays insufficient attention to its disruptive effects on the very fabric of society.

AI can be used to increase human productivity, create jobs and shared prosperity, and protect and bolster democratic freedoms—but only if we modify our approach.
- MIT economics professor Daron Acemoglu

MIT economics professor Daron Acemoglu is a leading thinker on the labor market implications of artificial intelligence (AI), robotics, automation, and new technologies.

In an article Professor Acemoglu recently wrote for Boston Review’s forum, "AI’s Future Doesn’t Have to Be Dystopian," he stresses that if AI technology continues to develop on its current path, it is likely to cause social upheaval in two areas: by prioritizing automation over human productivity, AI could damage the future of jobs; and AI may undermine democracy and individual freedoms. He also breaks down how essential the interconnected elements of corporate funding, regulation, societal expectations, and functioning democratic institutions are to redirecting AI research.

Professor Acemoglu says, “In order to redirect AI research toward a more productive path, we need to look at AI funding and regulation, the norms and priorities of AI researchers, and the societal oversight guiding these technologies and their applications.”

Below are select excerpts from Professor Acemoglu’s opening article for Boston Review’s forum, “AI’s Future Doesn’t Have to Be Dystopian.”

The World Automation Is Making

Automation acted as the handmaiden of inequality. New technologies primarily automated the more routine tasks in clerical occupations and on factory floors. This meant that the demand for, and wages of, workers specializing in blue-collar jobs and some clerical functions declined. Meanwhile, professionals in managerial, engineering, finance, consulting, and design occupations flourished—both because they were essential to the success of new technologies and because they benefited from the automation of tasks that complemented their own work. As automation gathered pace, wage gaps between the top and the bottom of the income distribution magnified.

The causes of this broad pattern—more automation and less effort directed toward increasing worker productivity—are not well understood. To be sure, much of this predates AI. The rapid automation of routine jobs started with applications of computers, databases, and electronic communication in clerical jobs and with numerical control in manufacturing, and it accelerated with the spread of industrial robots. With breakthroughs in digital technologies, automation may have become technologically easier. However, equally (if not more) important are changes in the institutional and policy environment. Government funding for research—especially the type of blue-sky research leading to the creation of new tasks—dried up. Labor market institutions that pushed for good jobs weakened. A handful of companies with business models focused on automation came to dominate the economy. And government tax policy started favoring capital and automation. Whatever the exact mechanisms, technology became less favorable to labor and more focused on automation.

The State of Democracy and Liberty

Though we are still very much in the early stages of the digital remaking of our politics and society, we can already see some of the consequences. AI-powered social media, including Facebook and Twitter, have already completely transformed political communication and debate. AI has enabled these platforms to target users with individualized messages and advertising. Even more ominously, social media has facilitated the spread of disinformation—contributing to polarization, lack of trust in institutions, and political rancor. The Cambridge Analytica scandal illustrates the dangers. The company acquired the private information of about 50 million individuals from data shared by around 270,000 Facebook users. It then used these data to design personalized and targeted political advertising in the Brexit referendum and the 2016 U.S. presidential election. Many more companies are now engaged in similar activities, with more sophisticated AI tools. Moreover, recent research suggests that standard algorithms used by social media sites such as Facebook reduce users’ exposure to posts from different points of view, further contributing to the polarization of the U.S. public.

With AI-powered technologies already able to collect information about individual behavior, track communications, and recognize faces and voices, it is not far-fetched to imagine that many governments will be better positioned to control dissent and discourage opposition. But the effects of these technologies may well go beyond silencing governments’ most vocal critics. With the knowledge that such technologies are monitoring their every behavior, individuals will be discouraged from voicing criticism and may gradually reduce their participation in civic organizations and political activity. And with the increasing use of AI in military technologies, governments may be further empowered to act (even more) despotically toward their own citizens—as well as more aggressively toward external foes.

Individual dissent is the mainstay of democracy and social liberty, so these potential developments and uses of AI technology should alarm us all.

The AI Road Not Taken

Even though the majority of AI research has been targeted toward automation in the production domain, there are plenty of new pastures where AI could complement humans. It can increase human productivity most powerfully by creating new tasks and activities for workers.

Let me give a few examples. The first is education, an area where AI has penetrated surprisingly little thus far. Current developments, such as they are, go in the direction of automating teachers—for example, by implementing automated grading or online resources to replace core teaching tasks. But AI could also revolutionize education by empowering teachers to adapt their material to the needs and attitudes of diverse students in real time. We already know that what works for one individual in the classroom may not work for another; different students find different elements of learning challenging. AI in the classroom can make teaching more adaptive and student-centered, generate distinct new teaching tasks, and, in the process, increase the productivity of—and the demand for—teachers.

The situation is very similar in health care, although this field has already witnessed significant AI investment. Up to this point, however, there have been few attempts to use AI to enable nurses, technicians, and doctors to provide new, real-time, adaptive services to patients. Similarly, AI in the entertainment sector can go a long way toward creating new, productive tasks for workers. Intelligent systems can greatly facilitate human learning and training in most occupations and fields by making adaptive technical and contextual information available on demand. Finally, AI can be combined with augmented and virtual reality to provide new productive opportunities to workers in blue-collar and technical occupations. For example, it can enable them to achieve a higher degree of precision so that they can collaborate with robotics technology and perform integrated design tasks.

How to Redirect AI

The answer, I believe, lies in developing a three-pronged approach: government involvement, norms shifting, and democratic oversight.

First, government policy, funding, and leadership are critical. To begin with, we must remove policy distortions that encourage excessive automation and generate an inflated demand for surveillance technologies. Governments are the most important buyers of AI-based surveillance technologies. Even if it will be difficult to convince many security services to give up on these technologies, democratic oversight can force them to do so. As I already noted, government policy is also fueling the adoption and development of new automation technologies. For example, the U.S. tax code imposes tax rates around 25 percent on labor but less than 5 percent on equipment and software, effectively subsidizing corporations to install machinery and use software to automate work. Removing these distortionary incentives would go some way toward refocusing technological change away from automation. But it won’t be enough. We need a more active government role to support and coordinate research efforts toward the types of technologies that are most socially beneficial and that are most likely to be undersupplied by the market.

Second, we must pay attention to norms. In the same way that millions of employees demand that their companies reduce their carbon footprint—or that many nuclear physicists would not be willing to work on developing nuclear weapons—AI researchers should become more aware of, more sensitive to, and more vocal about the social consequences of their actions. But the onus is not just on them. We all need to identify and agree on what types of AI applications contribute to our social ills. A clear consensus on these questions may then trigger self-reinforcing changes in norms as AI researchers and firms feel the social pressure from their families, friends, and society at large.

Third, all of this needs to be embedded in democratic governance. It is easier for the wrong path to persist when decisions are made without transparency and by a small group of companies, leaders, and researchers not held accountable to society. Democratic input and discourse are vital for breaking that cycle.

Concluding Thoughts

The type of transformation I'm calling for would be difficult at the best of times. But several factors are complicating the situation even further.

For one thing, democratic oversight and changes in societal norms are key for turning around the direction of AI research. But as AI technologies and other social trends weaken democracy, we may find ourselves trapped in a vicious circle. We need a rejuvenation of democracy to get out of our current predicament, but our democracy and tradition of civic action are already impaired and wounded. Another important factor, as I have already mentioned, is that the current pandemic may have significantly accelerated the trend toward greater automation and distrust in democratic institutions.

Finally, and perhaps most important, the international dimension deepens the challenge. … Any redirection of AI therefore needs to be founded on at least a modicum of international coordination. Unfortunately, the weakening of democratic governance has made international cooperation harder and international organizations even more toothless than they were before.

None of this detracts from the main message of this essay: the direction of future AI and the future health of our economy and democracy are in our hands. We can and must act. But it would be naïve to underestimate the enormous challenges we face.

Boston Review’s forum, “AI’s Future Doesn’t Have to Be Dystopian”
Note: This forum is featured in Boston Review's new book, Redesigning AI.

  • Read the full opening article written by Professor Daron Acemoglu.

  • Read Erik Brynjolfsson's response. "When it comes to AI's effect on the workforce, the real challenge is wages, not jobs." Professor Erik Brynjolfsson is Director of the Stanford Digital Economy Lab at the Institute for Human-Centered AI.

  • Read Kate Crawford's response. "The automation debate swings between dystopian and utopian visions of the future. Less attention is paid to the current experiences of AI-modulated workplaces, particularly for those in low-wage work." Professor Kate Crawford is a Research Professor of Communication and Science and Technology Studies at USC’s Annenberg School for Communication and Journalism and a Senior Principal Researcher at Microsoft Research in New York.

Daron Acemoglu is MIT’s Institute Professor in the Department of Economics. Professor Acemoglu is a leading thinker on the labor market implications of artificial intelligence, robotics, automation, and new technologies. His innovative work challenges the way people think about how these technologies intersect with the world of work. Professor Acemoglu’s recent research focuses on the political, economic and social causes of differences in economic development across societies; the factors affecting the institutional and political evolution of nations; and how technology impacts growth and distribution of resources and is itself determined by economic and social incentives.

