Who Should Make the Decisions? Humans or AI?

By TAP Staff Blogger

Posted on August 20, 2020



How does the introduction of the AI affect human effort? When AIs predict well, might humans decrease effort too much (“fall asleep at the wheel”)? When should the AI or the human have the right to make the final decision? Are “better” AIs in a statistical prediction sense necessarily more profitable for an organization?
from “The Allocation of Decision Authority to Human and Artificial Intelligence” by Susan Athey, Kevin Bryan, and Joshua Gans

 

In “The Allocation of Decision Authority to Human and Artificial Intelligence,” Stanford economist Susan Athey and Rotman School of Management economists Kevin Bryan and Joshua Gans analyze how humans and artificial intelligence (AI) can work together effectively in decision making. Their paper focuses specifically on how decision authority should be allocated between a human and an AI.

 

Below are a few excerpts from “The Allocation of Decision Authority to Human and Artificial Intelligence.”

 

Artificial intelligence (AI) adoption is often equated with automation, with machines replacing humans in tasks and decisions. In practice, however, AI often augments human activity. Consider partially self-driving cars with human override; suggested scripts for customer service; and scoring for risk or priority in hiring, audits, judicial sentencing, and fraud detection. Decisions often involve considerations that are difficult to digitize. Prior knowledge can be important for anticipating outcomes in novel or unusual circumstances. In these contexts, the predictions of fully automated AI can be insufficient even when AI reduces the cost of prediction along some margins (Agrawal, Gans, and Goldfarb 2019). This motivates an analysis of precisely how humans and AIs would work together.

 

Cost of Delegating to an AI

 

When the AI has decision rights, the [human] agent is tempted to “fall asleep at the wheel” since the AI frequently makes the choices. Even when the agent has decision rights, if the AI is an attractive “backstop” … then the [human] agent also has reduced incentives for effort.

 

Thus, in determining whether to give the AI decision authority, P [the principal] will weigh the potentially greater reliability of the AI in selecting projects against the difficulty of motivating H [the human agent] to expend more effort to identify projects with nonnegative returns for P [the principal].
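
To make that trade-off concrete, the toy calculation below compares the principal's expected return when the AI holds decision rights against the case where the human decides. It is a rough sketch rather than the formal model in the paper: the payoff values, and the assumption that human effort falls when the AI is in charge, are purely illustrative.

    # Toy illustration of the principal's trade-off (not the formal model in
    # the paper): greater AI reliability versus weaker human effort when the
    # AI holds decision rights. All numbers below are hypothetical.

    def expected_return(ai_reliability, human_effort, ai_decides,
                        good_value=1.0, bad_cost=1.0):
        """Principal's expected return from one project decision.

        ai_reliability: probability the AI learns the optimal action
        human_effort:   probability the human identifies a nonnegative-return project
        ai_decides:     True if the AI holds decision rights
        """
        human_term = human_effort * good_value - (1 - human_effort) * bad_cost
        if ai_decides:
            # When the AI has learned the optimal action it chooses well; otherwise
            # the outcome depends on whatever effort the human still supplies.
            return ai_reliability * good_value + (1 - ai_reliability) * human_term
        return human_term


    # "Falling asleep at the wheel": assume effort drops when the AI decides.
    effort_if_human_decides = 0.8  # hypothetical
    effort_if_ai_decides = 0.4     # hypothetical

    for r in (0.5, 0.7, 0.9):
        ai_payoff = expected_return(r, effort_if_ai_decides, ai_decides=True)
        human_payoff = expected_return(r, effort_if_human_decides, ai_decides=False)
        print(f"AI reliability {r:.1f}: AI decides {ai_payoff:+.2f}, "
              f"human decides {human_payoff:+.2f}")

With these made-up numbers, keeping decision rights with the human is better when the AI is unreliable, and handing decision rights to the AI wins once its reliability is high enough to outweigh the human effort it discourages.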

 

The Trade-off of Human Effort and Decision Alignment from Decision Rights and AI Quality

 

Professors Athey, Bryan, and Gans created a taxonomy of the types of AI an organization might adopt when incorporating artificial intelligence into its decision making. Below are their descriptions of the types of AI they consider:

 
  • Replacement AI. — If a high-performing AI is available ..., then the AI should hold decision rights, and AI training focuses on eventually fully replacing humans.
     
  • Augmentation AI. — If current AI performance is relatively weak …, human agents are sufficiently well aligned with the principal, and human effort is only weakly responsive to changes in AI performance, then human agents retain decision rights, and marginal improvements in AI performance or decreases in AI bias are profit enhancing.
     
  • Unreliable AI. — When human agents are poorly aligned with the principal and potential AI performance is relatively strong, the AI optimally holds final decision rights. However, human effort is still important when the AI does not learn the optimal action, so if human effort is highly responsive to incentives, “unreliable” AI … is optimal as it trades off worse performance when the AI thinks it learns the optimal action against more human effort when it does not.
     
  • Antagonistic AI. — If current AI performance is relatively weak and human agents are sufficiently well aligned with the principal, but human effort strongly responds to changes in AI performance, then humans should retain decision rights. However, unlike with augmentation AI, it is optimal to bias an AI such that the AI action is particularly bad for the agent. When the AI’s choice “antagonizes” human agents, they increase effort to avoid the AI’s recommendation being reported to the principal.
     
This taxonomy leaves many potential details out, but it maps the broad choices for organizations in terms of whether to give an AI or a human decision authority and, in turn, whether to favor a technically superior (i.e., reliable and unbiased) AI or not. This choice will depend on the nature of human reactions to working with the AI as well as what is technically available to the organization.
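
As a rough illustration of how the taxonomy sorts these cases, the sketch below encodes the qualitative conditions above as a simple decision helper. The numeric cutoff, the parameter names, and the exact branching are assumptions made here for illustration; the paper derives the regimes from a formal model rather than from fixed thresholds.

    # Simplified sketch of the taxonomy as a decision helper. The cutoff, the
    # parameter names, and the branching are illustrative assumptions; the paper
    # characterizes these regimes formally rather than with fixed thresholds.

    def suggested_regime(ai_performance, human_alignment, effort_responsiveness):
        """Map the taxonomy's qualitative conditions to a regime label.

        ai_performance:        how reliably the AI learns the optimal action (0 to 1)
        human_alignment:       how closely the agent's interests track the principal's (0 to 1)
        effort_responsiveness: how strongly human effort reacts to AI quality (0 to 1)
        """
        HIGH = 0.7  # hypothetical cutoff for "relatively strong / well aligned / highly responsive"

        if ai_performance >= HIGH and human_alignment >= HIGH:
            return "Replacement AI: the AI holds decision rights; train toward full replacement"
        if ai_performance >= HIGH:
            # Strong AI but poorly aligned human: the AI decides, and deliberate
            # imperfection can preserve human effort if effort responds to incentives.
            if effort_responsiveness >= HIGH:
                return "Unreliable AI: the AI decides, but is kept deliberately imperfect"
            return "AI decision rights, with reliability favored"
        if human_alignment >= HIGH:
            # Weak AI, well-aligned human: the human keeps decision rights.
            if effort_responsiveness < HIGH:
                return "Augmentation AI: the human decides; marginal AI improvements raise profit"
            return "Antagonistic AI: the human decides; the AI is biased against the agent to spur effort"
        return "No clean fit in the taxonomy; weigh the trade-offs directly"


    # Example: a modest AI paired with a well-aligned but effort-sensitive human.
    print(suggested_regime(ai_performance=0.5, human_alignment=0.9, effort_responsiveness=0.9))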
     

Read the full article: “The Allocation of Decision Authority to Human and Artificial Intelligence” by Susan Athey, Kevin Bryan, and Joshua Gans (AEA Papers and Proceedings, Volume 110, May 2020, pp. 80-84).

