You Might Be a Robot

Innovation and Economic Growth, Privacy and Security, and Artificial Intelligence

Article Snapshot

Author(s)

Bryan Casey and Mark Lemley

Source

Cornell Law Review (2019)

Summary

Policymakers show increasing interest in regulating robots. However, a "robot" is hard to define, and the quickening pace of innovation makes it difficult to apply the plain language of existing laws to new cases.

Policy Relevance

Legislators should regulate actions, not things. Regulators, rather than legislators, should draft definitions.

Main Points

  • A California statute requires online "bots" to disclose their nonhuman status to consumers; the law applies when actions "are not the result of a person," but a "person" includes corporations as well as natural people, and most online activity is the result of such "persons."
     
  • Experts have not agreed on definitions of terms such as a "robot," "artificial intelligence," or "sensory perception," and consensus might not be possible.
     
  • Attempts to define technology terms can fail by defining terms carelessly, too narrowly, or too broadly; one law defines "federal interest computer" so broadly that it includes almost any device connected to the Internet.
     
  • Policymakers also err by applying laws intended for human beings to robots, such as laws meant to ensure that human drivers or pilots take time off to sleep.
     
  • In 2015, Google's efforts to design an entirely self-driving car were complicated by regulations requiring that vehicle devices and controls be located near the "driver."
     
  • Legislators should focus on regulating actions, rather than things; the Better Online Ticket Sales Act of 2016 simply prohibits efforts to evade online security protocols, without trying to define a bot.
     
  • Six factors will help lawmakers decide how to apply laws to purported bots:
     
    • Agenda: Is the bot’s activity in the public interest?
       
    • Automaticity: Is the bot really acting automatically, or are humans involved?
       
    • Autonomy: How much leeway does the bot have in implementing goals?
       
    • Agency: Who is held responsible for the bot’s conduct?
       
    • Ability: Can the bot conform to a rule such as a safety goal?
       
    • Anthropomorphization: Do people act as if the bot is human?
       
  • Regulators rather than legislators should be charged with defining robotics-related terms, because regulators are more likely than legislators to consult experts in crafting rules, and it is easier to fix bad regulation than bad legislation.
