Mark Lemley Proposes We Embrace the Ineffable Nature of AI

By TAP Staff Blogger

Posted on March 21, 2019



Real tweets, believed to be fake, produced genuinely fake news, believed to be real. The serpent of fakery, it seemed, had eaten its tail. At the center of this odd story was the increasingly unshakable suspicion that virtually everywhere we look humanity is besieged by undercover bots.
   - from “You Might Be a Robot” by Mark Lemley and Bryan Casey, both of Stanford Law School

 

Stanford law professors Mark Lemley and Bryan Casey have outlined a conundrum: how can policymakers regulate something that isn’t easily defined?

 

As robots and artificial intelligence (AI) increase their influence over society, policymakers are increasingly regulating them. But to regulate these technologies, we first need to know what they are. And here we come to a problem. No one has been able to offer a decent definition of robots and AI — not even experts. What’s more, technological advances make it harder and harder each day to tell people from robots and robots from “dumb” machines.

 

A recent Bloomberg article examines how laws are beginning to confuse people with machines. From “You Might Be a Robot. This Is Not a Joke.”:

 

A legal definition of artificial intelligence that perfectly captured the AI field as it exists today would probably be rendered useless by the next technological advance. If laws defined AI as it existed a decade ago, they’d probably have failed to cover today’s neural networks, which are used to teach machines to recognize speech and images, to identify candidate pharmaceuticals, and to play games like Go.

 

In their new article, “You Might Be a Robot,” Professors Lemley and Casey write: “Rather than trying in vain to find the perfect definition, we instead argue that policymakers should do as the great computer scientist, Alan Turing, did when confronted with the challenge of defining robots: embrace their ineffable nature.”

 

Below are a few excerpts from “You Might Be a Robot” by Mark Lemley and Bryan Casey.

 

AI and Analogous Issues

 

As robots and artificial intelligence (AI) come to play greater roles in all areas of life—from driving, to weather forecasting, to policing—analogous issues have begun cropping up across a staggeringly diverse array of contexts. In recent years, we’ve seen Google ask that their robots be regulated as if they were humans. We’ve seen “pseudo-AI” companies ask that their human workers be regulated as bots. We’ve seen countries grant robots legal rights (Saudi Arabia, for example, granted citizenship to a robot in 2017). What’s more, we’ve not only seen bots pretending to be human—the concern that prompted the California law—but an increasing number of humans pretending to be bots. One delightful example comes from Ford. In 2017, the automaker resorted to dressing its human drivers as car seats to run “driverless” vehicle experiments, thanks to state laws which forbade operating a car without a human driver at the wheel. But beyond this somewhat cartoonish example lie many troubling ones. In fact, a host of emerging technologies like “DeepFakes,” “Lyrebird,” and “Duplex” make it easier to realistically pretend to be something you’re not, without having to resort to dressing as a car seat.

 

Embrace the Ineffable Nature of Robots

 

While many—including ourselves—have written of the policy challenges posed by these emerging technologies, our focus is different. We ask not “What should be done?” but “What should it be done to?” The law will regulate robots, human enhancement technologies, and many things in between. Indeed, it already does. But the blurring of the lines between machines, robots, and humans means that regulations specifically targeting robots need to be pretty clear about exactly who or what they’re attempting to regulate. So too, for that matter, do regulations targeting humans but not robots.

 

Simply defining “robot” may seem like an obvious place to start. But as California’s misbegotten “bot” legislation and Ford’s costumed car seat indicate, crafting a one-size-fits-all definition can be surprisingly hard. Indeed, our central claim is that it can’t be done, at least not well. The overlap between people, algorithms, computers, robots, and ordinary machines is sufficiently great that there is no good legal definition of a robot. As the great computer scientist Alan Turing observed almost a century ago, there’s something exceptional about robots and AI that makes them exceptionally difficult to define. And, in the end, it might be impossible to come up with a satisfying definition that regulates only the robots or humans we really want to. This is particularly true because the nature of robots is changing fast, and legal definitions set with today’s technology in mind will rapidly become obsolete.

 

If we need to regulate robots but can’t explicitly define them, what do we do? …

 

In this Article, we argue that a better approach is to embrace the ineffable nature of robots and adapt our legal tools accordingly. We may not be able to successfully define robots ex ante. But as with obscenity, quite often we will know them when we see them. In other words, a common law, case-by-case approach may provide a promising means of successfully navigating the definitional issues presented by robots—one that builds and adapts its definitions inductively over time rather than trying to legislate them up front.

 

Inductive definition has significant implications for how we regulate. First, we should avoid attempts to explicitly define robots in statutes and regulations whenever possible. Society is better served by regulating acts rather than entities. Some behavior may be more common among robots than humans. But it is the behavior and its consequences that we will normally care about, not who (or what) engaged in it. Put another way, given the definitional challenges, the law is better off regulating verbs, not nouns. Second, when we do need to tailor our laws to specific entities, courts are better than legislatures at these sorts of accretive, bottom-up approaches. As such, we should rely on the common law to the extent we can, rather than rushing in with new regulations. Third, if the common law proves insufficient, we should prefer regulatory rulemaking to legislation. Regulation can more easily incorporate evidence and diverse perspectives, and it’s also easier to change when we (inevitably) screw it up. Finally, if we do need legislation specific to bots, it should be tailored as narrowly as possible and should include safeguards that allow us to revisit definitions as the technology evolves.

 

“You Might Be a Robot” Overview

 

In Part I, we discuss the origins and growth of robots, the blurring of lines between machine and human behavior, and the human impacts that robots are beginning to produce. In Part II, we discuss efforts to define robots and AI in legislation, regulation, and academic discourse, and argue that those efforts are doomed to fail. Finally, in Part III, we offer suggestions for how to regulate robotic behavior even when we don’t really know what a robot is.

 

From the Conclusion

 

As the human impacts of robots and AI increase, so too will our efforts to regulate them. To regulate robots, we’ll first need to establish what one is. As it turns out, this is not a straightforward task. As we’ve seen, many of our current attempts to define robots have failed miserably. Indeed, if you’re reading this, you’re (probably) not a robot, but the law might already treat you as one. The problem, however, isn’t simply a matter of failing to hit on the right definition. Rather, for a variety of reasons, there may be no right definition of robots.

 

… Want to regulate robots? Try instead regulating worrisome behavior regardless of who or what engages in it. Doing so avoids definitional traps and sharpens our regulatory focus—thereby making it less likely that the law will be easy to game and less likely that it will inadvertently interfere with innovation.

 

Read the full article: “You Might Be a Robot” (by Mark Lemley and Bryan Casey, Cornell Law Review, 2019).

 

