TAP Scholars Discuss Legal and Ethical Issues with Robotics

By TAP Staff Blogger

Posted on April 20, 2018



At last week’s We Robot conference, academics, policy makers, roboticists, economists, ethicists, entrepreneurs, and lawyers gathered to discuss robots and the future of robot law and policy. The increasing sophistication of robots and their widespread deployment everywhere—from the home, to hospitals, to public spaces, and even to the battlefield—disrupts existing legal regimes and requires new thinking on policy issues.

 

During this 7th annual conference, which was held at Stanford University, participants explored topics ranging from the effects of automation and artificial intelligence (AI) on employment; the need to modify tax policies given that robots are not good taxpayers; what machine learning (ML) diagnostics mean for the future of medical service, medical malpractice law, and doctors; whether AI can reduce the psychological burden of social isolation; and the complex interactions between robotic systems and humans in urban settings.

 

Several TAP scholars participated in the 2018 We Robot conference. Below are introductions to the papers they presented.

 

Remedies for Robots
Authors: Mark Lemley, Stanford Law School and Bryan Casey, CodeX, The Stanford Center for Legal Informatics

 

Engineers training an artificially intelligent self-flying drone were perplexed. They were trying to get the drone to stay within a circle and to head towards the center of that circle. Things were going well for a while. The drone received positive reinforcement for successful flights, and it was learning to fly towards the middle of the circle more quickly and accurately. Then, suddenly, things changed. When the drone was near the edge of the circle, it would turn and fly away from the center, leaving the circle.

 

What went wrong? After a long time puzzling over the problem, the designers realized that when the drone left the circle during the test, they shut it down, and someone picked it up and carried it back into the circle to start the test over again. The learning algorithm in the drone had figured out – correctly – that if it was sufficiently far from the center, the easiest way for it to get back to the middle was to leave the circle. From the drone’s perspective, when it left the circle altogether, it was magically teleported back to the middle of the circle. The drone had found a short cut. It had complied with the rules it was given, but it had done so in a way that subverted the trainer’s intent.
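The failure mode is easy to reproduce in miniature. The sketch below is a purely hypothetical toy, not the authors' drone or its training setup: a one-dimensional "circle" of positions with the goal at the center, a cost of one per step, and a reset rule that mimics the human handler by placing an out-of-bounds agent back at the center. Ordinary tabular Q-learning over these dynamics learns exactly the shortcut described above.

```python
# Hypothetical toy model of the drone anecdote (not the authors' system).
# Positions run from -R to R; the goal is the center (0); each move costs 1.
# Crucially, stepping past the boundary triggers the "human reset": the agent
# is placed back at the center, which the learner experiences as a free teleport.

import numpy as np

R = 10                              # radius of the toy "circle"
positions = list(range(-R, R + 1))
actions = [-1, +1]                  # step left or right

def step(pos, action):
    """Environment dynamics as seen by the learning algorithm."""
    nxt = pos + action
    if abs(nxt) > R:                # left the circle: handler intervenes...
        return 0, -1                # ...and puts the drone back at the center
    return nxt, -1                  # ordinary move, cost of 1 per step

# Plain tabular Q-learning over the toy dynamics.
rng = np.random.default_rng(0)
Q = {(p, a): 0.0 for p in positions for a in actions}
alpha, gamma, eps = 0.1, 0.95, 0.1

for _ in range(5000):
    pos = int(rng.integers(-R, R + 1))
    for _ in range(50):
        if pos == 0:
            break                   # reached the goal
        a = (int(rng.choice(actions)) if rng.random() < eps
             else max(actions, key=lambda x: Q[(pos, x)]))
        nxt, r = step(pos, a)
        target = r if nxt == 0 else r + gamma * max(Q[(nxt, b)] for b in actions)
        Q[(pos, a)] += alpha * (target - Q[(pos, a)])
        pos = nxt

# Near the edge, the learned policy points outward: leaving the circle is
# the fastest route back to the center.
for p in (R - 1, R):
    best = max(actions, key=lambda a: Q[(p, a)])
    print(f"position {p:+d}: best action {best:+d}")
```

Run as written, the final loop reports that at the outermost positions the highest-value action points outward, even though the stated goal is the center.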

 

What happens when artificially intelligent robots misbehave, as the drone did here? The question is not just hypothetical. As robots and artificial intelligence (AI) increasingly integrate into our society, they will do bad things. Sometimes they will cause harm because of a design or implementation defect: we should have programmed the self-driving car to recognize a graffiti-covered stop sign but we failed to do so. Sometimes they will cause harm because doing so is a necessary byproduct of the intended operation of the machine. Cars kill lots of people every year, sometimes unavoidably. Self-driving cars will too. Sometimes the accident will be caused by an internal logic all its own, one that nonetheless does not sit well with us. And sometimes, as with our drone, they will do unexpected things for reasons that doubtless have their own logic but which we can’t understand or predict.

 

Our focus here is a single question: what remedies can and should the law provide once a robot has caused harm?

 

In this paper, we begin to think about how we might design a system of remedies for robots. We might have to focus less attention on moral guilt and more on a no-fault liability system (or at least one that defines fault differently) to compensate plaintiffs. But paying for injury solves only part of the problem. Often we want to compel defendants to do (or not do) something in order to prevent injury. Injunctions, punitive damages, and even remedies like disgorgement are all aimed, directly or indirectly, at modifying or deterring behavior. But ordering a robot to do something is different than ordering a person to do it – sometimes easier, sometimes harder. And deterring robot misbehavior is going to look very different than deterring people. Deterrence of people often takes advantage of cognitive biases and risk aversion. People don’t want to go to jail, for instance, so they tend to avoid conduct that might lead to that result. But robots can be deterred only to the extent that their algorithms are modified to include external sanctions as part of the risk-reward calculus. Perhaps we need a “robot death penalty” – shutting down dangerous robots as a sort of specific deterrence against bad behavior.
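To make the "risk-reward calculus" point concrete, here is a small back-of-the-envelope continuation of the hypothetical drone toy sketched earlier (the numbers are illustrative assumptions, not the authors'): an external sanction enters the robot's decision-making only as another term in its reward, and with a large enough penalty the shortcut stops being the best option.

```python
# Continuation of the hypothetical drone toy above. From the edge of the
# circle (R steps from the center), compare the discounted value of three
# options; the sanction is just an extra penalty term in the reward.

R, gamma = 10, 0.95

def steps_cost(n):
    """Discounted cost of n ordinary moves at -1 reward per step."""
    return -sum(gamma**k for k in range(n))

value_shortcut   = steps_cost(1)            # step out of bounds, get teleported home
value_walk_back  = steps_cost(R)            # walk all R steps back to the center
value_sanctioned = steps_cost(1) - 50.0     # the shortcut, with a penalty of 50 attached

print(f"shortcut:            {value_shortcut:7.2f}")   # -1.00
print(f"walk back:           {value_walk_back:7.2f}")  # about -8.03
print(f"shortcut + sanction: {value_sanctioned:7.2f}") # -51.00, now the worst option
```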

 

Anonymous Robot Speech
Authors: Madeline Lamo, University of Washington School of Law and Ryan Calo, University of Washington School of Law

 

Bots talk to us every day. These often simple programs trace their lineage at least back to Joseph Weizenbaum who, in 1966, published a program known as Eliza. Eliza, named for the character in My Fair Lady, interacted credibly with people by posing Rogerian-style questions. Today, virtually any platform capable of supporting communications—from Facebook to Twitter to phone messaging apps—plays host to thousands of bots of varying sophistication. Bots can be entertaining and helpful. They can constitute art. But bots also have the potential to cause harm in a wide variety of contexts by manipulating the people with whom they interact and by spreading misinformation.
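For a sense of how little machinery a "credible" bot can involve, here is a minimal, hypothetical sketch in the Rogerian pattern-matching style Eliza made famous. It is not Weizenbaum's actual script; it just matches a few keywords and reflects the user's own words back as a question.

```python
# Minimal illustration of Rogerian-style pattern matching (not Weizenbaum's
# original ELIZA script): match a keyword, reflect the user's words back.

import re

# (pattern, response template) pairs, tried in order.
RULES = [
    (r"\bI need (.+)", "Why do you need {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bbecause (.+)", "Is that the real reason?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
]

# Swap first and second person so reflected phrases read naturally.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(phrase: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in phrase.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            groups = (reflect(g.rstrip(".!?")) for g in match.groups())
            return template.format(*groups)
    return "Please, go on."     # default Rogerian prompt

print(respond("I am worried about my job."))       # How long have you been worried about your job?
print(respond("Because robots might replace me.")) # Is that the real reason?
```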

 

Concerns about the role of bots in American life have, in recent months, led to increased calls for regulation. Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, offered in a September 2017 New York Times op-ed the rule that “an A.I. system must clearly disclose that it is not human.” Billionaire businessman and technology investor Mark Cuban tweeted in January 2018 that Twitter and Facebook should “confirm a real name and real person behind every account” and ensure that there is “a single human behind every account.” Senators Klobuchar, Warner, and McCain have drafted legislation known as the “Honest Ads Act” that would modify FEC regulations about use of social media, including bots, in the political context. As drafted, the bill would require all “digital platforms with over 1 million users” to “maintain a public file of all electioneering communications purchased by a person or group who spends more than $10,000 aggregate dollars for online political advertisements.” Senator Warner stated that he “wants Americans seeing an ad to ‘know whether the source of that ad was generated by foreign entities’” and that “users should know whether a story is trending because real people shared it or because bots or fake accounts engaged with it.”

 

This paper considers the role that the First Amendment would play in any such regulations. Scholars who have considered the threshold question of First Amendment coverage of bot speech generally agree that constitutional free speech protections apply in this context. We tend to agree that bots are within the “scope” of First Amendment protection, to borrow Frederick Schauer’s famous terminology. But scope is only a threshold question; coverage does not tell us whether any given government intervention will succeed or fail. Generally speaking, the government may with adequate justification require the disclosure of truthful information—such as the calorie count of food—especially if the speech is commercial in nature. The government can apply reasonable time, manner, or place restrictions to any speech. Requiring a bot to identify as a bot, rather than as any individual speaker, feels intuitively different from censoring bot speech or unmasking the anonymous proponent of an idea.

 

Our thesis is that restricting bot speech, including through coerced self-identification, may be trickier than it first appears. As we explore below in greater detail, courts may look with skepticism at a rule requiring all bots to reveal themselves in all circumstances. Does a concern over consumer or political manipulation, for instance, justify a requirement that artists tell us whether a person is behind their latest creation? Moreover, even interventions that appear First Amendment sensitive on their face may wind up impossible to enforce without abridging free speech or otherwise facilitating censorship.

 

Is Tricking A Robot Hacking?
Authors: Ryan Calo, Ivan Evtimov, Earlence Fernandes, David O’Hair, and Tadayoshi Kohno – all with the Tech Policy Lab, University of Washington

 

The term “hacking” has come to signify breaking into a computer system. Lawmakers began imposing penalties for hacking as early as 1986, in supposed response to the movie War Games three years earlier, in which a teenage hacker gained access to a military computer and nearly precipitated a nuclear war. Today a number of local, national, and international laws seek to hold hackers accountable for breaking into computer systems to steal information or disrupt their operation; other laws and standards incentivize private firms to use best practices in securing computers against attack.

 

The landscape has shifted considerably from the 1980s and the days of dial-ups and mainframes. Today most people carry around the kind of computing power available to the United States military at the time of War Games in their pockets. People, institutions, and even everyday objects are connected via the Internet. Driverless cars roam highways and city streets. Yet in an age of smartphones and robots, the classic paradigm of hacking, in the sense of unauthorized access to a protected system, has sufficed and persisted.

 

All of this may be changing. A new set of techniques, aimed not at breaking into computers but at manipulating the increasingly intelligent machine learning models that control them, may force law and legal institutions to reevaluate the very nature of hacking. Three of the authors have shown, for example, that it is possible to use one’s knowledge of a system to fool a machine learning classifier (such as the classifiers one might find in a driverless car) into perceiving a stop sign as a speed limit sign. Other techniques build secret blind spots into learning systems or reconstruct the private data that went into their training.
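The mechanics behind such attacks can be shown on a toy model. The sketch below is not the authors' stop-sign work, which perturbs physical signs seen by real vision systems; it is a hypothetical gradient-sign example against a made-up logistic-regression "classifier", showing how a systematic nudge to every input feature, chosen using knowledge of the model, flips a confident prediction.

```python
# Illustrative toy only: a "stop sign vs. speed limit" logistic-regression
# classifier with made-up weights, attacked with a gradient-sign perturbation.

import numpy as np

rng = np.random.default_rng(1)

n_features = 64                       # pretend pixel intensities
w = rng.normal(size=n_features)       # pretend "trained" weights
b = 0.0

def p_stop(x):
    """Model's probability that the input is a stop sign."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# An input the model classifies confidently as a stop sign.
x = 0.06 * w + 0.02 * rng.normal(size=n_features)
print(f"clean input:     P(stop sign) = {p_stop(x):.3f}")

# Gradient-sign attack: for a linear model the gradient of the logit with
# respect to the input is simply w, so stepping each feature by -eps * sign(w)
# lowers the "stop sign" score as fast as possible under a per-feature budget.
eps = 0.2
x_adv = x - eps * np.sign(w)
print(f"perturbed input: P(stop sign) = {p_stop(x_adv):.3f}")
```

The same gradient-sign idea, applied to deep image models, underlies many published adversarial-example attacks; the toy's only purpose is to show that nothing is "broken into" at all, only carefully chosen inputs supplied to a model working exactly as designed.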

 

The unfolding renaissance in artificial intelligence (AI), coupled with an almost parallel discovery of its vulnerabilities, requires a reexamination of what it means to “hack,” i.e., to compromise a computer system. The stakes are significant. Unless legal and societal frameworks adjust, the consequences of misalignment between law and practice include (i) inadequate coverage of crime, (ii) missing or skewed security incentives, and (iii) the prospect of chilling critical security research. This last consequence is particularly dangerous in light of the important role researchers can play in revealing the biases, safety limitations, and opportunities for mischief that the mainstreaming of artificial intelligence appears to present.

 

The authors of this essay represent an interdisciplinary team of experts in machine learning, computer security, and law. Our aim is to introduce the law and policy community within and beyond academia to the ways adversarial machine learning (ML) alters the nature of hacking and, with it, the cybersecurity landscape. Using the Computer Fraud and Abuse Act of 1986—the paradigmatic federal anti-hacking law—as a case study, we mean to evidence the burgeoning disconnect between law and technical practice. And we hope to explain what is at stake should we fail to address the uncertainty that flows from the prospect that hacking now includes tricking.

 

We are living in a world that is not only mediated and connected, but increasingly intelligent. And that intelligence has limits. Today’s malicious actors penetrate computers to steal, spy, or disrupt. Tomorrow’s malicious actors may also trick computers into making critical mistakes or divulging the private information upon which they were trained. We hope this interdisciplinary project begins the process of reimagining cybersecurity for the era of artificial intelligence and robotics.

 

The issues discussed in this paper are also explored in the TAP blog, So You Tricked a Robot, Are You Liable for Hacking?, written by David O’Hair.

 

When AIs Outperform Doctors: The Dangers of a Tort-Induced Over-Reliance on Machine Learning and What (Not) to Do About It
Authors: Michael Froomkin, University of Miami School of Law, Ian Kerr, University of Ottawa, and Joëlle Pineau, McGill University

 

Someday, perhaps sooner, perhaps later, machines will have demonstrably better success rates at medical diagnosis than human physicians, at least in particular medical specialties.

 

We can reasonably expect that machine-learning-based diagnostic competence, which we will sometimes call “AI” for short, will only increase. It is thus appropriate to consider what the dominance of machine-based diagnostics might mean for medical malpractice law, the future of medical service provision, the demand for certain kinds of physicians, and—in the longer run—for the quality of medical diagnostics itself.

 

In this article, we interrogate the legal implications of superior machine diagnosticians, particularly those based on neural networks, currently a leading type of machine learning used in prediction. We argue that existing medical malpractice law will eventually require superior ML-generated medical diagnosis as the standard of care in clinical settings. We further argue that—unless implemented carefully—a physician’s duty to use ML in medical diagnostics could, paradoxically, undermine the very safety standard that malpractice law set out to achieve. Once mechanical diagnosticians demonstrate better success rates than their human trainers, effective machine learning will create legal (and ethical) pressure to delegate much if not all of the diagnostic process to the machine. If we reach the point where the bulk of clinical outcomes collected in databases are ML-generated diagnoses, this may result in future decision scenarios that are difficult to validate and verify. Many ML systems currently are not easily audited or understood by human physicians and, if this remains true, it will be harder to detect sub-par performance, jeopardizing the system’s efficacy, accuracy, and reliability. We maintain that such unintended consequences of medical malpractice law must be avoided, and canvass various possible technical and legal solutions.

 

Unless we are very confident in our technical solutions, we argue, there is a strong case for altering existing medical liability rules so that the appropriate role of humans and machines in medical diagnostics is determined by ethics and cost rather than by defensive medicine. A revision of the standard of care that rules out a machine-only diagnostic regime would require meaningful participation by people in the loop. Such a rule risks being expensive: the machine will still cost money, and our proposal would forgo the potential savings from reducing the number of physicians in reliance on the new technology. We suggest, however, that our proposal could be a first step in preventing law from overriding these other important considerations, preserving many long-term benefits that would otherwise be at risk from legal pressure and cost-cutting.

 

To read more papers from We Robot 2018, see the We Robot Agenda page.

 

