So You Tricked a Robot, Are You Liable for Hacking?

By TAP Guest Blogger

Posted on April 11, 2018



This post was written by David O’Hair.

 

In “Is Tricking a Robot Hacking?,” Professors Ryan Calo and Tadayoshi Kohno, along with Ivan Evtimov, Earlence Fernandes, and David O’Hair, explore the Computer Fraud and Abuse Act’s (CFAA) applicability to methods of adversarial machine learning used to compromise systems powered by artificial intelligence. The CFAA is the nation’s preeminent anti-hacking law.

 

Machine learning is a set of techniques drawn from the broader field of artificial intelligence. It refers to a system’s ability to improve its performance by refining a model. The approach typically involves spotting patterns in large bodies of data that in turn permit the system to make decisions or claims about the world. Adversarial machine learning refers to the ways people can cause machine learning systems to make predictable errors by exploiting their blind spots.
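
To make this concrete, below is a minimal sketch of one well-known adversarial technique, the fast gradient sign method (FGSM), which nudges an input just enough to push a classifier toward the wrong answer. The tiny model, random “image,” and epsilon value are hypothetical placeholders rather than anything drawn from the paper; a real attack would target a trained model, such as a road-sign classifier.

# A minimal FGSM sketch, assuming PyTorch is available. The model, input,
# and epsilon are stand-ins; a real attack targets a trained classifier.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier: flattens a 3x32x32 "image" and scores two classes
# (say, "stop sign" vs. "speed limit sign").
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # benign input
true_label = torch.tensor([0])                         # class 0 = "stop sign"

# Take the gradient of the loss with respect to the input pixels,
# not the model weights.
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

# FGSM: step every pixel a small amount in the direction that increases
# the loss, yielding a perturbation that is hard for humans to notice.
epsilon = 0.1
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())

Even this toy example illustrates the core point: the attacker never breaks into the computer; she only shapes the input the system was already built to accept.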

 

The CFAA’s expansive application to protected computer systems leaves security researchers and legal teams without a clear answer on potential liability for adversarial machine learning techniques.

 

Consider the following scenario:

 

Causing a car crash by defacing a stop sign to look like a speed limit sign. An engineer extensively tests the detector used by the driverless car company where she works. She reports to the founder that she has found a way to deface a stop sign so that it tricks the car into accelerating instead of stopping. The founder suspends operations of his own fleet but defaces stop signs near his competitor’s driverless car plant. A person is injured when one of the competitor’s driverless cars misses a stop sign and collides with another vehicle.

 

While the foul play is clear, the application of the CFAA’s anti-hacking provisions is not. In defacing the stop sign, the founder can be said to have caused the transmission of “information” — from the stop sign to the car — that created a public safety risk. Courts have shown leeway in applying the CFAA to situations that clearly endanger public safety but do not squarely fit the statutory language. In U.S. v. Mitra, for example, the court found that Mitra violated the CFAA by jamming a 911 call center’s radio signal: he simply broadcast his own radio signal, which canceled out the call center’s.

 

Adversarial machine learning also has non-life-threatening applications that challenge CFAA case law.

 

Consider the following scenario:

 

Shoplifting with anti-surveillance makeup. An individual steals from a grocery store equipped with facial recognition cameras. To reduce the likelihood of detection, she wears makeup that she understands will make her look like another person entirely to the machine learning model, even though she looks like herself to other shoppers and to grocery store staff.

 

Although the harm here is financial rather than physical, the CFAA’s opaqueness about what counts as a “transmission” persists. Comparing the two scenarios, it can be argued that both the founder and the shopper transmitted information with the intent to compromise a protected computer under the CFAA; the difference is that the founder had no right to deface stop signs, while the shopper has every right to apply makeup to her own face.

 

What a comparison between the CFAA and adversarial machine learning reveals is ambiguity. It simply is not clear how or when the CFAA applies to “tricking” a robot as opposed to “hacking” it. This ambiguity invites prosecutorial overreach and a dangerously expanded scope for the CFAA. Relatedly, the risk of chilling research looms large, as security researchers currently do not know how far they can test before the CFAA is implicated.

 

Abstract from “Is Tricking a Robot Hacking?”

 

The term “hacking” has come to signify breaking into a computer system. A number of local, national, and international laws seek to hold hackers accountable for breaking into computer systems to steal information or disrupt their operation. Other laws and standards incentivize private firms to use best practices in securing computers against attack.

 

A new set of techniques, aimed not at breaking into computers but at manipulating the increasingly intelligent machine learning models that control them, may force law and legal institutions to reevaluate the very nature of hacking. Three of the authors have shown, for example, that it is possible to use one’s knowledge of a system to fool a driverless car into perceiving a stop sign as a speed limit. Other techniques build secret blind spots into machine learning systems or seek to reconstruct the private data that went into their training.

 

The unfolding renaissance in artificial intelligence (AI), coupled with an almost parallel discovery of its vulnerabilities, requires a reexamination of what it means to “hack,” i.e., to compromise a computer system. The stakes are significant. Unless legal and societal frameworks adjust, the consequences of misalignment between law and practice include (i) inadequate coverage of crime, (ii) missing or skewed security incentives, and (iii) the prospect of chilling critical security research. This last one is particularly dangerous in light of the important role researchers can play in revealing the biases, safety limitations, and opportunities for mischief that the mainstreaming of artificial intelligence appears to present.

 

The authors of this essay represent an interdisciplinary team of experts in machine learning, computer security, and law. Our aim is to introduce the law and policy community within and beyond academia to the ways adversarial machine learning (ML) alters the nature of hacking and, with it, the cybersecurity landscape. Using the Computer Fraud and Abuse Act of 1986—the paradigmatic federal anti-hacking law—as a case study, we mean to evidence the burgeoning disconnect between law and technical practice. And we hope to explain what is at stake should we fail to address the uncertainty that flows from the prospect that hacking now includes tricking.

 

Read the full article: “Is Tricking a Robot Hacking?”

 

David O'Hair is a JD candidate at the University of Washington School of Law and a Research Fellow at the Tech Policy Lab.

 

