We need new privacy law for the digital age.
The following article was originally published in the Boston Review on February 25, 2016. It is republished with permission.
It is not often that a legal battle over smartphone firmware captures the national imagination, but such is the case as the FBI tries to access the data contained on suspected San Bernardino shooter Syed Farook’s iPhone. The feds want Apple to help them break into the phone, under the authority of an obscure 1789 law called the All Writs Act. Thus an ancient statute meets an icon of the digital age. This odd pairing is strangely appropriate, as the Apple case, and others like it, will help to determine whether our hard-fought gains in civil liberties will survive today’s technology.
The FBI has a search warrant, and thus the right to examine the phone of the suspected shooter. But it ran into a technical problem when it tried to do so. Apple uses encryption in its iPhone design, and if you don’t know the password, it is almost impossible to break in. While one can attempt to “brute force” a password—enter every possible combination of allowable characters until at last alighting on the right one—an optional feature causes the phone to delete all of its data after ten failed attempts to enter the right password. No one seems to know if this feature has been enabled on this particular iPhone.
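The arithmetic behind this standoff is simple, and a minimal sketch makes it concrete. The figures below assume a six-digit numeric passcode for illustration; the article does not state the passcode length on Farook’s phone, and Apple’s actual implementation differs.

```python
# Illustrative sketch only (not Apple's implementation): why an
# auto-erase limit defeats brute-force guessing. A six-digit numeric
# passcode has 10**6 possible values, but if the phone erases itself
# after ten failed attempts, an attacker can test only ten of them.

KEYSPACE = 10 ** 6   # all six-digit passcodes, 000000 through 999999
MAX_ATTEMPTS = 10    # the erase-after-ten-failures policy

def searchable_fraction(keyspace: int, attempt_limit: int) -> float:
    """Fraction of the keyspace an attacker can try before erasure."""
    return attempt_limit / keyspace

fraction = searchable_fraction(KEYSPACE, MAX_ATTEMPTS)
print(f"Searchable fraction of keyspace: {fraction:.6%}")
# prints "Searchable fraction of keyspace: 0.001000%"
```

With the erase feature enabled, guessing covers a thousandth of one percent of the possibilities; with it disabled, exhaustive search of a million candidates is trivial for automated software, which is precisely why the FBI wants the feature bypassed.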
Faced with this problem, the FBI has obtained a court order demanding that Apple provide “reasonable technical assistance” to bypass or disable the phone’s auto-erase function. Apple is fighting the order, arguing that if it creates the software tool the FBI wants, this tool could be used in the future to prevent any iPhone from erasing itself, thereby undermining the product’s security features. Apple is also concerned about the legal precedent this case could set, forcing the company to aid any government—democratic or repressive—seeking to breach the security and privacy of its millions of users around the globe. The FBI argues that this case concerns only a suspected terrorist’s iPhone. Apple argues that at stake is the future of privacy and security in the digital world.
Both Apple and the FBI are right, and that is the problem. Courts decide cases one at a time, but those cases together create the law. This case is one of several pending that may guide the law toward a point where every digital interaction is within reach of the government. Or, by contrast, these cases could set limits on the ability of the government to break security technologies, such as passwords and encryption, that guard us from prying eyes of domestic and foreign governments, criminals, and terrorists.
What principles will guide the courts facing novel questions about surveillance in an era when our data is held by dozens of companies? Law enforcement wants to analogize to bank and telephone records, which it can access without warrants under what is known as the third-party doctrine. But there is reason to resist this logic today. It has become essentially impossible for most Americans to avoid committing their lives to the digital record. Access to all of this data would mark a vast expansion of the government’s power to tear down the walls of privacy. It is therefore essential that the public support tech firms standing up for privacy, even as we may have good reason to question those companies’ motives.
• • •
Not long ago, as “sharing” was taking off, it would have been unusual to see a major tech company going to the mat for privacy. For years now, it has been trendy for commentators and technology executives to claim that the idea of privacy is outdated. The New York Times’s Thomas Friedman predicted that privacy was “going bye-bye.” Facebook’s Mark Zuckerberg claimed that “the age of privacy is over.” Google’s Vint Cerf was slightly more circumspect, but still suggested to the Federal Trade Commission that privacy may have been a twentieth-century anomaly.
Yet privacy refuses to die without a fight. Apple’s stand in the San Bernardino case is part of a broader effort to position itself in the post–Steve Jobs, post–iCloud hack era as a company committed to the privacy of its users and the security of their data. CEO Tim Cook has championed strong encryption and opposed federal efforts to require the installation of “back doors” that law enforcement could use to subvert encrypted security measures. During an appearance on The Late Show, and in other communications with employees and the public, he has spoken eloquently about the importance of data privacy.
And Apple is not the only tech firm positioning itself as pro-privacy. Microsoft is also resisting a federal search warrant seeking access to customer emails stored at a data center in Ireland. Here the question is what legal standard governs access to email and whether data held in one country by a global company can be demanded by any government, anywhere in the world. Microsoft has taken its fight against the federal government to the Second Circuit Court of Appeals in New York, where a decision is expected soon.
Any tech company’s attempt to protect their customers’ data from government scrutiny will at some point run into the third-party doctrine. According to this controversial legal thinking, the Constitution’s Fourth Amendment doesn’t protect a person’s data when someone else possesses it. Thus any personal data held by companies becomes fair game for government seizure without the warrant that would be required if law enforcement wanted to search papers in a private home. The doctrine emerged in the 1970s and ’80s when the Supreme Court heard criminal cases involving bank records and telephone company records of the phone numbers their customers dialed.
The Court’s intuition in those cases was that when we put information “out there,” we no longer can treat it as though it were private. There is a certain amount of sense in this logic: if you tell someone your secrets, you don’t get to complain when they blab. The doctrine had obvious application in an analog world in which our documents usually remained in our homes, we read exclusively on paper, and the phone company recorded just the phone numbers we dialed and not the contents of conversations themselves. In the phone numbers case, Smith v. Maryland (1979), the Supreme Court seems to have been persuaded by Maryland’s argument to the effect that, in the old days, a caller had to tell a human operator the recipient’s number. In that case, too, the stakes for civil liberties seemed small, and the defendant, a purse-snatcher turned stalker, clearly guilty.
But in a digital world, the Court’s intuition not only makes much less sense but also threatens the end of the Fourth Amendment as we know it. Pushed far from its origins, Smith has been used to justify, among other things, warrantless dragnet surveillance by the National Security Agency, the collection of cell-phone GPS and other metadata, and the use of “stingrays” or cell-site simulators—government-run cell tower simulators that trick your phone into trusting them and then gobble up its cellular data.
To claim that these data collection schemes are of a piece with their analog predecessors is too clever by half. Today, because so many technologies work by transmitting and storing our data, the civil-liberties implications of the third-party doctrine are far greater—limitless, really. Presumably the government will also claim the right to any data captured by the new wave of “Internet of things” technologies arriving or on the horizon—Internet-connected televisions, light switches, washing machines, electrical meters, Barbie dolls, connected and then self-driving cars, and even toilets.
This situation puts the Fourth Amendment at risk. For centuries, our criminal justice system has limited the power of the state by presuming not only that one is innocent until proven guilty, but also that one has the right to be free of police monitoring and interference unless there is probable cause to suspect that one has committed a crime. The warrant requirement forces police to persuade a judge that an individual is up to no good. It can be inconvenient, which is precisely the point. But if our data—the facts of our lives—are no longer locked up in analog technologies and also unprotected by the warrant requirement, surveillance and interference become much easier, and the specter of a police state looms large. The straightforward logic of the third-party doctrine, applied to our present moment, ensures not access to a narrow range of suspects’ records but the potential for constant and widespread population monitoring. This is a threat to individuality and eccentricity. It undermines not just intellectual privacy but indeed any claim that we live in a free society.
How, then, to guard the practices of the modern digital society and restore the critical balance between individual privacy in general and government power to investigate particular crimes? The solution is to expand the protections of the Fourth Amendment to encompass data stored by new technologies. In the past, the Supreme Court has expanded the Fourth Amendment to include letters and the contents of telephone calls, even those made from public phone booths. The current Court has indicated some sympathy for this approach in recent cases forcing police to obtain warrants in order to install GPS trackers on cars and seize smartphones. But data subject to the third-party doctrine remains in legal limbo. Cloud-stored data lies under a Fourth Amendment question mark; so does email, although one prominent lower court has ruled that emails are protected.
Apple’s and Microsoft’s stands on behalf of privacy and security are therefore deeply important. In these cases and others like them, we are witnessing an emerging truth about civil liberties today: when our lives rely upon digital technologies, we have little choice but to rely in turn on those technologies’ providers to protect our interests. When the government comes to them seeking our data, they, not we, are in the best position to protect that data, and, by extension, our civil liberties. In essence, we are forced to trust them.
Some might suggest that we shouldn’t. After all, such companies usually are profit-maximizing organizations for which quarterly returns and shareholder value are more important than something as ethereal as civil liberties. And while every other major democracy has a national privacy law regulating the corporate sector, American privacy law is piecemeal, highly constrained, and often allows companies to do what they want with our data, as long as they don’t lie about it. These concerns are warranted, and customers should be vigilant in their interactions with tech firms. Not all treat privacy with equal seriousness—it is worth noting that the companies taking the strongest stands for privacy are the ones that sell software and devices in exchange for customers’ money, not the “free” services that “monetize” their users by using their data to serve them advertisements. Sometimes we will need government to regulate companies, too. A fair and just digital future demands checks and balances among users, governments, and technology firms.
But we need not always be cynical. Yes, businesses benefit when we trust them: Apple and Microsoft want our confidence, and their privacy positions reflect this. But when corporate and civil-liberties interests coincide, users should embrace the alignment. For better or for worse, the nature of modern technologies is such that the companies creating, designing, and controlling those technologies must play a role in maintaining our vital liberties. We can therefore be cautiously optimistic about the San Bernardino iPhone and Irish email cases. These are important moves from companies that we, as democratic citizens, need on our side.
“The iPhone Case and the Future of Civil Liberties” is republished on TAP with permission of its author, Neil Richards, Professor of Law at Washington University in St. Louis, and the Boston Review.