Helen Nissenbaum and Finn Brunton Discuss Using Obfuscation for Privacy and Protest

By TAP Staff Blogger

Posted on December 3, 2015



New York University’s Helen Nissenbaum and Finn Brunton offer ways to fight today’s pervasive digital surveillance—the collection of our data by governments, corporations, advertisers, and hackers—in their new book, Obfuscation: A User’s Guide for Privacy and Protest. Obfuscation is the deliberate use of ambiguous, confusing, or misleading information to interfere with surveillance and data collection projects.

 

In Obfuscation, the authors provide tools and a rationale for evasion, noncompliance, refusal, and even sabotage—especially for average users. The book provides a guide to the forms and formats that obfuscation has taken and explains how to craft its implementation to suit the goal and the adversary. The authors describe a series of historical and contemporary examples, including radar chaff deployed by World War II pilots, Twitter bots that hobbled the social media strategy of popular protest movements, and software that can camouflage users’ search queries and stymie online advertising. They go on to consider obfuscation in more general terms, discussing why obfuscation is necessary, whether it is justified, how it works, and how it can be integrated with other privacy practices and technologies.

 

In an interview with the Christian Science Monitor (“How to Hide Your Digital Trail in Plain Sight”), Professors Nissenbaum and Brunton discuss the moral case for and against the use of obfuscation. Below are a few excerpts.

 

Types of Situations that Call for Obfuscation

Professor Nissenbaum: I go back to my own experience working on and developing tools like TrackMeNot. In that situation, Google was logging all our searches, and there was no constraint on what they could do with those searches. I have no way to say to Google, "Don't do it." TrackMeNot addresses a typical type of scenario, where you need to engage with someone for something to work. You need to provide information in order for them to respond, but you have no say about what they do with that information on the other side.
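To make that scenario concrete, here is a minimal sketch of the general decoy-query idea behind a tool like TrackMeNot, written in Python purely for illustration. It is not TrackMeNot's actual implementation (TrackMeNot runs as a browser extension); the term list, the timing, and the send_query function are hypothetical stand-ins that simply show a real search being mixed in among plausible fakes.

    # Illustrative sketch only -- not TrackMeNot's real code.
    # A genuine tool would issue HTTP search requests from the browser;
    # here send_query() just records the query to show the mixing.
    import random
    import time

    DECOY_TERMS = [
        "weather tomorrow", "pasta recipes", "used bicycles",
        "local news", "movie showtimes", "hiking trails",
    ]

    def send_query(term):
        # Stand-in for issuing a search request; the search engine's log
        # would contain this entry alongside all the others.
        print("search log entry:", term)

    def obfuscated_search(real_query, decoys=5):
        # Interleave the real query with plausible decoys in random order
        # and at irregular intervals, so it is harder to single out.
        queries = random.sample(DECOY_TERMS, k=decoys) + [real_query]
        random.shuffle(queries)
        for q in queries:
            send_query(q)
            time.sleep(random.uniform(0.5, 2.0))

    obfuscated_search("symptoms of measles")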

 

Professor Brunton: We refer to obfuscation as a “weapon of the weak.” This is a concept from the political theorist James C. Scott, who did extensive research in peasant agricultural communities in Southeast Asia. His particular interest was how people who lack access to the tools for redress that others might have, like the ability to vote, access to the law, or violence, carve out different ways to push back against the inequities in their situation.

 

What he tried to identify were "weapons of the weak," which included things like pilfering, deliberate stupidity, and slowdowns, all these small-scale ways to resist situations that are not ones where you can take a noble, Spartacus-like stand against injustice. Obviously, Google and consumers have a different kind of power relationship. Consumers don't necessarily know, or aren't even in a position to understand, what is being done with their data.

 

It's not just that they can't selectively refuse; it's that anyone who is not a professional in data mining or machine learning is not really going to be able to grasp the potential of what can be done with these things. For us, it became really interesting to take this idea of weapons of the weak in the direction of people who are nonetheless weak in relation to the powers that are gathering their data, and then see what kinds of tools are available to them. Obfuscation really jumped out as one of the classic approaches.

 

Reasons Not to Use Obfuscation

Professor Nissenbaum: There are two kinds of questions. One is the discussion we've been having, which is the use of obfuscation for purposes that we consider to be problematic, like stifling speech. Let's just assume we're in the space where we agree that the end we're trying to achieve is correct. That doesn't mean that any means to that end is correct, and a lot of the critiques that we address in the book are ones that actually came up when we presented the ideas.

 

For example, who are we free riding on? Are we free riding on the servers, by taking advantage of an online service while implicitly agreeing to give our data in exchange for that service? Are we free riding on other people? If your use of obfuscation really relies on other people not using it, then you're gaining an advantage at other people's expense. Our argument is that when you're looking at any obfuscation tool, in order to come out at the end and say, "Yes, this is morally acceptable," you need to analyze the specific design and implementation of that system.

 

Professor Brunton: Obfuscation strategies that involve many people all hiding their identities at once as a way of concealing the activities of one person can backfire. If only one person is wearing a mask, they’re much more identifiable than if no one is wearing a mask. There are lots of situations in which a single person trying to obfuscate might be at greater risk. It's very contingent on the threat model, on the adversary, and on the goals the user has. Those are the kinds of things that shape the circumstances in which we can say, "Obfuscation is going to work better than another privacy technique here, but worse there."

 

Obfuscation is something that needs to be optimized for particular threats and particular adversaries in particular roles. No one is saying everyone should start wearing their hoodies up over their heads all the time. Part of what makes this technique exciting is that we’re still at the starting point of a larger inquiry into how to use these things.

 

In an interview with Slate, Professors Nissenbaum and Brunton explain why they believe obfuscation could be a useful—and even necessary—tool for regaining some privacy online. Below are a few excerpts from “Does That Look Like Me?”

 

Ideal Audience to Learn about Obfuscation

Our goal for our readership is kind of threefold. First, people in general: literally anyone who is interested in how contemporary online privacy can work.

 

At the same time, we have a couple of more specific groups in mind. One of them is communities of actual developers, people who are producing new kinds of software tools, who can potentially take advantage of these techniques to allow themselves to provide services and even create businesses that won’t end up compromising the information that their users provide, in one way or another.

 

And then the last group is actually people who are making policy and creating regulations around information and the collection of user activity online, who may be able to find in these ideas, we hope, provocation to think about how that information could be protected, without losing functionality.

 

Incorporating Obfuscation into People’s Everyday Online Activities

[Obfuscation] is something that we hope can be picked up as a practice by individuals, groups, and different communities in steadily growing numbers: a way to participate in the network while making clear that the ways in which services on the network are being paid for, and the kinds of services that are being built on our data, have flaws that need to be corrected; that there are problems with our current arrangements that demand change; and that this provides a way for us to begin to do so.

 

[Obfuscation] is not a replacement, but rather a supplement, a complement that we would see added to the existing toolkit of privacy protection practices, which range from selective disclosure and shared illusion among groups up to cryptography and the many related technologies around it.

 

We would begin to see obfuscation become a common part of that vocabulary, something that could be useful in lots of different contexts. Part of that usefulness would be exactly as a way of saying, through practice, that the way we are online now is vitally important and technologically amazing, but in many ways fundamentally unjust and working against our autonomy, and that’s not OK.

 

Obfuscation: A User’s Guide for Privacy and Protest is available at MIT Press and Amazon.

 

Helen Nissenbaum is Professor of both Media, Culture and Communication, and Computer Science at New York University, where she is also Director of the Information Law Institute. Her areas of expertise span the social, ethical, and political implications of information technology and digital media.

 

Finn Brunton is Assistant Professor of Media, Culture, and Communication at New York University, Steinhardt School of Culture, Education, and Human Development. He focuses on the adoption, adaptation, modification and misuse of digital media and hardware; privacy, information security, and encryption; network subcultures; hardware literacy; and obsolete and experimental media platforms.

 

