PII 2.0

By Paul M. Schwartz and Daniel J. Solove

Posted on January 16, 2012


This post is provided to TAP by Professor Paul Schwartz and Professor Daniel Solove. Paul Schwartz is Director of the Berkeley Center for Law & Technology and Professor of Law at the University of California, Berkeley. He is a leading international expert on information privacy and information law. Daniel Solove, Professor of Law at George Washington University, is an internationally known expert in privacy law.
 
On January 5, 2012, we presented our paper, The PII Problem, as part of the speaker series @Microsoft: Conversations on Privacy. Personally identifiable information (PII) is one of the most central concepts in information privacy regulation. The scope of privacy laws typically turns on whether PII is involved, and the basic assumption behind these laws is that if PII is not involved, then there can be no privacy harm. At the same time, there is no uniform definition of PII in information privacy law. Moreover, computer science has shown that in many circumstances non-PII can be linked to individuals, and that de-identified data can be re-identified. PII and non-PII are thus not immutable categories, and there is a risk that information deemed non-PII at one time can be transformed into PII at a later juncture.
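
The re-identification point is easy to see concretely. The following minimal sketch, in Python, illustrates the classic linkage attack described in the computer science literature; the records, names, and field values are invented for the example, and no real dataset is involved.

# A "de-identified" health file: names removed, but quasi-identifiers
# (ZIP code, birth date, sex) retained.
deidentified_records = [
    {"zip": "02138", "birth_date": "1945-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth_date": "1962-01-15", "sex": "M", "diagnosis": "asthma"},
]

# A public file (say, a voter roll) that pairs the same quasi-identifiers
# with names.
public_records = [
    {"name": "Jane Roe", "zip": "02138", "birth_date": "1945-07-31", "sex": "F"},
    {"name": "John Doe", "zip": "02139", "birth_date": "1962-01-15", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_date", "sex")

def reidentify(deidentified, public):
    """Join the two files on quasi-identifiers; any unique match turns a
    nominally non-PII record back into PII."""
    matches = []
    for record in deidentified:
        key = tuple(record[q] for q in QUASI_IDENTIFIERS)
        candidates = [p for p in public
                      if tuple(p[q] for q in QUASI_IDENTIFIERS) == key]
        if len(candidates) == 1:  # unique match: the individual is re-identified
            matches.append((candidates[0]["name"], record["diagnosis"]))
    return matches

print(reidentify(deidentified_records, public_records))
# [('Jane Roe', 'hypertension'), ('John Doe', 'asthma')]

The point is not the code but the mechanism: the "de-identified" file contained no names, yet a join on three ordinary attributes recovered them.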
 
In our presentation, we argued that although the current approaches to PII are flawed, the concept of PII should not be abandoned. We proposed a new approach, “PII 2.0,” which accounts for PII’s malleability. Based on a standard rather than a rule, PII 2.0 works with a continuum of risk of identification: it regulates information that relates to either an “identified” or an “identifiable” individual, and it establishes different requirements for each category.
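
To suggest what the continuum might look like in practice, here is a deliberately simplified Python sketch. The numeric thresholds are our own placeholders for illustration only; because PII 2.0 is a standard rather than a rule, the actual assessment would be contextual, not a fixed cutoff.

def classify(identification_risk: float) -> str:
    """Map an estimated risk that data can be tied to a specific person
    onto the categories of PII 2.0 (thresholds invented for illustration)."""
    if identification_risk >= 0.9:
        return "identified"        # person singled out: full obligations apply
    if identification_risk >= 0.1:
        return "identifiable"      # plausible linkage: a reduced set of obligations
    return "non-identifiable"      # minimal risk: outside the regime

for risk in (0.95, 0.40, 0.01):
    print(risk, classify(risk))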
 
The panelists who responded to our paper were Christopher Calabrese, American Civil Liberties Union; Maneesha Mithal, Federal Trade Commission; and D. Reed Freeman, Jr., Morrison & Foerster. In his remarks, Chris agreed that de-identified data can offer great benefits to the public. He also liked our suggestion that a legal system based on a continuum of risk of identification would incentivize companies to use anonymous data. Finally, he thought it important to emphasize that risks exist beyond identified and identifiable data, such as the use of anonymous data in redlining by financial institutions.
 
Maneesha agreed with our identification of the problem: the distinction between PII and non-PII does not make as much sense as it once did. She wondered, however, whether a drawback of our approach would be less certainty for businesses. She also pointed to privacy-by-design programs as an existing acknowledgment of the need for ongoing risk assessments in the use of personal data.
 
Finally, D. Reed Freeman questioned whether our “identifiable” standard is one he could advise clients on in his privacy practice. In his judgment, the FTC has done a good job of keeping up with technology while drawing on input from stakeholder groups. FTC enforcement actions and reference guides have created at least a basic legal framework for companies, and Reed wondered whether our framework would make things worse rather than better for regulated parties.

