Can Privacy Self-Regulation Work for Consumers?

By Chris Hoofnagle

Posted on January 26, 2011



The Department of Commerce's Green Paper explores the idea that industries could create self-regulatory codes of conduct for privacy, and that the FTC could enforce those codes. In a previous post, I explained why the assumption that the FTC can police these promises is problematic. Here, I hope to remind participants in the debate that we've tried voluntary codes for over a decade now, and in the privacy field, it hasn't gone well.

America's sectoral privacy system has created a culture where many businesses build their systems just outside a regulatory regime, use the same data covered by the regime, and even sell it to the actors that the regime anticipates regulating. One such example is the commercial data broker. These companies sell data very similar to a credit report, to entities that typically buy credit reports, for purposes very close to credit reporting purposes, but nonetheless, much of their activity remains outside the Fair Credit Reporting Act (FCRA).


Commercial data brokers did organize to create a code of conduct: the Individual Reference Services Group, or IRSG (to see what happened to it, visit irsg.org). The FTC, in reviewing the IRSG's provisions, noted the very risks that later came to fruition in the ChoicePoint data breach (the problem of malicious insiders). Privacy advocates pointed out the IRSG's laughable provisions, including the problem that it greenlighted the sale of data to basically anyone except the "general public," which apparently meant only individuals too incompetent to get a business license. Non-IRSG member companies soon emerged and sold data to the "general public" anyway. The IRSG created an illusory opt-out right, one that companies complied with by simply stating that the consumer had no right to opt out, because the data broker did not engage in any practices subject to the opt-out provisions. Soon after the passage of the Gramm-Leach-Bliley Act (GLBA), the IRSG stopped functioning, making its promises of auditing and enforcement illusory too.


The history of the Network Advertising Initiative (NAI) is similarly dismal. The principles it proposed in 2000 cannot even be found on its own website. One would hope that a self-regulatory program could at least maintain an archive of its own documents; the IRSG failed at this too.


The NAI opt-out only prevents members from targeting advertising based upon tracking; it still allows the tracking itself. Thus opting out creates a worst-case outcome: the user is still tracked but does not enjoy the putative benefit of targeted advertising.


These self-regulatory approaches were very bad for privacy. In the absence of substantive privacy law, commercial data brokers created the very citizen databases that the Privacy Act of 1974 sought to prevent. The government can now simply buy data on its citizens instead of collecting it directly. Citizens have no way to prevent this short of living "off the grid." Similarly, non-NAI network advertisers have multiplied, developed more sophisticated methods to track individuals, and engaged in the very behaviors that the NAI promised it would prevent, such as the merging of online and offline data collection.


What lessons can be learned from this? Self-regulatory groups in the privacy field often form in reaction to the threat of regulation. They create protections that largely affirm their current and prospective business practices. The consumer rights they create are narrow. They do not update their standards in response to change until the regulatory spotlight returns. Nor do they address new actors that raise similar concerns but fall outside of the self-regulatory regime. Promises to audit and enforce are often empty. Increasingly, these self-regulatory efforts lack moral force, in part because troubling critiques of them go unanswered.

How could the Department address this? It could start by remembering the history of some of the actors in this field. But looking forward, the Framework must endeavor to create self-regulatory systems with much stronger incentives to police the industry. For any proposed code, the Department should consider:

  • Is it adequately broad to cover the harms posed by the system?
  • Is it adequately strong, so that consumers are given real rights and choices?
  • Does it create oversight incentives that cause regular review and updating in light of new technologies and risks?
  • Is it specific enough to clearly delineate between compliant and non-compliant actors?
  • Are the proposed audits meaningful and publicly available?
  • Is it powerful enough to discipline its own members?
  • Is it powerful enough to broaden its scope when new actors emerge that implicate the same concerns yet fall outside the strict definitions of the program?
  • Does it create measurable benchmarks that, if not satisfied, will trigger removal of the safe harbor?



