Contextual integrity (CI) was first proposed by TAP scholar Helen Nissenbaum in 2004 as a new framework for reasoning about privacy. Rather than defining privacy as control over information, CI evaluates privacy in terms of the context in which information is acquired and used. Recently, the University of Chicago hosted the third annual symposium on the applications of Contextual Integrity, organized by Helen Nissenbaum.
Below is an executive summary of the symposium, written by Jake Chanenson.
* * * *
In late September, the University of Chicago hosted the third annual symposium on the applications of Contextual Integrity. Held in a hybrid format -- where both in-person and online participants had a Zoom presence -- the event brought together a diverse group of privacy-minded scholars from Computer Science, Information Science, Law, Economics, and industry, totaling over 100 attendees. (See APPENDIX for welcome slides and program.)
The sessions were structured to maximize interaction between in-person and remote attendees. Throughout the event, Zoom attendees were visible to in-person attendees on screens at the back of the plenary room, and on screens outside the main conference room during breaks, giving everyone a presence and a sense of community despite the hybrid format. In breakout sessions, remote and in-person attendees were paired in smaller conference rooms so that discussions could proceed seamlessly among all participants. Finally, the cameras in the plenary room were staffed throughout the two-day event: operators switched views between speakers and audience members as talks and questions unfolded, giving remote attendees a better view of the room.
What is Contextual Integrity?
Contextual integrity, abbreviated as CI, is a theory of privacy that equates privacy with information flows that match the existing norms for a given context. Here, a context encapsulates how the information was collected, how it is transmitted, and which stakeholders are involved. Consider a user with a heart condition who consents to having their heart monitored by their treating physician. If this information is sent to the physician via a secure portal -- as matches current norms -- then privacy is preserved. If, however, the cardiovascular data flows by being posted to a public webpage, this constitutes a privacy violation, because the flow does not match existing norms surrounding the transmission of medical data. Likewise, if the user's heart data were, unbeknownst to them, sent to an estranged family member, this too would constitute a privacy violation, because sending medical information to estranged family members is not an existing norm of medical information flow.
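The heart-monitoring example above can be sketched as a simple norm check. The sketch below is illustrative only (the class and function names are my own, not from any CI tooling), and it uses a deliberately simplified flow with four parameters; the full theory describes flows with five (sender, recipient, subject, information type, and transmission principle).

```python
# Illustrative sketch: model an information flow and check it against
# the entrenched norms of its context. All names here are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class Flow:
    sender: str
    recipient: str
    info_type: str
    transmission_principle: str


# Hypothetical norms for the medical context in the example above.
MEDICAL_NORMS = {
    Flow("patient", "treating physician", "heart data", "secure portal"),
}


def preserves_privacy(flow: Flow, norms: set) -> bool:
    """Under CI, a flow preserves privacy only if it matches an existing norm."""
    return flow in norms


ok = Flow("patient", "treating physician", "heart data", "secure portal")
bad = Flow("patient", "estranged family member", "heart data", "secure portal")
print(preserves_privacy(ok, MEDICAL_NORMS))   # True
print(preserves_privacy(bad, MEDICAL_NORMS))  # False
```

The point of the sketch is that CI judges the whole tuple: changing any single parameter (here, the recipient) can turn a privacy-preserving flow into a violation.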
Session 1: Transmission Principle
Presentations in this session discussed the transmission principle parameter. Simply put, the transmission principle is the constraint on a flow of information (i.e., the conditions under which information can be transmitted or collected). Pennsylvania State University's Priya Kumar analyzed users' attitudes toward various information flows of their fitness data. Her team uncovered a range of transmission principles governing the flow of fitness information, underscoring how comfort with fitness data flows can vary based on context.
Shifting focus to the private sector, David Shekman and New York University School of Law's Sebastian Benthall used fiduciary duties as a case study to examine how loyalty becomes a sophisticated transmission principle. In this context, loyalty requires the agent in the fiduciary role to control the client's information according to the client's best interest. Thus, any deviation would be not only a breach of fiduciary duty but also, through the lens of CI, a privacy violation.
Session 2: Privacy and CI
This session focused on CI as a framework to understand privacy. RIKEN AIP's Katsuhito Nakasone discussed employing Daniel Solove's taxonomy of privacy to create an enriched heuristic that would make CI more robust when dealing with normative aspects of privacy. Taking a different angle to strengthen CI, Leanne Wu and her team at the University of Calgary explored the varied connotations of the word "contextual" across multiple domains. Since "contextual" means different things to different people, Wu argues that this linguistic murkiness creates an added stumbling block for the wide application of CI. As such, she and her team proposed a new definition of "context" that allows for "multiple definitions for context to co-exist." This new definition could be leveraged to implement contextual integrity across domains with less friction.
Session 3: CI and Law
Presentations in this session explored the different ways that CI and the law can interact. Andrew Gilden from Willamette University College of Law discussed how the deceased's right to privacy causes issues for the living; since the deceased is unable to give consent, it falls to judges to posthumously protect the deceased's information flows. This leads to tricky situations where a family member may need access to a deceased individual's account but cannot convince a judge that their request is not a violation of the deceased's privacy. If users had a mechanism to assign others access to their accounts and information stores before they die, they could preserve their privacy postmortem and spare their loved ones a painful judicial process.
In the same vein of privacy in the courtroom, McGill University's Ignacio Cofone observed that the courts -- specifically US federal courts -- have difficulty defining and identifying privacy harm. To address this issue, Cofone proposed a new three-step framework for determining privacy loss and harm. He argues that if his framework were applied in court, it would resolve the inconsistent rulings among regional circuits on what constitutes a privacy injury.
Lastly, shifting from privacy to commerce, University of Chicago Law School's Lior Strahilevitz and Columbia Business School's Lisa Yao Liu presented empirical findings about the economic harms of data breaches by examining consumer behavior patterns before and after data breaches. They found a significant short-term effect on consumer spending and credit card usage after localized data breaches. In addition to being interesting in their own right, these findings could help determine whether consumers affected by a data breach have suffered a legally cognizable "injury in fact" that would allow them to sue in federal court.
Session 4: CI and COVID-19
This session discussed how the COVID-19 pandemic affected norms surrounding various information flows in society. Frederic Gerdon from the Mannheim Centre for European Social Research discussed shifting public attitudes on sharing health data as an information flow. Predictably, respondents surveyed in late spring 2020 were more in favor of sharing health data to combat a global health crisis than respondents surveyed in late summer 2019. However, the authors warned that this may be a temporary shift -- once the pandemic fades into memory, the context surrounding health information will change, and people may ascribe a different value to that information flow.
Dialing in on COVID-19-specific information flows, Martin Degeling and his team at Ruhr University Bochum conducted an online survey investigating German, Chinese, and US participants' attitudes toward COVID-19-related tracking apps on smartphones. They found that Chinese participants preferred the collection of personalized information while US and German participants did not. Despite these differing views on information collection, however, contact tracing was viewed more positively than quarantine enforcement in all three populations.
Session 5: Applications of CI
All the presentations in this session were thematically tied together through the application of CI concepts to specific privacy issues in tech. University of Calgary's Allan Lyons and Joel Reardon highlighted how smartphone apps could be collecting sensor data outside of the contexts users expect -- possibly resulting in a privacy violation. They plan to investigate what triggers an app to access sensor data (e.g., tapping, unlocking the phone, or accelerometer-based gestures) in hopes of finding contextual clues that could warn of sensor access.
In a similar vein of internet-connected sensors, the University of Edinburgh's Gideon Ogunniye presented his work employing CI to build a comprehensive understanding of IoT privacy. According to Ogunniye, the current state of the field lacks an ontology that accounts for context-based privacy. This is a troubling deficiency, since IoT enthusiasts envision a future where smart homes, smart cities, and smart wearables are widely deployed and in constant communication. In response, Ogunniye hopes to build a framework that -- via CI -- allows agents (user, device, platform) to dynamically change privacy preferences, communicate these privacy decisions, and provide explanations of these decisions -- in theory, ushering in a fully connected IoT world that preserves privacy.
Lastly, Stephen Kaplan and his team at the University of Maine proposed a new CI-based framework, the Lattice-Based Contextual Integrity Analysis (LCIA), to help users make sense of privacy policies peddled by online social networks and quantitatively compare them. They evaluated LCIA on 13 privacy policies of online social networks and found that "online social networks with more privacy-violating information flow practices are more likely to mislead users through ambiguous statements."
Session 6: CI and Education
During this session the presenters discussed how to integrate the principles of CI into educational contexts. Penn State University’s Priya Kumar presented her work on children’s privacy literacy which investigated how children conceptualize password privacy in different contexts. She used that qualitative work to develop a new approach to privacy literacy for children using CI. Additionally, Colgate University’s Noah Apthorpe led a fruitful symposium-wide brainstorming session on how to best teach CI concepts in classrooms of all ages.
Panel: CI and Differential Privacy
Panelists Rachel Cummings (Columbia University), Ero Balsa (Cornell Tech), Michael Carl Tschantz (ICSI & UC Berkeley), and Alexandra Wood (The Berkman Klein Center, Harvard University) discussed how to synthesize CI and differential privacy. The two can be viewed as complementary: CI is adept at defining privacy in the societal domain but lacks a computational implementation -- which impedes wide adoption in computing systems -- while differential privacy lends itself well to computational implementation but is less suited to capturing privacy norms in the societal domain. Many good points were raised during the panel discussion. One big takeaway was that while cross-pollination of CI and differential privacy is a productive research direction, neither can remedy all of the other's deficiencies.
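To make the "computational implementation" side of the panel's contrast concrete, the sketch below shows the canonical differential privacy building block, the Laplace mechanism, applied to a counting query. It is a minimal illustration in the panel's spirit, not code from any panelist's work; function names are my own.

```python
# Minimal sketch of the Laplace mechanism for differential privacy.
import random


def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. Exponential(1/scale) draws is
    # distributed as Laplace(0, scale).
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)


def dp_count(true_count: int, epsilon: float) -> float:
    """Return an epsilon-differentially-private noisy count.

    A counting query changes by at most 1 when one person's record is
    added or removed, so its sensitivity is 1 and the Laplace noise
    scale is 1/epsilon: smaller epsilon means stronger privacy and
    noisier answers.
    """
    return true_count + laplace_noise(1.0 / epsilon)
```

Note what this formalism does and does not capture: it bounds what any single record reveals, but it says nothing about whether the flow of the (noisy) count to a given recipient is contextually appropriate -- which is exactly the gap CI fills.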
Session 7: CI and Tracking
The presentations in this session focused on insights gleaned by filtering online tracking through a CI lens. The University of Maryland's Ido Sivan-Sevilla and Cornell Tech's Helen Nissenbaum presented a context-sensitive empirical study on the usage of persistent identifiers by third parties on the Internet. They gleaned a contextual understanding of tracking practices across different types of websites and used this information to get a glimpse into the way third parties construct users' identities online.
Looking toward the future, Gabriel Nicholas from the Center for Democracy & Technology examined Google's proposed, and purportedly privacy-friendly, replacement for cookies -- known as Federated Learning of Cohorts (FLoC) -- from a CI perspective. FLoC bins users into anonymous buckets based on their browsing history and, since each of these buckets contains users with similar interests, serves a targeted ad to the bucket. This is an important distinction from cookies, where an ad is served to a unique user instead of a bucket of users. Nicholas argues that, on a surface level, FLoC is a big improvement over cookies when it comes to the transmission principles at play in the online ad world. However, he notes that there are still problems. Since cohorts are based on shared interests, FLoC can still enable ad targeting based on ethnicity, gender, age, socioeconomic status, or religion.
Break Out Sessions
There were three topics during the breakout session: Theory of CI, HCI and CI, and Systems and CI. Like the rest of the symposium, these breakout sessions were facilitated over Zoom so that online and in-person participants could contribute equally. In the Theory of CI session, Helen Nissenbaum led a discussion focused on the pitfalls of CI theory and brainstormed ways to extend the theory to correct these shortcomings. In the HCI and CI session, Marshini Chetty facilitated a discussion on the strengths and weaknesses of using a CI lens in different HCI subfields, in an effort to find optimal use cases of CI in HCI. Lastly, Blase Ur fostered a discussion in the Systems and CI session around how CI can be operationalized in systems, with an eye toward CI both as a design philosophy and as a principle to be enforced via an implementation.
Art & Technology Break
Students enrolled in a joint course on Privacy, Art & Technology -- offered through a collaboration between the University of Chicago and the School of the Art Institute of Chicago -- presented class projects conceptualizing how to preserve and protect privacy. These innovative projects pushed students to synthesize technical knowledge and artistic methods in their presentations.
* * * *
About Helen Nissenbaum
Helen Nissenbaum is Professor of Information Science at Cornell Tech. Her research takes an ethical perspective on policy, law, science, and engineering relating to information technology, computing, digital media, and data science. Topics have included privacy, trust, accountability, security, and values in technology design.
Professor Nissenbaum’s books include Obfuscation: A User's Guide for Privacy and Protest, with Finn Brunton (MIT Press, 2015) and Privacy in Context: Technology, Policy, and the Integrity of Social Life (Stanford, 2010).