How the GDPR Approaches Algorithmic Accountability
Publication Date: November 8, 2019
Discrimination and bias are unintentionally becoming enmeshed in artificial intelligence (AI) systems that use algorithms to automate decisions. Automated-decision systems now influence important outcomes such as who sees online housing ads, which patients receive extra medical care, which applicants are selected for interviews, how risk is assessed in criminal sentencing, and how teachers are evaluated.
Policymakers and researchers are questioning how to fully assess the short- and long-term impacts of these automated-decision systems. Whose interests do they serve? Are the systems sophisticated enough to contend with complex social contexts?
Kate Crawford and her colleagues at the AI Now Institute have developed an Algorithmic Impact Assessment framework to aid communities and stakeholders in evaluating automated-decision systems. The AI Now Institute emphasizes:
While these [automated decision] systems are already influencing important decisions, there is still no clear framework in the US to ensure that they are monitored and held accountable. Indeed, even many simple systems operate as “black boxes,” as they are outside the scope of meaningful scrutiny and accountability. This is worrying. If governments continue on this path, they and the public they serve will increasingly lose touch with how decisions have been made, thus rendering them unable to know or respond to bias, errors, or other problems.
- from “Algorithmic Impact Assessments: Toward Accountable Automation in Public Agencies” by the AI Now Institute (Medium, February 21, 2018)
Concerned about algorithmic bias, U.S. lawmakers have proposed the Algorithmic Accountability Act of 2019 (S. 1108 and H.R. 2231). The Act would require companies to study and fix flawed computer algorithms that result in inaccurate, unfair, biased, or discriminatory decisions affecting Americans. In introducing the bill, Senator Ron Wyden, one of its sponsors, said:
Computers are increasingly involved in the most important decisions affecting Americans’ lives – whether or not someone can buy a home, get a job or even go to jail. But instead of eliminating bias, too often these algorithms depend on biased assumptions or data that can actually reinforce discrimination against women and people of color.
- from Senator Wyden’s press release: “Wyden, Booker, Clarke Introduce Bill Requiring Companies To Target Bias In Corporate Algorithms”
Professor Margot Kaminski of the University of Colorado Law School and Gianclaudio Malgieri of the Vrije Universiteit Brussel (VUB) have examined how the EU tackles algorithmic accountability through the requirements of the General Data Protection Regulation (GDPR). In their paper, “Algorithmic Impact Assessments under the GDPR: Producing Multi-layered Explanations,” Professor Kaminski and Mr. Malgieri explore how a Data Protection Impact Assessment (DPIA) links the two faces of the GDPR’s approach to algorithmic accountability: individual rights and systemic collaborative governance.
Below are a few excerpts from “Algorithmic Impact Assessments under the GDPR: Producing Multi-layered Explanations”:
Algorithmic Accountability in the GDPR
It is also crucial to understand the mode through which the GDPR governs. The GDPR largely governs—both in the sense of coming up with the substance of data controllers’ duties, and in the sense of monitoring compliance— through an approach known in the legal literature as “collaborative governance”: the use of public-private partnerships. This form of regulatory design has alternatively been referred to as “new governance,” “co-governance,” partial delegation to the private sector, and “meta-regulation.” Importantly, it is not equivalent to self-regulation; the government still has an important, even central, role to serve. Because the GDPR often effectively outsources governance decisions to private companies, accountability takes on added significance. Accountability in the GDPR is not just about protecting individual rights. It is about ensuring that this process of co-governing with private parties receives appropriate oversight from the public, from civil society, and from both expert and affected third parties.
Lessons for Calls for Algorithmic Impact Assessments Generally
Our GDPR-specific analysis has implications for proposals for algorithmic impact assessments (AIA) generally. Our research into the GDPR’s version of AIAs suggests that the proposals discussed above have largely missed several important observations.
First, AIAs are not best understood as a stand-alone mechanism. In the context of the GDPR, they are one part of a much larger system of governance. … Our analysis suggests that impact assessments are just one tool in a larger regulatory ecosystem, and may work best when they are not deployed alone, and are instead understood as entwined with other regulatory tools.
Second, impact assessments can serve as a connection between collaborative governance and individual rights. The information a company creates during the Impact Assessment process can feed into what it provides to individuals and to the public at large. The procedures an Impact Assessment puts in place can serve not just to prevent error, bias, and discrimination, but also to legitimize a system or even respect an individual’s dignity within it. This dual role is exemplified by the GDPR’s DPIA [Data Protection Impact Assessment].
Third, as part of a larger system of governance, there are unexplored connections between the GDPR’s DPIA and its underlying substantive individual rights and substantive principles. It is true that many of the GDPR’s individual rights and principles about algorithmic decision-making are articulated in broad, sometimes aspirational, terms. Unlike an EIS [Environmental Impact Statement], the GDPR’s version of the AIA has a substantive backstop, in, for example, Recital 71’s admonishment that a data controller should minimize the risk of error and prevent discriminatory effects. The oddity is the GDPR’s circularity: the AIA helps not just to implement but to constitute both these substantive backstops and the GDPR’s individual rights. Thus there is a substantive backstop to company self-regulation through impact assessments—but it is a moving target, in part given meaning by affected companies themselves.
Finally, because the AIA links individual and systemic governance, we understand the GDPR’s version of the AIA to be both the potential source of, and the mediator between, what we refer to below as “multi-layered explanations” contemplated in the GDPR. … The GDPR’s system of individual rights threatens by itself to miss the impact of surveillance, or in this case, automated decision-making, on groups, locations, and society at large. A recent AI Now report provides an illustrative example of the problem: providing an individualized explanation for a single “stop and frisk” incident in New York City would have failed to reveal that over 80% of those subjected to stop and frisk by the NYPD were Black or Latino men. But the Impact Assessment with its systemic approach to risk assessment and risk mitigation requires data controllers to analyze how the system impacts not just individuals but groups. We believe that systemic and group-based explanations uncovered during an AIA can and should be communicated to outside stakeholders, and that a case can be made that such release is required under the GDPR.
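To make the contrast the authors draw more concrete, here is a minimal, hypothetical Python sketch (not from the paper; the records and field names are invented) of how aggregating decision outcomes by group can surface a disparity that no single individualized explanation would reveal:

```python
from collections import Counter

# Hypothetical decision log: each record is one automated decision.
# An individualized explanation covers only one of these records at a time.
decisions = [
    {"subject_group": "A", "stopped": True},
    {"subject_group": "A", "stopped": True},
    {"subject_group": "A", "stopped": True},
    {"subject_group": "B", "stopped": True},
    {"subject_group": "B", "stopped": False},
    {"subject_group": "B", "stopped": False},
]

# Group-level (systemic) view: count stops per group and compare shares.
stops_by_group = Counter(d["subject_group"] for d in decisions if d["stopped"])
total_stops = sum(stops_by_group.values())

for group, stops in stops_by_group.items():
    share = stops / total_stops
    print(f"Group {group}: {stops} stops ({share:.0%} of all stops)")

# A finding like "most stops fall on one group" only becomes visible at this
# aggregate layer — the kind of result an impact assessment's systemic risk
# analysis is meant to capture and, the authors argue, to disclose.
```

A real DPIA would of course work with far richer data and fairness metrics; the point of the sketch is simply that group explanations require this aggregate view, which individualized explanations cannot provide.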
A Model Algorithmic Impact Assessment: Towards Multi-layered Explanations
To begin this conversation, we suggest that a Model Algorithmic Impact Assessment process should do at least the following. It should contemplate the involvement of civil society as a form of underused oversight. It should better involve and engage impacted individuals, not just through surveys but through representative boards, before an algorithm is deployed. It should contemplate requiring companies, or regulators, to help fund the involvement of both of the above, and provide technical expertise or the resources for obtaining technical expertise. It should involve not just external technical experts, but external experts in law and ethics to help define, or at least frame discussions of, what we mean by terms like “discrimination” or “bias.”
A Model Algorithmic Impact Assessment process should also deliberately widen the lens from algorithms as a technology in isolation, to algorithms as systems embedded in human systems—both those that design the technology, and those that use it. There is a growing awareness that addressing problems of unfairness or bias in the technology in the abstract will be inadequate for mitigating these problems when an algorithm is implemented in practice. The risks come not just from the technology by itself, and not just from the humans who embed their values into the technology during its construction and training, but from how the humans using the algorithm are trained and constrained, or not constrained, in their use of it. This connects to our suggestion that a Model Algorithmic Impact Assessment be truly continuous: a process that produces outputs, but also includes ongoing assessment and performance evaluation, especially for those algorithms that change quickly over time.
Conclusion
This analysis, we hope, will have value for other discussions of Algorithmic Impact Assessments beyond the GDPR. In particular, moving from individual transparency rights and governance accountability duties in the field of automated decision-making, we suggest a model of multi-layered explanations drawn from Algorithmic Impact Assessments. Since there are several layers of algorithmic explanation required by the GDPR, we recommend that data controllers disclose a relevant summary of a system, produced in the DPIA process, as a first layer of algorithmic explanation, to be followed by group explanations and more granular, individualized explanations. More research is needed, in particular about how different layers of explanations—systemic explanations, group explanations, and individual explanations—can interact with each other and how technical tools can help in developing an Algorithmic Impact Assessment that might be re-used towards GDPR-compliant explanations and disclosures.
Read the full article: “Algorithmic Impact Assessments under the GDPR: Producing Multi-layered Explanations” by Margot Kaminski and Gianclaudio Malgieri.
Margot Kaminski is an Associate Professor at the University of Colorado Law School and the Director of the Privacy Initiative at Silicon Flatirons. She specializes in the law of new technologies, focusing on information governance, privacy, and freedom of expression. Recently, her work has examined autonomous systems, including AI, robots, and drones (UAS). In 2018, she researched comparative and transatlantic approaches to sensor privacy in the Netherlands and Italy as a recipient of the Fulbright-Schuman Innovation Grant. This paper is one of the outcomes of that work.
Gianclaudio Malgieri is an Attorney at Law and a doctoral researcher at LSTS - Vrije Universiteit Brussel (VUB), where he is the Work Package Leader of the EU H2020 PANELFIT project, developing Legal & Ethical Guidelines on Data Processing Consent and Automated Decision-making in ICT Research in the EU.
About Kate Crawford
Kate Crawford is a Research Professor of Communication and Science and Technology Studies at USC’s Annenberg School for Communication and Journalism and a Senior Principal Researcher at Microsoft Research in New York. Professor Crawford is a leading scholar of the social and political implications of artificial intelligence. Over her 20-year career, her work has focused on understanding large-scale data systems, machine learning and AI in the wider contexts of history, politics, labor, and the environment.
About Margot Kaminski
Margot Kaminski is an Associate Professor of Law at Colorado Law, where she researches and writes on law and technology. Her work has focused on privacy, speech, and online civil liberties, in addition to international intellectual property law and legal issues raised by AI and robotics. Recently, much of her work has focused on domestic drones (UAVs or UASs).