Algorithms and Explanations Conference at NYU Law’s Information Law Institute (Video Available)

By TAP Guest Blogger

Posted on October 4, 2017



This conference summary was written by Eli Siems, a student at NYU Law.

 

The Event

 

Explanation and transparency are powerful shields against abuses of discretion, mistake, and arbitrariness in governmental and other socially consequential decisions. As more and more decisions of consequence are turned over to algorithmic machine-learning tools, the question of how to apply the traditional bulwarks of transparency and explanation to complex, opaque, and often hard-to-explain algorithms becomes an issue of vital social and public importance.

Image: Algorithms and Explanations poster

This past spring, academics, researchers, lawyers, software developers, entrepreneurs, and public officials gathered for Algorithms and Explanations, a two-day conference hosted by the Information Law Institute at NYU School of Law.

 

Pulling together a diverse list of speakers was essential to fostering real discussion and conceptual common ground among the different interests at play. What resulted was a rich and riveting display of conflict and collaboration, a freewheeling exchange of ideas, and profound contributions toward a shared vocabulary for questions of transparency in automated decisionmaking.

 

The Discussion

 

Panels addressed questions of transparency and explanation from a wide range of perspectives.

 

Day 1 focused on broad foundations: laying out theories of accountability and transparency, detailing technical aspects of machine learning and algorithmic decision-making as they relate to explainability, and discussing potential methods of encouraging transparency through both technology and public policy.

 

Reasons for Reasons from Law and Ethics: The first panel focused on legal, philosophical, and historical underpinnings of explanatory regimes, and provided a theoretical foundation upon which the following panels would build.

  Image: Day 1 of the conference

Katherine Strandburg discussed explanation as a core, legitimizing practice in law as exemplified by published judicial decisions and administrative rulemaking procedures. She identified accuracy, fairness, trust, and dignity as distinct legitimizing results of explanation, and provided a framework from legal study for interpreting the accuracy and utility of an explanation. Kevin Stack built upon this foundation, discussing the variable qualities of explanations that might be demanded in different legal contexts, for example in rational basis review versus strict scrutiny in constitutional claims. Andrew Selbst presented on the issue of inscrutability in algorithmic explanation and its consequences in a legal context, noting that one can be faced with an entire list of reasons or explanations for a decision and still not understand why the decision was made.

 

Watch the video of the Reasons for Reasons from Law and Ethics panel.

 

Automated Decisionmaking and Challenges to Explanation-Giving: The second panel delved into the nature of algorithmic and machine-learning tools to identify the unique challenges and limitations to transparency in that context.

 

Image: Algorithms

Jenna Burrell discussed opacity in machine-generated explanations, urging people to continue challenging those explanations as we work toward greater clarity. Solon Barocas explored and in part challenged the prevailing belief that algorithms with stronger prediction accuracy are less interpretable by nature. Duncan Watts addressed particular hurdles to demanding causal explanations from algorithms.

 

Watch the video of the Automated Decisionmaking and Challenges to Explanation-Giving panel.

 

Modes of Explanation in Machine Learning: What is Possible and What are the Tradeoffs?: The third panel put forward theories, methodologies, and practices concerning transparency in machine decisionmaking, with a critical and nuanced view of the limitations and costs of each potential approach.

 

Enrico Bertini presented his team’s work using specific, instance-level visualizations to produce a global understanding of a model. Foster Provost discussed the utility of aggregated decision data in understanding a model, pursuing a counterfactual investigation of a hypothetical targeted advertisement by asking how the result might be changed with the fewest possible alterations. Zach Lipton discussed the myriad ways in which algorithms fail to adequately explain their failures, including the potential for explanatory tools to learn to placate human users. Alexandra Chouldechova addressed opacity versus performance in risk- and need-modeling tools. Anupam Datta discussed models for controlling bias and creating effective explanatory systems for reviewing which factors went into a decision. Finally, Krishna Gummadi discussed the need for accurate and standardized explanations so that consumers of a service can effectively understand and make use of the explanations regarding the automated decisions affecting them.
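Provost’s counterfactual framing lends itself to a concrete illustration. The sketch below is a minimal, hypothetical example of the general idea rather than the method presented on the panel: a toy classifier stands in for an ad-targeting model, and a brute-force search looks for the smallest set of feature flips that changes its decision.

```python
# A minimal, hypothetical sketch of a counterfactual "fewest changes" explanation.
# The toy model, features, and brute-force search are illustrative assumptions,
# not the approach presented at the conference.
from itertools import combinations

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: three binary features standing in for browsing signals; the "ad"
# is shown when features 0 and 2 co-occur.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 3))
y = (X[:, 0] & X[:, 2]).astype(int)
model = LogisticRegression().fit(X, y)

def counterfactual(instance, model, target=0):
    """Find the smallest set of feature flips that changes the model's decision."""
    n = len(instance)
    for size in range(1, n + 1):                  # try 1 flip, then 2, ...
        for idxs in combinations(range(n), size):
            candidate = instance.copy()
            candidate[list(idxs)] = 1 - candidate[list(idxs)]
            if model.predict(candidate.reshape(1, -1))[0] == target:
                return idxs, candidate
    return None, instance

shown_ad = np.array([1, 0, 1])                    # a user currently shown the ad
flips, cf = counterfactual(shown_ad, model)
print(f"Flipping feature(s) {flips} changes the decision: {shown_ad} -> {cf}")
```

Explanations of this form answer the question a user is most likely to ask: what, specifically, would have to be different for the decision to change?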

 

Watch the video of the Modes of Explanation in Machine Learning: What is Possible and What are the Tradeoffs? panel.

 

Regulatory Approaches to Explanation: The fourth panel surveyed existing regulatory approaches that would encourage or require explanation, discussed context-specific nuances in regulation, and put substantial focus on Europe’s GDPR as both a model and a case study, given its decision to stop short of requiring decisional explanation.

 

Image: Microsoft's GDPR compliance plan

Deven Desai provided a glimpse into potential regulatory approaches to algorithmic accountability, exploring ways in which developers might be incentivized to build analyzable tools. Sandra Wachter identified a few central rights in Europe’s GDPR for discussion, including the right to be notified, the right to access one’s data, and the right not to be subjected to a solely automated decision-making process that produces legal effects. Ms. Wachter pointed out that the GDPR contains no right to demand an explanation of algorithmic decisions. Alison Howard then provided perspective on Microsoft’s intentions and efforts to comply with the GDPR, as well as the company’s incentives toward privacy.

 

Watch the video of the Regulatory Approaches to Explanation panel.

 

To close out the first day, Jer Thorpe of the Office for Creative Research & NYU Tisch ITP gave a presentation and led a “happy hour discussion” on the nature, history, and capabilities of algorithms and the many creative ways in which they can be put to use.

 

Watch the video of Jer Thorpe’s happy hour discussion.

 

Day 2 focused on the role of algorithmic explainability in several of the areas in which machine-generated decisions play an important and consequential role.

 

Explainability in Context—Health: The first panel of the second day introduced the unique place of automated decisionmaking in the health field, noting that, unlike some of the contexts discussed later in the day, health care is defined by an almost entirely collaborative, “same-team” dynamic, and that the use of data in the health field is already heavily regulated by federal law.

 

Francesca Rossi discussed a model of human-AI collaboration in medicine of which effective explanation, and the trust it builds, are essential components. Rich Caruana focused his presentation on the importance of transparency in modeling for critical applications like health care, stressing that accuracy without transparency is particularly problematic in the medical field. Federico Cabitza focused on the difficulty of obtaining certainty and credibility when applying machine learning to real-life matters, reflecting an emerging understanding of uncertainty-as-reality in the medical field.

 

Watch the video of the Explainability in Context—Health panel.

 

Explainability in Context—Consumer Credit: The second panel of the day focused on the use of data and algorithms in an area with complex social dynamics related to personal data. Multifaceted discussion emerged from the well-known opacity of consumer credit reporting and the intricate question of the social impacts of broader, more data-replete credit assessment tools.

 

Dan Raviv proposed the utility of algorithms in overcoming the shortcomings of traditional US credit scoring systems as applied to international borrowers and others without credit scores. Aaron Rieke discussed federal regulatory regimes like the FCRA and ECOA, along with other limits on the use of web data in massive, data-driven algorithmic credit scoring. Frank Pasquale identified two competing views on algorithms in credit scoring: that the use of massive data sets will ultimately increase access to credit for people who do not currently have it, and that credit scoring is too powerful a tool to be left to private entities.

 

Watch the video of the Explainability in Context—Consumer Credit panel.

 

Explainability in Context—Media: The third panel of the day discussed the subtle and not-so-subtle effects of algorithmic decisionmaking on news and media dissemination, getting at the far-reaching social impacts of more “everyday” decision automation.

 

Image: Buzzfeed/BBC tennis match fixing investigation

Gilad Lotan discussed the polarization of information depending on social circles and drew attention to the difference between optimization and manipulation. Nicholas Diakopoulos identified several layers of the media distribution process where transparency might be introduced, from the onboarding of data, to the application of the model, to the model’s inferences, to the level at which the model and its results interface with a human user. Professor Diakopoulos discussed various depths of reason-giving suited to different audiences. Brad Greenberg discussed Facebook’s decision to remove its human moderator team as a case study in the risks of automated news content delivery.

 

Watch the video of the Explainability in Context—Media panel.

 

Explainability in Context—the Courts: The fourth panel of the day addressed hot-button concerns such as risk-assessment tools as well as broader evidentiary challenges posed by an increase in automated actors in the creation of proof.

 

Andrea Roth spoke on the subject of algorithms generating proof that could be used at trial, including new proprietary breathalyzer software, Google Maps data, and DNA mixture interpretation software. Paul Rifelj made a case study of the COMPAS risk assessment tool, digging into the consequences of and possible incentives for the company’s decision to keep its software proprietary. Julius Adebayo discussed algorithmic tools designed to audit black-box algorithms, including FairML, a tool he helped design. He also discussed current work on “adversarial” training for machine learning, a process by which key flaws such as bias in a machine learning algorithm might be extensively tested for and rooted out.
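To make the auditing idea concrete, the sketch below shows one common way to probe a black-box model: measure how its predictions change when each input feature is scrambled. This is an illustrative assumption about the general technique, not FairML’s actual algorithm, and the model and data are invented for the example.

```python
# A minimal sketch of black-box auditing by input perturbation. It illustrates
# the general technique only; it is not FairML's actual algorithm, and the
# model and data below are invented assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)

# Hypothetical "black box": any fitted model exposing only a predict method.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)
black_box = GradientBoostingClassifier().fit(X, y)

def audit(model, X, n_repeats=10):
    """Rank features by how often scrambling them flips the model's predictions."""
    baseline = model.predict(X)
    influence = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            X_perturbed = X.copy()
            X_perturbed[:, j] = rng.permutation(X_perturbed[:, j])
            influence[j] += np.mean(model.predict(X_perturbed) != baseline)
    return influence / n_repeats

for j, score in enumerate(audit(black_box, X)):
    print(f"feature {j}: predictions flip on {score:.1%} of cases when scrambled")
```

Even a rough audit of this kind surfaces which inputs a proprietary model leans on most heavily, which is the starting point for asking whether those inputs are appropriate.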

 

Watch the video of the Explainability in Context—the Courts panel.

 

Explainability in Context—Policing and Surveillance: The final panel discussed the contentious use of machine-driven decisions in strategic policing and surveillance.

 

Jeremy Heffner provided an overview of the capabilities of HunchLab, a geographic predictive policing tool. Dean Esserman, a prominent former police chief and Police Foundation representative, argued that predictive policing models are wrong to identify police chiefs as their customers. He said that, in order to address the pervasive trust issues in American policing, developers of predictive policing models should design tools with officers and citizens in mind, providing equal explanations and information to both in an effort to use the technology to revive citizenry-generated policing. Kiel Brennan-Marquez addressed several constitutional issues in predictive policing algorithms, drawing attention to the equal-enforcement problems generated by generally undetectable “false negative” results and to the ways in which a black-box algorithm influences the behavior of law enforcement officers by frustrating supervision of their decisions.

 

Read more about the Algorithms and Explanations conference, view slides from presenters, and link to videos of the panel discussions on the Algorithms and Explanations web page.

 

Moving Forward

 

The event showcased a remarkable variety of perspectives, with each panel carefully composed of diverse and sometimes conflicting voices. Many of the electrifying conversations begun at Algorithms and Explanations are likely to continue for years to come.

 

Image: ILI logo

At the Information Law Institute, it was our ultimate goal in organizing the conference to facilitate conversation among the many and diverse actors in this complex field, and we will continue to facilitate these essential discussions through our ongoing Privacy Research Group meetings and future events.

 

 

A note about sponsorship: Microsoft’s Academic Relations program sponsored the Algorithms and Explanations conference. While Microsoft provided financial support for the conference, NYU’s Information Law Institute and NYU School of Law were solely responsible for the agenda and invited participants.

 

Microsoft is also the sponsor of this TAP website, providing administrative and financial support for the site’s platform and content. Microsoft respects academic freedom; thus, there is no payment made for appearing or blogging on the site. Scholars and academic institutions featured on the site have direct access to make content changes.

 

 

Eli Siems is a second-year law student at New York University School of Law. He is studying public defense and is interested in the intersection between emerging technologies and criminal procedure, evidence, and constitutional protections. He is currently working with Professor Katherine Strandburg to research the use of proprietary algorithms in criminal proceedings.

 

