Presented by the Information Law Institute at NYU
Explanation is a crucial aspect of accountability. By requiring that powerful actors explain the bases of their decisions, we reduce the risk of error and thereby produce more socially desirable decisions. Decision-making processes that employ machine learning algorithms complicate this equation. When, and to what extent, should decision-makers be legally or ethically obligated to provide humanly meaningful explanations of individual decisions? Several TAP scholars will participate.