Private Accountability in the Age of Artificial Intelligence

Topics: Artificial Intelligence; Privacy and Security

Article Snapshot

Author(s)

Sonia Katyal

Source

UCLA Law Review, Vol. 66, pp. 54-141, 2019

Summary

Algorithmic decision-making may perpetuate bias if the data used to train the system reflects bias. Thus far, regulators and the courts have not addressed algorithmic bias effectively.

Policy Relevance

Self-regulation and whistleblower protections could support algorithmic accountability.

Main Points

  • For members of certain groups, such as the less wealthy, an algorithmic mistake can lead to a disastrous denial of employment, housing, or insurance.
     
  • Automated decision-making initially appears free from human prejudice and irrational biases, but algorithmic models are the product of fallible creators.
     
  • Bias in big data generally arises from one of two causes:
     
    • Errors in data collection, which lead to inaccurate depictions of reality.
       
    • Training data that reflects existing bias (for example, promotion data drawn from an industry that systematically promotes men over women), which the algorithm learns and perpetuates (see the sketch at the end of this list).
       
  • One algorithm labeled black defendants as future criminals at twice the rate of white defendants; the Wisconsin Supreme Court nonetheless upheld the algorithm’s use in sentencing, even though trade secret law barred any examination of the algorithm in court.
     
  • Two foundational concepts in antidiscrimination law come into conflict in cases involving big data.
     
    • Anticlassification principles hold that the very act of classifying people, for example by race, risks unfairness.
       
    • Antisubordination theory holds that classifications may be used to remedy existing inequalities.
       
    • The law cannot address algorithmic bias without adopting antisubordination theory, but current constitutional doctrine makes that approach infeasible.
       
  • Private firms and organizations such as the Association for Computing Machinery are formulating ethical standards for artificial intelligence; to be most effective, these standards require regulatory oversight.
     
  • Human impact assessments could guide firms in addressing algorithmic bias; such assessments should include:
     
    • A substantive commitment to the fair treatment of all races, cultures, and income levels.
       
    • Structures that promote oversight of programmers (who develop the algorithm) by controllers (those responsible for compliance).
       
    • Examination of the algorithm, its output, and its training data.
       
  • Algorithmic civil rights concerns require a new approach to trade secret law, because accountability is impossible without transparency.
     
  • Whistleblowing involves the disclosure of an organization’s wrongful practices by a member of the organization; whistleblower protections could help address algorithmic bias.
     
    • Whistleblowing is especially effective where information asymmetry prevents outsiders from detecting wrongdoing.
       
    • Whistleblowing is appropriate when the government relies on private entities to carry out public functions.
       
  • The Defend Trade Secrets Act of 2016 immunizes whistleblowers from liability under trade secret law for making confidential disclosures to regulators; similar protections could immunize whistleblowers from trade secret liability to promote algorithmic transparency.
     

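The second cause of bias listed above can be made concrete with a short, self-contained Python sketch. The data and model here are hypothetical illustrations, not drawn from the article: qualifications are distributed identically across genders, but the historical promotion labels favor men, so a model fit to those labels reproduces the gap even for equally qualified candidates.

import random

random.seed(0)

def make_history(n=10_000):
    """Synthetic records: qualification is independent of gender,
    but the historical promotion decision is not."""
    records = []
    for _ in range(n):
        gender = random.choice(["M", "F"])
        qualified = random.random() < 0.5              # same distribution for both groups
        promote_prob = 0.7 if gender == "M" else 0.35  # biased historical label
        promoted = qualified and random.random() < promote_prob
        records.append((gender, qualified, promoted))
    return records

def fit_rate_model(records):
    """'Train' by memorizing the observed promotion rate per (gender, qualified) bucket."""
    counts, promos = {}, {}
    for gender, qualified, promoted in records:
        key = (gender, qualified)
        counts[key] = counts.get(key, 0) + 1
        promos[key] = promos.get(key, 0) + promoted
    return {key: promos[key] / counts[key] for key in counts}

model = fit_rate_model(make_history())

# Equally qualified candidates receive different predicted outcomes:
print(f"P(promote | qualified man)   = {model[('M', True)]:.2f}")   # ~0.70
print(f"P(promote | qualified woman) = {model[('F', True)]:.2f}")   # ~0.35

Nothing in the model corrects for the skewed labels, so the disparity in the training data reappears unchanged in the predictions; collecting more of the same data would not remove it.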