Taming the Golem: Challenges of Ethical Algorithmic Decision-Making


Article Snapshot

Author(s)

Jules Polonetsky and Omer Tene

Source

North Carolina Journal of Law & Technology, Vol. 19, pp. 125-173 (2017)

Summary

Any algorithm can be biased. However, policy-neutral algorithms should be distinguished from policy-directed algorithms, which are purposely designed to pursue a policy agenda.

Policy Relevance

Firms should disclose policy-directed algorithms to users. Algorithms can support policies intended to redress bias.

Main Points

  • All algorithms are designed by humans and might be biased; however, observers should distinguish between policy-neutral algorithms and policy-directed algorithms.
     
    • Policy-neutral algorithms provide largely unedited results (which might be fair or unfair).
       
    • Policy-directed algorithms are intentionally crafted to promote a policy agenda.
       
  • Policy-neutral algorithms offer a neutral mathematical result, such as the most profitable location for a new business; such algorithms should not be tailored to yield outcomes that a designer views as fair.
     
  • For policy-directed algorithms, transparency and oversight are vital to avoid backlash; individuals have a right to know when they are part of a social experiment.
     
  • Requiring firms to imbue algorithms with liberal values is inadvisable.
     
    • Further manipulation of norms by government and by firms would follow.
       
    • Business entities that lack due process would become arbiters of ethical norms.
       
    • Other cultures would see this as an imposition of Western values.
       
  • Policymakers should compare algorithmic outcomes with real-world outcomes, not with utopian ideals; human intervention in automated processes could heighten the risk of bias.
     
  • Data-driven decision-making can reduce discrimination; for example, one school district identified more black and Hispanic students as “gifted” through standardized testing than it had through parent or teacher referrals.
     
  • Algorithmic decision-making can incorporate many types of bias, ranging from unlawful discrimination (such as discrimination based on race) to ethically unsettled practices like price discrimination (such as charging Asian families more for exam prep materials).
     
