ACADEMIC ARTICLE SUMMARY
Accountable Algorithms
Article Source: University of Pennsylvania Law Review, Vol. 165, No. 3, pp. 633-705, 2017
ARTICLE SUMMARY
Computers now make many decisions formerly made by humans, but the procedures used to oversee human decision-makers cannot simply be applied to automated systems. This article describes technological tools that help developers design algorithms consistent with social goals.
POLICY RELEVANCE
Fairness and accountability should be built into computerized processes from the start. Policymakers and computer scientists should work together to ensure accountability.
KEY TAKEAWAYS
- Computers use algorithms to approve loan applications, target travelers for search, grant visas, and more; the public and the courts are ill-equipped to ensure that these algorithms are fair.
- Some assert that transparency would promote accountability, but transparency alone would be ineffective.
- Firms could disclose their source code, but only experts could understand it.
- Users could game the system; for example, disclosure of code intended to target tax audits could enable cheaters to evade detection.
- Randomization improves some computerized processes, but complicates their evaluation.
- The Roomba floor cleaner moves in random patterns, so its programmers need not specify every possible movement in advance.
- It is hard to detect when a corrupt developer skews “random” results.
- Computer decisions should be made with “procedural regularity,” that is, the same procedure should be applied to everyone.
- Many technological tools can improve automated decisions, including software verification, which can check every possible outcome of a program, and cryptographic commitments used to “seal” files (a minimal commitment sketch appears after this list).
- The U.S. State Department uses a lottery to award “green cards” granting permanent residency, and some question the fairness of the process.
- The State Department should publish commitments to its source code and proofs of the code’s fairness.
- The State Department could work with a trusted third party to ensure that random selections are truly random, and to audit compliance with its commitments (a lottery sketch follows the commitment example below).
- Technological tools can help ensure that machine learning systems, such as those used to target police searches, do not discriminate by race.
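
The cryptographic “sealing” of files mentioned above works through a commitment scheme: an agency publishes a short digest that binds it to its source code without revealing the code, and can later open the commitment to prove the code never changed. The following is a minimal sketch in Python using a hash-based commitment; the placeholder source code and the nonce handling are illustrative assumptions, not the paper’s exact protocol.

```python
import hashlib
import secrets

def commit(data: bytes) -> tuple[str, bytes]:
    """Bind to `data` by hashing it together with a random nonce.

    The digest can be published without revealing `data`; revealing
    (data, nonce) later lets anyone verify the match.
    """
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + data).hexdigest()
    return digest, nonce

def verify(digest: str, data: bytes, nonce: bytes) -> bool:
    """Check that a revealed (data, nonce) pair matches the published digest."""
    return hashlib.sha256(nonce + data).hexdigest() == digest

# An agency "seals" its decision code before running the process.
source_code = b"def decide(applicant): ..."  # placeholder, not real code
digest, nonce = commit(source_code)
print("published commitment:", digest)

# Later the agency reveals (source_code, nonce); any auditor can verify.
assert verify(digest, source_code, nonce)
```

Because the digest is published before the process runs, the agency cannot quietly swap in different code afterward without the mismatch being detectable.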
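
The randomness audit suggested above can be made concrete with a commit-then-combine lottery: the agency commits to a secret seed before applications close, a trusted third party (or public randomness beacon) later publishes a value the agency cannot predict, and the draw is computed from both, so neither side can skew the “random” result. The sketch below is a hedged illustration under those assumptions; the beacon value and applicant IDs are made up.

```python
import hashlib
import hmac
import secrets

# Step 1: the agency commits to a secret seed and publishes the digest.
agency_seed = secrets.token_bytes(32)
commitment = hashlib.sha256(agency_seed).hexdigest()  # published in advance

# Step 2: after the commitment is public, a trusted third party or public
# beacon publishes a value the agency could not have predicted.
beacon_value = b"example-beacon-output"  # hypothetical beacon output

# Step 3: the lottery key mixes both values; neither party alone controls it.
lottery_key = hashlib.sha256(agency_seed + beacon_value).digest()

def score(applicant_id: str) -> int:
    # The same deterministic procedure is applied to every applicant,
    # which is what the article calls procedural regularity.
    tag = hmac.new(lottery_key, applicant_id.encode(), hashlib.sha256).digest()
    return int.from_bytes(tag, "big")

applicants = ["A-1001", "A-1002", "A-1003", "A-1004"]  # illustrative IDs
winners = sorted(applicants, key=score)[:2]            # select two winners
print("winners:", winners)

# Step 4: the agency reveals agency_seed; auditors check it against the
# commitment and recompute the winners to confirm the published outcome.
assert hashlib.sha256(agency_seed).hexdigest() == commitment
```

Because the seed is fixed before the beacon value is known, a corrupt developer cannot choose a seed that favors particular applicants; and because everything after Step 3 is deterministic, anyone can replay the selection and confirm the result.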