Margot Kaminski Delves into the Current State of the Law of AI Ethics
Publication Date: July 28, 2023

Colorado Law Professor Margot Kaminski discusses the issues surrounding how to regulate and build ethical AI systems. Additionally, she explains some of the risks of regulating AI and the importance of an individual right to contest AI decisions.
Professor Margot Kaminski, from her talk, “The Law of AI Ethics”

“The White House recently released a draft blueprint for an AI Bill of Rights. … It says that if AI is going to be used widely, these systems should be safe and effective, there should be protections against bias, there should be protections for privacy, there should be individualized notice and explanation, and you should have human alternatives, consideration, and fallback.”
Artificial intelligence (AI) is increasingly used by both the government and the private sector to make important decisions, from university admissions selections to loan determinations to which neighborhoods will be frequently patrolled by police. These uses of AI raise a host of concerns about discrimination, accuracy, fairness, and accountability.
Professor Margot Kaminski of the University of Colorado Law School discusses the issues surrounding how to regulate and build ethical AI systems in the AI Ethics Series hosted by the Daniels Fund Ethics Initiative Collegiate Program at Colorado Law and Silicon Flatirons. In her talk, “The Law of AI Ethics,” Professor Kaminski outlines what lawmakers have proposed or enacted to address the problems of unethical or untrustworthy AI. Additionally, she explains some of the risks of regulating AI and the importance of an individual right to contest AI decisions.
Below is a summary of Professor Kaminski’s talk, “The Law of AI Ethics,” recorded November 22, 2022, and presented by the Daniels Fund Ethics Initiative Collegiate Program at Colorado Law and Silicon Flatirons.
Summary
Artificial intelligence (AI) systems can suffer from bias and are not always reliable. Policymakers are considering new regulatory regimes to ensure that these systems are fair and reliable.
Main Points
- AI involves the use of software to automate tasks that would otherwise require human intelligence.
- AI systems are already in use and can cause real harm; one common problem with AI is “garbage in, garbage out”: systems are trained on data that reflects historic societal biases.
- The National Institute of Standards and Technology (NIST), a United States government agency, has worked with stakeholders to describe the principles of a trustworthy AI system. According to NIST, AI should be:
- Valid and reliable.
- Safe.
- Fair.
- Secure and resilient.
- Explainable.
- Privacy-enhanced.
- The developing law of AI ethics includes three different types of AI law:
- “Human in the loop” rules solve problems by ensuring decisions are not made entirely by AI.
- Risk regulation requires firms involved in AI development to identify and mitigate AI-related risks.
- Individual rights allow the person affected by an AI decision to counter or contest it.
- In regulating AI, lawmakers seek to accomplish three types of goals:
- Instrumentalist goals call for problems like bias to be identified and fixed.
- Justificatory goals require oversight and accountability mechanisms.
- Dignitary goals protect individual dignity and autonomy, often through individual rights.
- The “human in the loop” approach may be problematic because systems that combine humans and machines are complex.
- Hand-off problems arise when AI systems transfer tasks to humans at inappropriate times.
- Automation complacency problems arise because a person who works regularly with a machine may let their attention lapse.
- Risk regulation typically requires AI developers to perform impact assessments, map data, check for bias, and mitigate bias; risk regulation is the dominant approach to AI regulation, but it has drawbacks:
- Firms conduct self-assessments without accountability or stakeholder input.
- Risk regulation works best on easily quantified problems.
- Risk-focused regimes may be influenced by industry.
- Risk regulation does not make injured individuals whole.
- An individual rights-based approach to AI regulation could be an important part of the solution to AI problems; one example is the White House’s recently released draft Blueprint for an AI Bill of Rights, which includes protections for privacy.
Conclusion
Policymakers are addressing concerns with the reliability and fairness of AI. One type of AI regulation calls for a human to be involved in AI-related processes. Another approach, known as risk regulation, calls for firms to assess and address AI risks but lacks accountability mechanisms. An individual rights-based approach, which gives individuals affected by AI decisions the right to contest them, is promising.
“The Law of AI Ethics” with Professor Margot Kaminski. Recorded November 22, 2022, and presented by the Daniels Fund Ethics Initiative Collegiate Program at Colorado Law and Silicon Flatirons.
Watch Professor Kaminski's talk on YouTube.
Read More of Professor Margot Kaminski’s Work on the Ethics of AI:
- “Regulating the Risks of AI” (Boston University Law Review, Vol. 103, pp. 1-85, 2023)
- “Humans in the Loop” with Rebecca Crootof and W. Nicholson Price II (Vanderbilt Law Review, Vol. 76, p. 429, 2023)
- “Algorithmic Impact Assessments under the GDPR: Producing Multi-layered Explanations” with Gianclaudio Malgieri (University of Colorado Law Legal Studies Research Paper No. 19-28, October 6, 2019)
- “Robots in the Home: What Will We Have Agreed To?” (Idaho Law Review, Vol. 51, No. 3, pp. 661-677, 2015)
About Margot Kaminski
Margot Kaminski is an Associate Professor at the University of Colorado Law School and the Director of the Privacy Initiative at Silicon Flatirons. She specializes in the law of new technologies, focusing on information governance, privacy, and freedom of expression. Recently, her work has examined autonomous systems, including AI, robots, and drones (UAS).
In 2018, Professor Kaminski conducted research on comparative data privacy law as a recipient of the Fulbright-Schuman Innovation Grant.