ACADEMIC ARTICLE SUMMARY
Regulating the Risks of AI
Article Source: Boston University Law Review, Vol. 103, pp. 1-85, 2023
ARTICLE SUMMARY
Artificial intelligence (AI) systems present risks that may be hard to predict or quantify. Policymakers in Europe and the United States have developed proposals to regulate these risks.
POLICY RELEVANCE
Policymakers should consider a broad range of tools in developing proposals to regulate AI.
KEY TAKEAWAYS
- The known risks of AI systems include software crashes, unpredictable or irrational outcomes, overfitting and underfitting to training data, and replication or perpetuation of bias.
- Policymakers in the European Union (EU) and the United States have proposed risk regulation regimes for AI.
- The National Institute of Standards and Technology (NIST) is developing an Artificial Intelligence Risk Management Framework (AI RMF).
- The draft EU AI Act is modeled on risk regulation in product safety.
- Framing AI rules as risk regulation assumes that the technology will be adopted despite its harms; moreover, marginalized groups and hard-to-quantify harms may be left unaddressed.
- Ex post litigation offers one alternative to risk regulation; however, litigation requires determination of causation, which is difficult with AI systems.
- Litigation is not suited to address externalities or hard-to-observe harms.
- Ex ante regulation can prevent harms rather than merely providing compensation for them.
- The precautionary principle may call for regulators to restrict a technology until its safety is proven; the Food and Drug Administration's regulation of medicines offers one example.
- Licensing is a common precautionary tool.
- Regulatory sandboxing allows use of a technology under close supervision.
- Some limitations of risk regulation include the following:
  - Risk regulation works best when risks can be quantified in advance, and less well when risks are hard to identify.
  - Risk regulation does not compensate individuals who have been injured.
  - Risk regulation must keep pace with rapidly changing technologies.
- The four types of risk regulation are:
  - Quantitative risk assessment, which often uses cost-benefit analysis.
  - Democratic oversight.
  - Risk management by a centralized administrator, as proposed in the draft EU AI Act.
  - Risk management within enterprises such as firms.
- Impact assessments are a hybrid risk regulation model.
- Europe's General Data Protection Regulation (GDPR) focuses on threats to human rights, which are difficult to quantify. The draft EU AI Act is designed to address risks to health and safety as well as risks to fundamental rights, but focuses on AI development rather than individual rights.
- NIST's AI RMF emphasizes control of AI risks through organizational culture across the entire life cycle of the AI system.
- The NIST framework lacks substantive standards.
- Other legislative proposals include the Algorithmic Accountability Act and a bill proposed in Washington state.
- AI risks vary widely and are hard or impossible to predict, define, and quantify; successful risk management will require both substantive and procedural requirements.
- Regulators could improve risk regulation of AI by:
  - Requiring AI developers to identify best-case and worst-case outcomes.
  - Requiring stress testing using worst-case scenarios.
  - Requiring identification of technical uncertainties.
  - Developing complementary rules for compensation and individual rights.
  - Requiring conditional licensing of AI systems.