Evan Selinger Discusses the Ethical Imperatives of AI

By TAP Staff Blogger

Posted on July 10, 2019



Through design decisions, implementation strategies, and unintended consequences, A.I. has the potential to impact lives across the globe. In military, law enforcement, banking, criminal justice, employment, and even product delivery contexts, algorithmic systems can threaten human rights by automating discrimination, chilling public speech and assembly, and limiting access to information.
- from “A.I. Ethics Boards Should Be Based on Human Rights” by Evan Selinger and Brenda K Leong

 

In a recent article on Medium, Rochester Institute of Technology philosophy professor Evan Selinger and Brenda K Leong (Future of Privacy Forum) argue that technology companies “should ensure their ethics boards are guided by universal human rights and resist bad faith arguments about diversity and free speech.”

 

In “A.I. Ethics Boards Should Be Based on Human Rights,” Professor Selinger and Ms. Leong point out that “artificial intelligence changes power dynamics” (see the AI Now Report 2018). And given that businesses that provide artificial intelligence (AI) are accountable to their shareholders, “fulfilling their fiduciary obligations can mean prioritizing growth, emphasizing profit, and working with international clients whose political allegiances vary along the democratic-authoritarian continuum.” All this adds up to “skepticism about the sincerity of corporate ethics.”

 

Professor Selinger and Ms. Leong advocate that technology companies providing artificial intelligence and machine learning systems “should proactively consider the ethical impacts of their inventions.” But what version of ethics should they choose? “Ethical norms, principles, and judgments differ between time, place, and culture, and might be irreconcilable even within local communities.”

 

Below are a few excerpts from “A.I. Ethics Boards Should Be Based on Human Rights”:

 

Google’s Short-Lived Advanced Technology External Advisory Council (ATEAC)

 

When Google appointed Kay Coles James, president of the Heritage Foundation, to its technology advisory council, many critics — including a number from within Google — objected that the company was pandering to conservatives. Google’s A.I. Principles were central to the debate because they include a corporate commitment to avoid creating or using “technologies whose purpose contravenes widely accepted principles of international law and human rights.” Since James is known for being anti-LGBTQ+ concerning trans individuals who don’t fit within her personal views on human sexuality, and leads the Heritage Foundation, long a proponent of “traditional” marriage, how could she be expected to hold Google accountable to its stated ideals?

 

Likewise, because Google implicitly validated James’s position by inviting her to join the council, the company inadvertently harmed marginalized communities. Their suffering wouldn’t be negated even if, somehow, James set aside her conflicting opinions in order to hold the company to its ideals during board meetings — ideals that, at least in part, clashed with James’ own convictions.

 

See “Google scraps AI ethics council after backlash: 'Back to the drawing board'” (The Guardian, April 2019).

 

Meaningful Ethical Practices

 

If tech companies want to create meaningful ethical practices, they need to invite the right people to their ethics boards and empower these folks to make publicly available recommendations that hold businesses accountable to transparent standards.

 

Human Rights and A.I. Ethics

 

…leading frameworks for A.I. governance base their approach on human rights, including the European Commission’s Ethics Guidelines for Trustworthy A.I., the Organisation for Economic Co-operation and Development’s Principles on A.I., Business for Social Responsibility’s A.I.: A Rights-Based Blueprint for Business, Data and Society’s Governing A.I.: Upholding Human Rights and Dignity, and Access Now’s Human Rights in the Age of A.I.

 

Technology companies should embrace this standard by explicitly committing to a broadly inclusive and protective interpretation of human rights as the basis for corporate strategy regarding A.I. systems. They should only invite people to their A.I. ethics boards who endorse human rights for everyone.

 

Once board members are selected, tech companies must require these experts to maintain a demonstrated commitment to human rights throughout their tenure. If due process reveals that a board member says or does something that is substantively out of line with human rights, she should be removed no matter how high her profile or how significant her past contributions. It’s that simple. The penalty is strong but appropriate, and it disincentivizes “digital ethics shopping,” which is the corporate malpractice of appealing to malleable ethical benchmarks to justify status quo behavior.

 

Commitment to Human Rights

 

A foundational commitment to human rights should lead to better ethical decisions about A.I.-based systems. As a start, it puts companies on notice. They shouldn’t be in the business of lethal autonomous weapons, government scoring systems, and government facial recognition systems if they can’t make a robust case for how these endeavors can coexist with human rights protections. And that doesn’t even begin to address the less obvious gray areas where A.I. will create a myriad of unforeseen consequences. Before more lives are impacted, we all deserve assurance that tech companies will roll out A.I. services with the aim of protecting essential rights and liberties.

 

Read the full article: “A.I. Ethics Boards Should Be Based on Human Rights” (Medium, June 20, 2019)

 

Evan Selinger is Professor of Philosophy at Rochester Institute of Technology. His research primarily addresses ethical issues concerning technology, privacy, science, the law, and expertise. Professor Selinger is also a Senior Fellow at the Future of Privacy Forum.

 

Brenda K Leong is Senior Counsel & Director of Strategy at the Future of Privacy Forum.

 

