Reconfiguring Diversity and Inclusion for AI Ethics

Article Source: AIES '21: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pp. 447-457, July, 2021
Written By: Emma Lurie and Nicole Chi



Corporate documents addressing artificial intelligence (AI) ethics issues tend to be abstract and leave responsibility for diversity and inclusion with AI customers.


Policy Relevance:

A framework linking AI ethics with the goals of civil rights movements would improve diversity programs.


Key Takeaways:
  • AI ethics initiatives focusing on issues such as bias, equity, and inclusion have begun to proliferate in the private sector; firms developing such programs include Salesforce, Google, and Microsoft.
  • Some are concerned that corporate diversity and inclusion programs superficially acknowledge diversity-related issues without challenging problematic historical and deep structural dynamics.
  • Typically, corporate documents offer vague, abstract examples of diversity challenges, leaving responsibility for defining ethical AI outcomes with the customers of the firm’s AI products.
    • Firms present themselves as experts, yet shift responsibility onto their customers.
    • One implication is that firms and their engineers are not accountable for the real-world ethical outcomes to which their tools contribute.
  • Corporate documents often give engineers and designers responsibility for providing mechanisms for diversity, inclusion, and fairness work.
    • Engineers are directed to use diverse data sets, diverse test users, and diverse use cases.
    • Diversity is presented as a key consideration for product development teams.
    • Diversity is presented as a means by which products may be improved and profits increased.
  • Documentation may mention legally protected categories, but more often emphasizes that firms’ inclusion efforts should go beyond legally protected categories to people of different ages, lifestyles, educational backgrounds, and perspectives, and those from different regions.
  • Documents suggest replacing legally relevant attributes like race with traits more amenable to technological measurement, like skin color or observable historical data.
  • AI ethics documents usually use the more general term "fairness" rather than “equity”; a specific political commitment to equity has been replaced by an undifferentiated commitment to all forms of fairness, which might mask disagreements.
  • Google, Microsoft, and Salesforce documents offer definitions of fairness for use in product testing; engineering and design failures with a significant ethical impact in the broader social context may be lumped in with other technical errors.
    • For example, if users in one region reject meat-based meal suggestions for religious reasons, the documents describe this as a “context error” fixable by personalization.
    • AI ethics is meant to question assumptions behind AI systems, but the documents present AI as part of the solution.
  • Firms’ focus on technical artifacts such as data sets allows firms to avoid addressing the historical disadvantages which produce biased data sets.
  • Framing AI ethics as technical work may advance the development of products and services that yield more equitable outcomes, as engineers come to see ethics as a mandatory part of their work.
  • Bringing engineering logic into AI ethics is necessary, but engineering logic will not advance civil rights ideals without a clear structure linking AI ethics with the civil rights movement.



About Deirdre K. Mulligan

Deirdre K. Mulligan is a Professor in the School of Information at UC Berkeley, a faculty Director of the Berkeley Center for Law & Technology, a co-organizer of the Algorithmic Fairness & Opacity Working Group, an affiliated faculty on the Hewlett funded Berkeley Center for Long-Term Cybersecurity, and a faculty advisor to the Center for Technology, Society & Policy. Professor Mulligan’s research explores legal and technical means of protecting values such as privacy, freedom of expression, and fairness in emerging technical systems.