In “Regulating Bot Speech,” University of Washington robotics expert Ryan Calo and Madeline Lamo examine how mandatory disclosure laws, which bar bots from operating unless they identify themselves as non-human, might fare under principles of free expression.
Daron Acemoglu, MIT, and Pascual Restrepo, Boston University, argue that AI can serve as the basis for two types of technological progress, automation and enhancement, and show that “there is scope for public policy to ensure that resources are allocated optimally between the two in order to ensure fulfillment of AI’s potential for growth, employment, and prosperity.”
A report from Cornell Tech’s Speed Conference shares research on autonomous vehicles, warfare, information security, labor and manufacturing, content moderation, and finance.
danah boyd, Catherine Tucker, and Joseph Turow share essays about their work with artificial intelligence and ethics.
University of Maryland law professor Frank Pasquale discusses concerns with mental health apps being used as digital substitutes for mental health professionals.
Stanford economics professor Susan Athey explains why she believes research at the intersection of economics and machine learning is “on the verge of exploding.”
A new article by Danielle Citron and her co-author Robert Chesney provides the first comprehensive survey of the harms caused by “deep fake” technology, and examines the powerful incentives that deep fakes produce for privacy-destructive solutions.
George Washington law professor Daniel Solove discusses several ethical issues connected with the advances of artificial intelligence.
Rotman School of Management economists Joshua Gans, Avi Goldfarb, and Ajay Agrawal discuss how regulation and policies intended to mitigate AI’s potential negative consequences could affect its adoption.
Professors Evan Selinger, Rochester Institute of Technology, and Woodrow Hartzog, Northeastern University, expose the dangers of facial recognition technology.