Ryan Calo Delves into Bot Speech and the Principles of Free Expression
Publication Date: January 18, 2019

Do bots, like citizens, have the right to free speech? Put another way, do you have a right to know whether you're talking to a bot? And what about the bot? Does it have the right to keep that information from you?
“Bot” refers to an automated online account that appears to be controlled by a human, but is actually powered by artificial intelligence (AI).
Lawmakers and technology policy thought-leaders are grappling with these questions. On one side of the debate are those who support legislation that would minimize the malicious influence of bots on social media (e.g., spreading fake news and inciting extremist vitriol) by requiring upfront identification of online communication bots. On the other side are those who argue bot disclosure laws risk unmasking anonymous speakers and could lay a foundation for censorship by private actors and other governments.
In “Regulating Bot Speech,” co-authors Ryan Calo and Madeline Lamo say, “Ultimately bots represent a diverse and emerging medium of speech. Their use for mischief should not overshadow their novel capacity to inform, entertain, and critique.”
A recent Philadelphia Inquirer article, “Is That Tweet from a Human? …” asked Professor Calo for his reaction to a proposed New Jersey bill that would require upfront identification of online communication bots. Professor Calo is a cyberlaw and robotics expert at the University of Washington.
“So while on its face it doesn’t require someone to say who they are, as enforced it has that potential, and it creates a tool to unmask people just by calling them bots,” Calo said.
Even if that doesn’t happen, Calo said, he worries about the chilling effect: What accounts might never get made, what person’s speech might never get heard?
Below are a few excerpts from “Regulating Bot Speech,” by Ryan Calo and Madeline Lamo.
Concerns with Bots
Recent developments—technological, as well as political and economic—have elevated attention to automated agents, or “bots.” Social media in particular has proved a fertile ground for this phenomenon. The presence of thousands upon thousands of automated accounts on Twitter, Facebook, and other platforms can be disconcerting and even dangerous. Bots can create an appearance of false consensus, make a candidate or idea seem more popular than the reality, and even hijack attempts at genuine dialogue and community building. There is evidence that bots created in Russia played a significant role in spreading disinformation during the 2016 presidential election. Bots continue to foment political and cultural discord as of this writing.
The Scope of “Regulating Bot Speech”
This work examines how mandatory disclosure laws that bar bots from operating unless they identify themselves as non-human might fare under principles of free expression. The question is an interesting one, in part because a cursory First Amendment analysis obscures a deeper tension. Requiring a bot merely to acknowledge that it is a bot does not appear at first blush to implicate censorship or threaten the right to anonymous speech. Nevertheless, crafting a narrowly tailored, enforceable law requiring bot disclosure turns out to be much harder than proponents realize, and indeed threatens to curtail an emerging form of expression.
While a series of recent contributions have assessed whether bot speech is covered by the First Amendment, this essay is among the first to discuss the protections offered by the First Amendment in light of coverage. Thus, not only does the essay contribute a novel analysis of a real-world speech regulation, it opens the door to a category of questions around the potentially unique ways First Amendment law may come to interact with autonomous speakers. Of particular interest are the new forms of expression that bots permit, including through the very ambiguity surrounding their nature.
Recommendations for Legislating Bot Speech
“Regulating Bot Speech” offers a series of recommendations to policymakers around how best to approach bot speech.
First, to the extent feasible, governments should begin by updating and leveraging existing law to address harms caused by bots.
Second, and relatedly, governments should regulate bot speech, if at all, through individual restrictions aimed at (i) particular categories of bots, (ii) operating within specific contexts, and (iii) justified by the specific harms the government hopes to mitigate.
Third, governments should anticipate and address inevitable issues around enforcement.
And fourth, governments should acknowledge the downstream effects of officially differentiating between bot speech and other forms of online communication.
Conclusion
Time will tell whether the many and varied bots of today and tomorrow meet this threshold of utility. They have already displayed the capacity for significant mischief, and some measure of wonder. This essay has shown that a popular response to the harms of bots may look innocuous on the surface but, upon deeper analysis, implicates core free speech concerns. Bots represent a new form of communication, whether in their capacity to surprise, their ability to produce speech at scale, or the way in which some bots test our intuitions about the boundary between person and machine. This novelty may be frightening, and at times even harmful. Any response must nevertheless be measured and respect age-old principles of free expression.
Read the full article: “Regulating Bot Speech” (by Ryan Calo and Madeline Lamo, UCLA Law Review, 2019)
About M. Ryan Calo
Ryan Calo is the Lane Powell and D. Wayne Gittinger Professor at the University of Washington School of Law. He is a founding co-director (with Batya Friedman and Tadayoshi Kohno) of the interdisciplinary UW Tech Policy Lab and the UW Center for an Informed Public (with Chris Coward, Emma Spiro, Kate Starbird, and Jevin West). Professor Calo holds adjunct appointments at the University of Washington Information School and the Paul G. Allen School of Computer Science and Engineering.