Danielle Citron Hosts Symposium to Explore Deep Fakes and the Implications for Privacy and National Security

By TAP Staff Blogger

Posted on February 1, 2019


The specter of politically motivated deep fakes disrupting elections is at the top of [Danielle] Citron’s concerns. “What keeps me awake at night is a hypothetical scenario where, before the vote in Texas, someone releases a deep fake of Beto O’Rourke having sex with a prostitute, or something,” Citron told me. “Now, I know that this would be easily refutable, but if this drops the night before, you can’t debunk it before serious damage has spread.” She added: “I’m starting to see how a well-timed deep fake could very well disrupt the democratic process.”
- “You Thought Fake News Was Bad? Deep Fakes Are Where Truth Goes to Die” (The Guardian, November 12, 2018)


Earlier this week, at the Worldwide Threats hearing before the US Senate Select Committee on Intelligence, the leaders of the National Security Agency, the CIA, and the FBI pointed to technology issues as their biggest worry.


The Tuesday hearing covered issues like weapons of mass destruction, terrorism, and organized crime, but technology's problems took center stage. … But concerns over technology aren't limited to cyberattacks: Lawmakers also brought up deepfakes, artificial intelligence, disinformation campaigns on social media, and the vulnerability of internet of things devices.
- “Deepfakes, Disinformation Among Global Threats Cited at Senate Hearing” (C|Net, January 29, 2019)


“Deep fakes” refers to the “digital manipulation of sound, images, or video to impersonate someone or make it appear that a person did something—and to do so in a manner that is increasingly realistic, to the point that the unaided observer cannot detect the fake.” (“Deep Fakes: A Looming Crisis for National Security, Democracy and Privacy?” by Danielle Citron and Robert Chesney, Lawfare, February 21, 2018)


Today (February 1, 2019), privacy law expert Danielle Citron and the University of Maryland Francis King Carey School of Law are hosting a symposium to discuss the full array of implications that deep fakes have for our society.


“Truth Decay: Deep Fakes and the Implications for Privacy, National Security” will explore a number of questions: What happens as the boundary between truth and falsity dissipates into little more than a subjective illusion? What can law, media, companies, and society do to protect democratic principles, individual reputations, national security, free expression, and intellectual property? How can individuals and online users contribute to the protection of crucial democratic values?


This symposium is inspired in part by Professor Danielle Citron and Professor Robert Chesney’s article, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” forthcoming in California Law Review, which provides the first assessment of the causes and consequences of deep fake technology.


Read more about the “Deep Fakes” article in a TAP blog post: “Danielle Citron Provides an In-depth Assessment of the Causes and Consequences of ‘Deep Fake’ Technology.” Below is the abstract from “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security”:


Harmful lies are nothing new. But the ability to distort reality has taken an exponential leap forward with “deep fake” technology. This capability makes it possible to create audio and video of real people saying and doing things they never said or did. Machine learning techniques are escalating the technology’s sophistication, making deep fakes ever more realistic and increasingly resistant to detection. Deep-fake technology has characteristics that enable rapid and widespread diffusion, putting it into the hands of both sophisticated and unsophisticated actors. While deep-fake technology will bring with it certain benefits, it also will introduce many harms. The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases. Deep fakes will exacerbate this problem significantly. Individuals and businesses will face novel forms of exploitation, intimidation, and personal sabotage. The risks to our democracy and to national security are profound as well. Our aim is to provide the first in-depth assessment of the causes and consequences of this disruptive technological change, and to explore the existing and potential tools for responding to it. We survey a broad array of responses, including: the role of technological solutions; criminal penalties, civil liability, and regulatory action; military and covert-action responses; economic sanctions; and market developments. We cover the waterfront from immunities to immutable authentication trails, offering recommendations to improve law and policy and anticipating the pitfalls embedded in various solutions.