Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security

Article Source: California Law Review, Vol. 107, No. 6, Dec. 2019
Written By:

Robert Chesney



"Deep fake" technology makes it possible to create audio and video files of real people saying and doing things they never said or did. These technologies pose serious policy and legal problems. Possible responses include technological solutions, criminal and civil liability, and regulation.


Policy Relevance:

Online platforms should be held accountable for distributing “deep fake” audio and video files.


Key Takeaways:
  • "Deep-fake" technologies use machine learning techniques to create video and audio records that make it appear that a real person did or said something that she did not; for example, a fake video might show an American soldier murdering an innocent civilian.
  • One technique involves "generative adversarial networks" (GANs), which train two neural networks against each other; one (the generator) produces a simulated dataset or image, while the other (the discriminator) assesses how convincing the simulation is, allowing for the creation of convincing deep fakes quickly and on a large scale.
  • Deep fakes will undermine public safety, manipulate election results, exacerbate social divisions, and erode trust in public institutions.
    • Fake videos could depict public officials or politicians taking bribes or displaying racism.
    • Fake videos showing Muslims or Israeli officials doing something inflammatory could provoke violence against entire communities.
    • Journalists who reveal controversial truths will be accused of creating "fake news."
  • Technology cannot reliably detect deep fakes; market responses, such as firms that create a comprehensive record of all of one's movements, might be helpful but would threaten privacy.
  • A ban on “deep fake” technologies would violate free speech rights, but a carefully tailored prohibition on deep fakes that amount to defamation, fraud, or the incitement of imminent violence would be permissible.
  • Section 230 of the Communications Decency Act gives online platforms immunity from suits for harmful content; the law should be amended so that online platforms are held accountable when they fail to take reasonable steps to ensure their services are not being used for illegal ends.
  • Federal agencies such as the Federal Trade Commission could play a role in regulating deep fakes; however, the idea of a federal agency judging the truthfulness of news stories or assessing the content of election advertising is troubling.
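The adversarial training described in the GAN takeaway above can be sketched in miniature. The toy example below is purely illustrative and not drawn from the article: it uses NumPy, a one-parameter "generator" and "discriminator" (real GANs use deep networks producing images or audio), and a simple Gaussian distribution standing in for the training data. All names and hyperparameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: turns random noise z into a "sample" fake = w*z + b.
w, b = 0.1, 0.0
# Discriminator: scores a sample, D(x) = sigmoid(a*x + c); high score = "looks real".
a, c = 0.1, 0.0

lr = 0.05
for step in range(2000):
    real = rng.normal(3.0, 1.0, size=32)   # the "real data": N(3, 1)
    z = rng.normal(size=32)
    fake = w * z + b

    # --- Discriminator update: push D(real) toward 1 and D(fake) toward 0 ---
    d_real = sigmoid(a * real + c)
    d_fake = sigmoid(a * fake + c)
    g_real = d_real - 1.0   # gradient of -log D(real) w.r.t. the logit
    g_fake = d_fake         # gradient of -log(1 - D(fake)) w.r.t. the logit
    a -= lr * np.mean(g_real * real + g_fake * fake)
    c -= lr * np.mean(g_real + g_fake)

    # --- Generator update: push D(fake) toward 1 (fool the discriminator) ---
    d_fake = sigmoid(a * fake + c)
    g_logit = d_fake - 1.0  # gradient of -log D(fake) w.r.t. the logit
    w -= lr * np.mean(g_logit * a * z)
    b -= lr * np.mean(g_logit * a)

# After training, the generator's output distribution has drifted toward the real data.
samples = w * rng.normal(size=1000) + b
print(float(samples.mean()))  # should land in the vicinity of the real mean, 3
```

The two models improve in tandem: each discriminator update raises the bar the generator must clear, which is why the article notes GANs can mass-produce convincing fakes without human tuning of each output.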



About Danielle Citron

Danielle Citron is the Jefferson Scholars Foundation Schenck Distinguished Professor in Law at the University of Virginia School of Law. She writes and teaches about privacy, free expression, and civil rights. She is an Affiliate Scholar at the Stanford Center on Internet and Society, Affiliate Fellow at the Yale Information Society Project, Senior Fellow at the Future of Privacy Forum, Affiliate Faculty at the Berkman Klein Center at Harvard Law School, and a Tech Fellow at the NYU Policing Project.