Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security

Privacy and Security; Innovation and Economic Growth; Artificial Intelligence

Article Snapshot


Robert Chesney and Danielle Citron


California Law Review, Vol. 107, No. 6, Dec. 2019


"Deep fake" technology makes it possible to create audio and video files of real people saying and doing things they never said or did. These technologies create policy and legal problems. Possible responses include technological solutions, criminal and civil liability, and regulation.

Policy Relevance

Online platforms should be held accountable for distributing “deep fake” audio and video files.

Main Points

  • "Deep-fake" technologies use machine learning techniques to create video and audio records that make it appear that a real person did or said something that she did not do; for example, a fake video might show an American soldier murdering an innocent civilian.
  • One technique involves "generative adversarial networks" (GANs), which pit two neural networks against each other: one generates a simulated image or dataset, while the other assesses how convincing the simulation is. This competition allows convincing deep fakes to be created quickly and at scale.
  • Deep fakes will undermine public safety, manipulate election results, exacerbate social divisions, and erode trust in public institutions.
    • Fake videos could depict public officials or politicians taking bribes or displaying racism.
    • Fake videos showing Muslims or Israeli officials doing something inflammatory could provoke violence against entire communities.
    • Journalists who reveal controversial truths will be accused of creating “fake news.”
  • Technology cannot reliably detect deep fakes; market responses, such as firms that create a comprehensive record of all of one's movements, might be helpful, but would threaten privacy.
  • A ban on “deep fake” technologies would violate free speech rights, but a carefully tailored prohibition on deep fakes that amount to defamation, fraud, or the incitement of imminent violence would be permissible.
  • Section 230 of the Communications Decency Act gives online platforms immunity from suits over harmful content; the law should be amended so that online platforms are held accountable for failing to take reasonable steps to ensure their services are not being used for illegal ends.
  • Federal agencies such as the Federal Trade Commission could play a role in regulating deep fakes; however, the idea of a federal agency judging the truthfulness of news stories or assessing the content of election advertising is troubling.
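The adversarial dynamic behind GANs, described in the points above, can be illustrated with a toy sketch. This is not an image-generating deep fake model; it is a hypothetical one-dimensional stand-in in which the "real" data are numbers drawn near 4.0, the discriminator learns what "real" looks like, and the generator adjusts its single parameter until its fakes fool the discriminator. All names and values here are illustrative assumptions, not part of the article.

```python
import random

random.seed(0)
REAL_MEAN = 4.0  # the "real" data: samples from a normal distribution N(4, 1)

def sample_real(n):
    return [random.gauss(REAL_MEAN, 1.0) for _ in range(n)]

class Discriminator:
    """Judges realism: scores a sample higher the closer it lies to its
    running estimate of what real data looks like (here, the real mean)."""
    def __init__(self):
        self.est_mean = 0.0

    def train(self, real_batch):
        # Nudge the estimate toward the mean of the latest real batch.
        batch_mean = sum(real_batch) / len(real_batch)
        self.est_mean += 0.1 * (batch_mean - self.est_mean)

    def score(self, x):
        return -abs(x - self.est_mean)  # higher score = "looks more real"

class Generator:
    """Produces fakes from N(mu, 1) and nudges mu in whichever direction
    earns a higher realism score from the discriminator."""
    def __init__(self):
        self.mu = 0.0

    def train(self, disc, step_size=0.25):
        if disc.score(self.mu + step_size) > disc.score(self.mu - step_size):
            self.mu += step_size
        else:
            self.mu -= step_size

disc, gen = Discriminator(), Generator()
for _ in range(200):
    disc.train(sample_real(32))  # discriminator learns what "real" looks like
    gen.train(disc)              # generator adapts to fool it

print(round(gen.mu, 2))  # gen.mu has converged toward the real mean of 4.0
```

By the end of the loop, the generator's fakes are statistically close to the real data, which is the property that makes full-scale GAN-produced audio and video so hard to distinguish from genuine recordings.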
