Ten Years Hence 2021 – Trust and Truth in the Age of Deep Fakes

Subscribe to the ThinkND podcast on Apple, Spotify, or Google.

Featured Speakers: 

  • Hany Farid, Professor, University of California, Berkeley
  • James S. O’Rourke, Teaching Professor of Management, University of Notre Dame

By now, many of us are familiar with the problem of “deep fake” images and videos: inauthentic creations designed to look like the real thing and used for a variety of purposes, some benign and entertaining, others malicious and harmful. But how are these images created? And how can we detect them in order to distinguish fact from fiction? These and other critical issues in the disinformation landscape were the topics covered by Hany Farid, a Professor at the University of California, Berkeley with a joint appointment in Electrical Engineering & Computer Sciences and the School of Information, during a lecture as part of ND’s 2021 Ten Years Hence series.

In a compelling presentation on the mechanics of deep fakes, Farid walked the audience through how doctored images and video are made. The process involves a back-and-forth computerized “dialogue” between a “generator” and a “discriminator.” The generator begins with a random assortment of pixels, and the discriminator, which has access to real images of people, rejects what the generator sends until the generator produces an image that the discriminator will accept as a person. Once that happens, the deep fake has reached a level of sophistication at which the human eye will likely be fooled.
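To make that “dialogue” concrete, here is a minimal sketch of a generator/discriminator training loop. The use of PyTorch, the tiny network sizes, and the training details are all assumptions for illustration only; Farid does not prescribe a framework, and real face generators are far larger and more elaborate.

```python
# Minimal sketch of the generator/discriminator back-and-forth described above.
# The generator maps random noise to an image; the discriminator, trained on
# real face images, scores how "real" each image looks. Training alternates
# between the two until the generator's output fools the discriminator.
import torch
import torch.nn as nn

IMG_PIXELS = 64 * 64          # toy resolution, purely illustrative
NOISE_DIM = 100

generator = nn.Sequential(      # noise -> fake image
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),
)
discriminator = nn.Sequential(  # image -> probability it is real
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor):
    """One round of the dialogue; real_images is a batch of genuine faces."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Discriminator learns to accept real faces and reject the generator's output.
    fake_images = generator(torch.randn(batch, NOISE_DIM)).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Generator adjusts its pixels so the discriminator labels them "real".
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, NOISE_DIM))), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# One round with stand-in "real" data (random tensors in place of face photos):
print(training_step(torch.rand(16, IMG_PIXELS) * 2 - 1))
```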

In recent years, the technology for creating deep fakes has become more advanced and easier to access, meaning that building reliable detection mechanisms for these kinds of images and videos has never been more important. Fortunately, Farid and others have pioneered methods that allow users to spot inauthentic media through a variety of means. To name but one, people tend to make specific body or facial movements while speaking, sometimes correlated with another movement that occurs before, simultaneously, or afterward. These movements can serve as a “soft” biometric, which one would expect to be present in all authentic footage of a person. Footage that lacks these movements or features, or that presents two of them as occurring, say, one after another when they should happen simultaneously, can thus be identified as a deep fake.
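As a toy illustration of that soft-biometric idea (not Farid’s actual detection pipeline), the sketch below measures how strongly two of a speaker’s per-frame movement signals co-vary in known-authentic footage and then flags a clip whose correlation falls far outside that profile. The signal names, threshold, and synthetic data are all assumptions.

```python
# Toy soft-biometric check: learn a person's typical correlation between two
# movement signals from authentic clips, then flag clips that break the pattern.
import numpy as np

def movement_correlation(signal_a: np.ndarray, signal_b: np.ndarray) -> float:
    """Pearson correlation between two per-frame movement measurements
    (e.g., eyebrow raise vs. head nod intensity, one value per video frame)."""
    return float(np.corrcoef(signal_a, signal_b)[0, 1])

def build_profile(authentic_clips):
    """Mean and spread of the correlation across known-authentic clips."""
    corrs = [movement_correlation(a, b) for a, b in authentic_clips]
    return np.mean(corrs), np.std(corrs)

def looks_fake(clip, profile, z_threshold: float = 3.0) -> bool:
    """Flag a clip whose movement correlation is far from the person's usual pattern."""
    mean, std = profile
    z = abs(movement_correlation(*clip) - mean) / max(std, 1e-6)
    return z > z_threshold

# Synthetic demo: in "authentic" clips the two movements track each other;
# in the suspect clip they are unrelated, so its correlation breaks the profile.
rng = np.random.default_rng(0)
authentic = []
for _ in range(20):
    base = rng.normal(size=300)                                   # 300 frames of one movement
    authentic.append((base, base + 0.2 * rng.normal(size=300)))   # correlated partner signal
profile = build_profile(authentic)
suspect = (rng.normal(size=300), rng.normal(size=300))            # uncorrelated signals
print(looks_fake(suspect, profile))   # expected: True
```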

Disinformation has been with us as long as information has, and it was a problem long before deep fakes. But what makes today different from the past is the sheer scale of social media platforms and the role they command in the everyday lives of their billions of users. Previously, these big tech platforms may have been able to evade taking action against polarization-driving fake news or misinformation by arguing that they were neutral platforms and that the content posted on them was not their concern. Those days, as Farid explained, are long over. In order to ensure continued trust in institutions, companies and individuals need to foster an environment in which users are informed and aware of the information landscape they navigate online. Only then can we avoid an outcome in which the “liar’s dividend,” the ability of any person or group to dismiss photo or video evidence that casts them in a bad light as “fake” regardless of its authenticity, rules the day.

Visit the event page for more.


  • “Belief in conspiracies erodes trust in government and institutions.” (Hany Farid 15:06)
  • On average, people believe slightly less than half of the true information presented to them about COVID, and about 18 percent of the false coronavirus information they encounter. (16:56)
  • “You can’t really think about the problem of creating synthetic media if you don’t think about what it is building on, and what it is building on is an existing ecosystem of lies, conspiracies, half-truths, and misinformation spinning around the Internet.” (Hany Farid 19:00)
  • Creation of deep-fake images is a back-and-forth iterative process between a “generator” and a “discriminator.” The generator begins with a random assortment of pixels, and the discriminator, which has access to real images of people, rejects the images the generator sends until the generator produces one that the discriminator will accept as a person. (21:45)
  • A facial mannerism or expression that a person regularly uses can serve as a “soft biometric” for spotting deep fakes in which that expression is either absent or distorted. (32:44)
  • “The single most common misuse of deep fakes is non-consensual pornography.” (Hany Farid 26:19)
  • “We [] have a world where the scale and speed of social media is unprecedented. There are some 500 hours of YouTube video uploaded every minute.” (Hany Farid 49:49)
  • “For a variety of reasons, we’re becoming an increasingly polarized public, so when somebody creates a fake video of Biden or Trump or Pelosi or Cruz, there are people out there who are eager and willing not just to believe it, but to share it and amplify it.” (Hany Farid 50:05)
  • The battle between pioneering new deep-fake creation methods and pioneering ways to detect them will continue for decades to come. (52:04)
  • The days of social media companies being able to dismiss responsibility for dangerous or fake content by saying that they are nothing but neutral platforms are long gone. (52:39)