Ten Years Hence 2021 – Automated Approaches to Detecting, Attributing, and Characterizing Falsified Media


Featured Speakers: 

  • Matt Turek, Program Manager, DARPA’s Information Innovation Office (I2O)
  • James S. O’Rourke, Teaching Professor of Management, University of Notre Dame

As he began the fourth lecture in Notre Dame’s Ten Years Hence series on Fake News, Deep Fakes, and Disinformation, Matthew Turek of the U.S. government’s Defense Advanced Research Projects Agency made clear how past surprises have informed present policy on emerging technologies. The agency, most often referred to by its acronym DARPA, was formed in response to the Soviet Union’s launch of Sputnik, the first artificial satellite, a feat that took the U.S. government by surprise and that it could not immediately match technologically. In the decades since, DARPA has invested in new and emerging technologies – everything from the Internet to the messenger RNA technology used in many COVID-19 vaccines – to make sure Washington is prepared to meet the technological challenges of tomorrow, challenges that have recently extended into the disinformation space.

As Turek walked the audience through the technological processes behind synthetic media such as “deep fake” videos, he made clear that there exists a spectrum of such technologies. The most cutting-edge deep fakes and synthetic media require Hollywood-level investments of time and money to produce, making them the near-exclusive province of nation-states and other large, well-resourced entities. At the same time, many of the researchers and organizations pioneering synthetic media technologies for non-nefarious purposes – including commercial interests and entertainment – have released their methods as open-source software in the interest of scientific transparency, which makes it easier for malicious actors or groups with far fewer resources than a nation-state to produce convincing falsified media.

As it becomes easier for such individuals and groups to engage in malicious deep-fake activities, Turek voiced concern that society will see far more targeted personal attacks, fabricated news stories about events that never happened, and so-called “ransomfake” extortions (a portmanteau of “ransomware” and “deepfake”), in which individuals could be forced to pay blackmail to avoid identity attacks built on synthetic or otherwise falsified media. To curb these threats, media-authentication techniques have to keep pace with deep-fake technologies; in furtherance of that goal, DARPA has invested in a number of digital-forensics processes designed to separate truth from fiction and to inform users of any ways in which media they encounter has been altered.

Here the use of AI is critical. Turek explained that automated techniques for producing falsified media often leave behind tell-tale “statistical fingerprints” that are relatively easy for machine learning to recognize but all but invisible to the human eye. Assessing a video or image’s digital integrity (are the pixels and representations consistent?) is essential to gauging authenticity. Just as important to distinguishing fact from fiction are the physical (are the laws of physics violated in the image or video?) and semantic (is a hypothesis about a piece of media disputable?) elements of media integrity. And the growth in manipulated media is not limited to deep-fake videos; it is also appearing in text and audio, for which similar automated authentication approaches are under development. In sum, hard work is being done to remedy a daunting technological problem with technological solutions.
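As a loose illustration of what a “statistical fingerprint” check might look like in practice – not DARPA’s method, just a common digital-forensics heuristic – the Python sketch below measures how much of an image’s spectral energy sits in the highest frequencies, where upsampling artifacts from many generative pipelines tend to concentrate. The function name, the 0.75 radius cutoff, and the idea of comparing the score against a corpus of known-authentic images are all illustrative assumptions.

```python
# Illustrative sketch only: many automated generation pipelines leave periodic,
# high-frequency artifacts that are easier to spot in the frequency domain
# than by eye.
import numpy as np

def spectral_fingerprint_score(image: np.ndarray) -> float:
    """Return the fraction of spectral energy in the highest frequencies.

    `image` is a 2-D grayscale array in [0, 1]. A score that is unusually
    high or low relative to a corpus of known-authentic images could flag a
    candidate for closer forensic review; the cutoff below is an assumption.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    high_band = radius > 0.75 * min(h, w) / 2  # outer ring of the spectrum
    return float(spectrum[high_band].sum() / spectrum.sum())

# Usage, with random noise standing in for a real decoded video frame:
frame = np.random.rand(256, 256)
print(f"high-frequency energy fraction: {spectral_fingerprint_score(frame):.4f}")
```

On a single image the raw score means little; in a real forensic system, features like this would be computed over many frames and fed to a trained classifier rather than compared against a fixed threshold.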

Visit the event page for more.


  • “We’ve seen a really rapid growth over the years in automated techniques for creating and manipulating media.” (Matthew Turek 11:20)
  • Many companies and researchers pioneering synthetic image creation techniques for non-nefarious purposes (gaming, entertainment, various commercial interests) publish their source code in the interest of scientific transparency, but in doing so they open up their technology for appropriation by malicious actors. (15:20)
  • The creation of a deep-fake video involves footage of an original face, footage of the face one wants to replace it with, and a neural network trained to swap the two; a minimal illustrative sketch of this setup appears after this list. (22:20)
  • “One of the challenges in the face of all these media manipulation techniques is: How do we authenticate media?” (Matthew Turek 25:45)
  • The essential elements of media integrity are digital (are pixels/representations consistent?), physical (are the laws of physics violated in the image or video?), and semantic (is a hypothesis about a piece of media disputable?). (28:50)
  • On Digital Forensics: “Some of these automated techniques like deepfakes…leave ‘statistical fingerprints’ in place that are relatively easy for machine learning approaches to find [but] are not visually identifiable by a human.” (Matthew Turek 29:10)
  • “In general, large nation states can invest Hollywood-level resources in creating manipulated media.” (Matthew Turek 38:55)
  • “There are some unscrupulous [scientific] researchers who will manipulate some of their images…to try and show that they got an experimental result that they didn’t.” (Matthew Turek 45:25)
  • In addition to deep-fake videos, growth in manipulated media is being seen in text and audio files. (47:15)
  • Future threats stemming from falsified media include targeted personal attacks, creation of fake news stories for events that never happened, and “ransomfake” extortions, where malicious actors could perform blackmail attacks using synthetic media. (53:24)
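The face-swap pipeline summarized above (22:20) is, in its widely circulated open-source form, an autoencoder with one shared encoder and a separate decoder per identity; the swap comes from decoding one person’s encoding with the other person’s decoder. The PyTorch sketch below is a minimal illustration of that idea under assumed layer sizes and a 64×64 input crop; it is not the speaker’s implementation, and training (reconstructing each identity’s frames through its own decoder) is only described in comments.

```python
# Minimal sketch of the classic open-source face-swap setup: a shared encoder
# learns a common face representation, and one decoder is trained per identity.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                          # latent face code
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training (not shown) would reconstruct person A's frames through decoder_a
# and person B's through decoder_b, sharing the encoder. The "swap" is then:
face_a = torch.rand(1, 3, 64, 64)       # stand-in for a cropped frame of person A
swapped = decoder_b(encoder(face_a))    # person B's likeness driven by A's expression
print(swapped.shape)                    # torch.Size([1, 3, 64, 64])
```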