As he began the fourth lecture in Notre Dame’s Ten Years Hence series on Fake News, Deep Fakes, and Disinformation, Matthew Turek of the U.S. government’s Defense Advanced Research Projects Agency made clear how past surprises have informed present policy on emerging technologies. The agency, most often referred to by its acronym DARPA, was formed in response to the Soviet Union’s launch of Sputnik, the first artificial satellite, a feat that caught the U.S. government by surprise and that it could not match technologically at the time. In the decades since, DARPA has invested in new and emerging technologies – everything from the Internet to the messenger RNA vaccine type used in many COVID shots – to make sure Washington is prepared to meet the tech challenges of tomorrow, challenges which recently have extended into the disinformation space.
As Turek walked the audience through the technological processes of developing synthetic media such as “deep fake” videos, he made it clear that there exists a spectrum of such technologies. Some of the more cutting-edge deep fakes and synthetic media require Hollywood-like levels of time and money to produce, making them the near-exclusive province of nation-states and other large entities. At the same time, many of the researchers and organizations pioneering synthetic media technologies for non-nefarious purposes – including commercial interests and entertainment – have shared their methods as open-source technology in the interest of scientific transparency, which makes it easier for malicious actors or groups with far fewer resources than a nation-state to produce convincing falsified media.
As it becomes easier for such individuals and groups to engage in malicious deep-fake activities, Turek voiced concerns that society would see far more targeted personal attacks, fabricated news stories about events that never happened, and so-called “ransomfake” extortions (a portmanteau of “ransomware” and “deepfake”), in which individuals could be forced to pay blackmail to avoid identity attacks using synthetic or otherwise falsified media. To curb these threats, media-authentication processes must develop at a pace matching that of deep-fake technologies; in furtherance of that goal, DARPA has invested in a number of digital-forensics processes designed to separate truth from fiction and inform users of any ways in which media they encounter have been altered.
Here the use of AI is critical. Turek explained that automated techniques for producing falsified media often leave behind tell-tale “statistical fingerprints” that are readily recognizable by machine learning but invisible to the human eye. Assessing a video or image’s digital integrity (are pixels and representations consistent?) is essential to gauging authenticity. At the same time, the physical (are the laws of physics violated in the image or video?) and semantic (is a hypothesis about a piece of media disputable?) elements of media integrity are just as important to distinguishing fact from fiction. And growth in manipulated media is not limited to deep-fake videos; it is also appearing in text and audio, for which similar automated approaches to assessing authenticity are underway. In sum, hard work is being done to remedy a daunting technological problem with technological solutions.
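To make the idea of a “statistical fingerprint” concrete, the sketch below shows one classic, simple digital-integrity check: Error Level Analysis, which recompresses a JPEG and differences it against the original so that regions with an inconsistent compression history (a common artifact of splicing) stand out. This is an illustrative stand-in, not DARPA’s actual forensics pipeline; the file names and quality setting are assumptions, and modern detectors rely on far more sophisticated learned models.

```python
# A minimal Error Level Analysis (ELA) sketch, assuming Pillow is installed.
# It highlights pixel regions whose JPEG compression history differs from
# the rest of the image, one crude kind of "statistical fingerprint".
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")

    # Re-encode the image at a known JPEG quality...
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)

    # ...and difference it against the original. Edited or spliced regions
    # often recompress differently and show up brighter in the residual.
    diff = ImageChops.difference(original, recompressed)

    # Amplify the residual so it is visible for human inspection.
    max_channel = max(hi for _, hi in diff.getextrema()) or 1
    scale = 255.0 / max_channel
    return diff.point(lambda px: int(px * scale))

if __name__ == "__main__":
    # "suspect.jpg" is a hypothetical input; any JPEG works.
    error_level_analysis("suspect.jpg").save("suspect_ela.png")
```

A technique like this maps onto the “digital integrity” question above (are the pixels consistent?); the physical and semantic checks Turek described require richer models that reason about lighting, geometry, and the plausibility of what a piece of media claims to show.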
Visit the event page for more.