By now, many of us are familiar with the problem of “deep fake” images and videos: inauthentic creations designed to look like the real thing and used for a variety of purposes, some benign and entertaining, others malicious and harmful. But how are these images created? And how can we detect them in order to distinguish fact from fiction? These and other critical issues in the disinformation landscape were the topics covered by Hany Farid, a professor at the University of California, Berkeley, with a joint appointment in Electrical Engineering & Computer Sciences and the School of Information, during a lecture as part of ND’s 2021 Ten Years Hence series.
In a compelling presentation on the mechanics of deep fakes, Farid walked the audience through how the process of making doctored images and video works. It involves a back-and-forth computerized “dialogue” between a “generator” and a “discriminator.” The generator begins with a random assortment of pixels, and the discriminator, which has access to real images of people, rejects each image the generator sends until the generator produces one the discriminator will accept as a real person. Once that happens, the deep fake has reached a level of sophistication at which the human eye will likely be fooled.
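The back-and-forth Farid described is, in machine learning terms, a generative adversarial network (GAN). The sketch below is a minimal, hedged illustration of that training loop; the use of PyTorch, the network sizes, and the training details are assumptions for illustration (the lecture does not prescribe any particular tooling), and random tensors stand in for the real face photos a production system would train on.

```python
# Minimal generator/discriminator "dialogue" (GAN training loop) sketch.
# Assumption: PyTorch; random tensors stand in for real face images.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

# Generator: turns a random assortment of numbers into a candidate image.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks (1 = real, 0 = fake).
D = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, img_dim)         # placeholder for real face photos
    fake = G(torch.randn(32, latent_dim))   # generator's current attempt

    # Discriminator learns to accept real images and reject the generator's.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator learns to produce images the discriminator will accept as real.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

As the loop repeats, the generator’s output drifts from noise toward images the discriminator can no longer reliably reject, which is the point at which, as Farid noted, a human viewer is likely to be fooled as well.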
In recent years, the technology for creating deep fakes has become more advanced and easier to access, which means building reliable detection mechanisms for these kinds of images and videos has never been more important. Fortunately, Farid and others have pioneered methods that allow users to spot inauthentic media through a variety of means. To name but one, people tend to make specific body or facial movements while speaking, sometimes correlated with another movement that occurs before, simultaneously, or afterward. These movements can serve as a “soft” biomarker that one would expect to be present in all authentic footage of a person. Footage that lacks these movements or features, or that presents two features as occurring, say, one after another when they should happen simultaneously, can thus be identified as a likely deep fake.
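As a rough illustration of how such a soft biomarker might be checked, the sketch below correlates two hypothetical per-frame movement signals (an eyebrow raise and a head movement expected to follow it at a fixed lag) and flags footage whose timing does not match the person’s expected profile. The signal names, lag, and threshold are illustrative assumptions, not Farid’s published technique.

```python
# Hedged sketch of a "soft biomarker" timing check on per-frame movement
# signals (hypothetical signals and thresholds; not Farid's actual method).
import numpy as np

def correlation_at_lag(a: np.ndarray, b: np.ndarray, lag: int) -> float:
    """Pearson correlation between signal a and signal b shifted by `lag` frames."""
    if lag > 0:
        a, b = a[:-lag], b[lag:]
    elif lag < 0:
        a, b = a[-lag:], b[:lag]
    return float(np.corrcoef(a, b)[0, 1])

def flag_as_suspect(eyebrow: np.ndarray, head_pitch: np.ndarray,
                    expected_lag: int = 5, threshold: float = 0.6) -> bool:
    """Flag footage whose movement correlation falls below the person's profile."""
    return correlation_at_lag(eyebrow, head_pitch, expected_lag) < threshold

# Toy usage: per-frame signals that a face tracker might extract from video.
frames = 300
eyebrow = np.sin(np.linspace(0, 20, frames)) + 0.1 * np.random.randn(frames)
head_pitch = np.roll(eyebrow, 5) + 0.1 * np.random.randn(frames)  # follows 5 frames later

print(flag_as_suspect(eyebrow, head_pitch))                  # expected timing -> False
print(flag_as_suspect(eyebrow, np.random.randn(frames)))     # missing correlation -> True
```

In practice, a detector would build the expected profile from verified footage of the person and compare many such movement pairs, but the underlying idea is the same: authentic video should carry the person’s characteristic timing, and video that does not is suspect.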
Disinformation has been with us as long as information has, and it was a problem long before deep fakes. But what makes today different from the past is the sheer scale of social media platforms and the role they command in the everyday lives of their billions of users. Previously, the big tech platforms may have been able to evade taking action against polarization-driving fake news or misinformation by arguing that they were neutral platforms and that the content they hosted was not their concern. Those days, as Farid explained, are long over. To ensure continued trust in institutions, companies and individuals need to foster an environment in which users are informed and aware of the information landscape they navigate online. Only then can we avoid an outcome in which the “liar’s dividend,” the ability of any person or group to dismiss photo or video evidence that casts them in a bad light as “fake” regardless of its authenticity, rules the day.
Visit the event page for more.