An Interview on Deepfakes

With an overload of visual information online, it is becoming easier for media consumers to be duped by the ever-growing threat of deepfakes.

Faked and altered videos, images and audio – sometimes referred to as synthetic media – are rapidly becoming the latest obstacle in our quest for accurate information. Edited photographs have warped reality since at least World War II, and programs like Photoshop made the process far easier.

Today, artificial intelligence makes manipulated videos as convincing as manipulated photos. These videos are known as “deepfakes” – maliciously used synthetic media that are partly or wholly generated by AI.

To keep up with the latest mutation of deepfake technology, experts around the world are racing to find solutions that can stave off what author Nina Schick calls the “infocalypse”.

John Sohrawardi is a researcher who focuses on various aspects of deepfake technology and its effects on journalism, privacy and security at the Rochester Institute of Technology in the USA.

In an interview with The MediaSphere, John talked about some of the issues that society must prepare for in the wake of new deepfake technology.

Do you believe that deepfakes really are an existential threat considering the solutions in place and society’s current level of media literacy?

“Deepfakes are an existential threat, but they somewhat go hand-in-hand with typical disinformation and misinformation. While there are solutions in place, the majority of society does not follow them. Thus, deepfakes do not even need to be convincing in order for them to be spread as real – especially since, a lot of the time, we are driven by our own predetermined biases.”

How and why do deepfakes differ from realistic photoshopped images and documents?

“Deepfakes differ from past Photoshop and CGI in that they are computer-generated, so they do not leave the same signatures. They can potentially be made more easily and without much prior training, and do not necessarily require a keen eye.”

What can we learn from a past of Photoshop when preparing for a world of deepfakes?

“Well, the past of Photoshop doesn’t give us that much hope. However, in the research and journalism world, there is indeed hope, as there has been research into detecting photoshopped manipulations, and journalists are trained to cross-examine information. Having said that, not all journalists are as well trained, and even some well-informed journalists can potentially be caught unaware by a well-crafted and connected forgery.”