Consider the billions of users that are bamboozled by ‘bad information’ each day. The typical information diet is a smorgasbord of inaccurate and misleading content with a side of reality. If you thought that it was already difficult for people to reach a consensus on how to represent or perceive the world, brace yourself. It’s about to get a whole lot worse.
According to author Nina Schick, who specialises in how technology and artificial intelligence (AI) are reshaping politics, we are already living in a technological dystopia on account of our severely polluted information ecosystem. She labels the culmination of misinformation, disinformation and fake news that we experience online as the ‘infocalypse’.
Schick sees potential for the infocalypse to worsen. The next generation of mis- and disinformation seems likely to shape a world where real and fake media become indiscernible. This chilling prospect will be achieved through advances in ‘deepfakes’ – images, video and audio that are manipulated or wholly generated by AI and used maliciously [1].
Here’s the annoying part about this existential threat: it’s hilarious. At least it can be. It’s hard not to smirk when you see the faces of Sylvester Stallone and Arnold Schwarzenegger superimposed into a scene of Step Brothers. Or when Mr Bean’s face replaces Charlize Theron’s in a Dior commercial.
YouTubers have reaped millions of views for their face-swap creations using free, fully automatic AI software. An original video and a large set of images of the face to be swapped in are fed into the software. Frame by frame, the AI modifies the original video until a convincing deepfake is produced [2].
AI learns facial movements and patterns through a process of ‘deep learning’, which loosely emulates functions of the human brain, allowing the software to improve from examples much as humans learn from experience [1].
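The frame-by-frame pipeline described above can be sketched in a few lines of Python. This is only a minimal illustration of the idea, not real deepfake software: the ‘encoder’ and ‘decoder’ here are stand-in linear maps rather than trained deep neural networks, and every name in the snippet is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# A classic face-swap setup uses one shared encoder and one decoder per
# identity: the encoder captures features common to both faces, while
# each decoder reconstructs one specific person's face.
FEATURES = 32          # size of the shared latent representation
FACE_PIXELS = 64 * 64  # a flattened 64x64 face crop

encoder   = rng.standard_normal((FEATURES, FACE_PIXELS)) * 0.01
decoder_b = rng.standard_normal((FACE_PIXELS, FEATURES)) * 0.01  # person B

def swap_face(face_a: np.ndarray) -> np.ndarray:
    """Encode person A's face, then decode it as person B."""
    latent = encoder @ face_a  # compress to shared features
    return decoder_b @ latent  # reconstruct as the target face

def process_video(frames: list[np.ndarray]) -> list[np.ndarray]:
    """Frame by frame: swap the face in every frame of the clip."""
    return [swap_face(frame) for frame in frames]

# A short 'video' of random arrays standing in for face crops.
video = [rng.standard_normal(FACE_PIXELS) for _ in range(3)]
output = process_video(video)
print(len(output), output[0].shape)  # → 3 (4096,)
```

In real tools the encoder and decoders are deep networks trained on thousands of face images, which is what ‘deep learning’ refers to in the passage above; the frame loop, however, works exactly as sketched.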
Free AI software is beginning to challenge even the most expensive Hollywood CGI efforts to de-age actors, swap faces and bring performers back from the dead. An amateur used such software to improve upon a scene from Martin Scorsese’s The Irishman, a film that received widespread criticism for its mediocre CGI de-aging.
The results of deepfakes and CGI characters can be awe-inspiring and super entertaining, but so far, they are quite easy to spot. Regardless of budget, the mouth and eye movements of digitised characters have not yet been perfected. However, perfection is not far off.
It is important to remember that CGI and deepfake technology is still nascent. Yet the speed at which AI software is improving is intimidating given the threats deepfakes pose. When weaponised, the fun of deepfakes dissipates.
Without question, deepfakes will be used by bad actors and criminals to find new ways to dupe people online. Video evidence from security cameras, police body cameras and smartphones could be altered to wrongly accuse innocent people or get others off the hook. It may even be possible to manipulate the stock market by creating a deepfake of a CEO saying something controversial.
On a global scale, mis- and disinformation campaigns will have a compounded influence when using deepfakes to cause civil unrest. Whether the perpetrators are domestic or international, deepfakes are the next big thing in information warfare.
Deepfakes are already being misused in developed and developing nations. In India, for example, the politician Manoj Tiwari reached a wider audience when criticising his political opponents by releasing two deepfakes of the same original video: one in English and another in a Hindi dialect [3].
As it stands, the negative impact of deepfakes is felt mainly by women. The predominant category of deepfake production so far is non-consensual pornography – mostly featuring famous women – which is estimated to make up 96 percent of all deepfakes [1]. It may only be a matter of time before licentious users make non-consensual deepfake pornography of anybody, perhaps using images scraped from social media.
Avoiding future atrocities and controlling deepfake technology is no simple task. Companies developing deepfake detection technology will always be a step behind as deepfakes improve. As Schick puts it, ‘Deepfake detection is a constantly evolving game of cat-and-mouse. As AI detection tools get better, so too will deepfakes’ [1].
Most efforts to combat deepfakes focus on technological solutions, but some suggest more proactive strategies. Shifting the focus to educating the public and improving media literacy could be more pragmatic. Nudges, for instance, are a simple way to remind people to double-check and assess the content they consume.
Governmental intervention could be implemented to ensure the safe and responsible development and use of AI technology. Whatever the solutions might be, the time to act is now, while the technology is still early in its development.
There’s no telling just how significant an impact deepfakes could have. In an infocalypse already so ravaged by bad information, an outbreak of deepfakes could be the final push toward a world where it’s impossible to be certain whether media content is real or fake. Philosopher Harry G. Frankfurt sees truth as a critical ingredient for a society to flourish, survive and cope prudently with problems. He says:
‘It seems even more clear to me that higher levels of civilisation must depend even more heavily on a conscientious respect for the importance of honesty and clarity in reporting the facts, and on a stubborn concern for accuracy in determining what the facts are’ [4].
How then, in a world infected by deepfakes, would we satisfy the prerequisites of Frankfurt’s philosophy and function as a civilisation? The answer is that we couldn’t.
1. Schick, N. (2020). Deep Fakes and the Infocalypse. London: Monoray.
4. Frankfurt, H. G. (2006). On Truth. New York: Random House.