Synthetic Media & The Liar’s Dividend

We all contribute our fair share of lies. Some lies are innocent, told to escape danger or humiliation, and the white lies we tell to spare the feelings of others almost pass as virtuous.

Others, on the other hand, are malevolent liars – people who deliberately deceive for personal gain or simply to raise hell. With social media, the liar has never been more effective at spreading lies.

As artificial intelligence (AI) improves, synthetic media becomes more realistic and easier to produce. Synthetic media includes images, video and audio that are manipulated or wholly generated by AI [1]. It can range from Hollywood CGI effects to selfie filters to more concerning kinds of altered content.

Joseph Stalin, one of the founding fathers of synthetic media, rewrote history by editing comrades out of photographs when it suited his agenda. To successfully dupe his citizens, Stalin required a team of highly skilled practitioners with patience and attention to detail [2].

Now, anybody can make synthetic media with little to no skill at all! For the liar, the latest weapon in the arsenal of deception is deepfake technology.

Though still nascent, deepfake technology is the Photoshop of synthetic video. This AI software, often free of charge, lets users make people appear to say and do anything.

There are obvious consequences of synthetic media: fraud, blackmail, humiliation and so on. Yet there is another side to this coin that the liar can capitalise on. In a world where the fake can be passed off as real and the real dismissed as fake, the liar has plausible deniability for any piece of media that might expose controversy.

How will consumers discern true from false? How will they know whether image, video or audio evidence is synthetic or real? How will they know whether the accused is truthful? This is the liar’s dividend.

In the African nation of Gabon, the liar’s dividend almost paid off when rumours circulated about the mysterious absence of President Ali Bongo. To subdue speculation, Bongo appeared in a video released to the public as the nation’s annual New Year’s address.

Questions were asked about the subtle and unusual changes in Bongo’s facial expressions. Rumours flared up yet again, claiming that the video was a deepfake, and ultimately led to an attempted coup.

Eventually, the deepfake theory was debunked when it was confirmed that Bongo had suffered a stroke, which explained the changes in his face. He has since been seen in public [1].

In this case, uncertainty surrounding the deepfake conspiracy was the “dividend” that the military used to attempt to overthrow the government. This time, they failed. Next time, who knows?

Australia has been fortunate so far in that our national security has not yet been targeted with deepfakes or other synthetic media. We have, however, had a taste of it in typical Aussie fashion.

Advance Australia, an independent political movement, released a deepfake of Queensland Premier Annastacia Palaszczuk in a faked press conference [3]. Though crudely satirical and open about its fabricated nature, it serves as a gentle warning of how deepfakes can be used, especially as AI technology improves further.

The ubiquity of lying combined with social media is an intimidating prospect. Synthetic media is undoubtedly fuel for mis- and disinformation, leaving consumers sceptical perhaps even when they need not be.

A healthy balance of trust and scepticism in the media is critical for the clockwork of the mediasphere to run efficiently. Synthetic media can be the rust that stops the cogs from turning. This is the liar’s delight, and where the liar’s dividend pays off.


  1. Schick, N. (2020). Deep Fakes and the Infocalypse. London: Monoray.