
Breaking news: a video “leaks” of a famous terrorist leader meeting in secret with an emissary from a country in the Middle East. News organisations air the video as an “unconfirmed report”. American officials can neither confirm nor deny the authenticity of the video – a typically circumspect answer on intelligence matters.

The US president condemns the country that would dare hold secret meetings with this reviled terrorist mastermind. Congress discusses imposing sanctions. A diplomatic crisis ensues. Perhaps, seeing an opportunity to harness public outrage, the president orders a cruise missile strike on the last known location of the terrorist leader.


All of this because of a few seconds of film – a masterful fabrication.

In 2019, we will for the first time experience the geopolitical ramifications of a new technology: the ability to use machine learning to falsify video, imagery and audio that convincingly replicates real public figures.


“Deepfakes” is the term becoming shorthand for a broad range of manipulated video and imagery, including face swaps (identity swapping), audio deepfakes (voice swapping), deepfake puppetry (mapping a target’s face to an actor’s for facial reenactment), and deepfake lip-synching (synthetic video of a person’s face generated to match an audio file). The term was coined in December 2017 by a Reddit user of the same name, who used open-source artificial intelligence tools to paste celebrities’ faces on to pornographic video clips. A burgeoning community of online deepfake creators followed suit.

Deepfakes will continue to become easier to make and more sophisticated as developers create better AI and new techniques for falsifying video. The telltale signs of a faked video – subjects not blinking, flickering of the facial outline, over-centralised facial features – will become less obvious and, eventually, imperceptible. Ultimately, maybe in a matter of a few years, it will be possible to synthetically generate footage of people without relying on any existing footage. (Current deepfakes need stock footage to provide the background for swapped faces.)
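One of the telltale signs mentioned above, unnatural blinking, has been used by researchers as a simple detection signal. A minimal sketch of the idea follows, using the "eye aspect ratio" (EAR), a standard measure that drops toward zero when an eye closes; the landmark coordinates and thresholds here are hypothetical examples, and a real detector would take landmarks from a facial-landmark model run on each video frame.

```python
# Toy illustration of a blink-based deepfake signal (not a production
# detector): early deepfakes often showed unnaturally low blink rates,
# because training data rarely included closed-eye images.
from math import dist

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around the eye contour.
    Returns the ratio of vertical to horizontal eye opening;
    it drops sharply toward 0 when the eye closes."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = dist(p2, p6) + dist(p3, p5)
    horizontal = dist(p1, p4)
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2):
    """Count eye-closure events in a per-frame EAR series.
    A suspiciously low count over a long clip is one weak
    signal that the footage may be synthetic."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= threshold:
            closed = False
    return blinks

# Hypothetical per-frame EAR values: two dips below the threshold.
sample_ears = [0.30, 0.31, 0.10, 0.09, 0.30, 0.32, 0.12, 0.31]
print(count_blinks(sample_ears))  # prints 2
```

On its own this signal is weak and easily defeated once generators are trained on closed-eye footage, which is exactly why the article argues the detection arms race will demand ever better algorithmic methods.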


Perpetrators of co-ordinated online disinformation operations will gladly incorporate new, AI-powered digital impersonations to advance political goals, such as bolstering support for a military campaign or swaying an electorate. Such videos may also be used simply to undermine public trust in media.

Aside from geopolitical meddling or disinformation campaigns, it’s easy to see how this technology could have criminal commercial applications, such as manipulating stock prices. Imagine a rogue state creating a deepfake that depicts a CEO and CFO furtively discussing missing expectations in the following week’s quarterly earnings call. Before releasing the video to a few journalists, they would short the stock – betting on the stock price plummeting when the market overreacts to this “news”. By the time the video is debunked and the stock market corrects, the perpetrators have already made off with a healthy profit.

Perhaps the most chilling realisation about the rise of deepfakes is that they don’t need to be perfect to be effective. They need to be just good enough that the target audience is duped for just long enough. That’s why human-led debunking and the time it requires will not be enough. To protect people from the initial deception, we will need to develop algorithmic detection capabilities that can work in real time, and we need to conduct psychological and sociological research to understand how online platforms can best inform people that what they’re watching is fabricated.


In 2019, the world will confront a new generation of falsified video deployed to deceive entire populations. And we may not realise the video is fake until we’ve already reacted – maybe overreacted. Indeed, it may take such an overreaction for us all to consider how we relate to fast-moving information online, not only from a technological and platform point of view, but from the perspective of everyday citizens and the mistaken assumption that “seeing is believing”.

Yasmin Green is the director of research and development for Jigsaw, an Alphabet company focused on security
