If you have already heard of Deepfakes, you may also have paired the search term with your favorite celebrity. If you haven’t, you probably shouldn’t do this at work.

Deepfakes is an artificial-intelligence-based program that allows the creation of fake videos of people that are nearly indistinguishable from real footage.

The current applications of the program are at best … morally gray.

Although much of the generated footage can still be identified as fake at the moment, distinguishing it will become increasingly difficult as the algorithm “learns” and is trained for longer periods. The fake footage can even be combined with A.I.-generated audio from programs like Lyrebird to become even more convincing.

Media outlets have already speculated about nefarious uses for A.I.-generated sound bites and footage, from fabricating audio of statements that were never made to disrupting entire national democratic processes and spreading propaganda through generated footage of powerful figureheads.

Those who have kept up with the recent Cambridge Analytica and Facebook scandals may argue that nefarious political organizations won’t need A.I.-generated footage and audio clips to disrupt political processes, but with these tools they will certainly become far more effective at spreading fake news and propaganda to fit their agendas.

See for yourself what these programs are capable of (safe for work):

Who can take advantage of these programs?

The program is so accessible that any layperson or powerful organization can leverage it for lighthearted humor, debaucherous desires, or political influence and slander, as long as the hardware is available to them. Most of the computing power required is available in the lower-end graphics cards found in many people’s computers.[1]

If the footage and audio generated by these programs become indistinguishable from live footage, what are the implications?

If a firm like Cambridge Analytica was able to manipulate an entire population simply with well-placed ads on a social media network, imagine what fake video and audio propaganda could do to disrupt the political status quo and influence people’s beliefs and opinions.

Not only can fabricated video and audio be used to smear people in power, but those who have said or done things that would be career-ending or even criminal gain plausible deniability. With these programs so prevalent, anyone can argue that a recorded statement was artificially generated, falsified from what actually happened, or never occurred at all.

This short piece from NPR on Lyrebird summarizes how generated media can destroy credibility and leave us wondering whom we can really trust.[2]

“Eighteen months ago when that audio recording of President Trump came out … if that was today, you can guarantee that he would have said ‘it’s fake,’ and he would have had some reasonable credibility in saying that as well.”

Knowing what President Trump calls fake news today, it can be hard to know whom to trust.

If the Steele Dossier, allegedly containing the infamous Trump pee tape, is ever verified by the Justice Department, would President Trump still have plausible deniability?

Although digital forensics experts will have ways to verify the authenticity of video and audio, the time and resources required are far from trivial. The need for a reliable way to distinguish reality from fabricated footage is becoming very real.