After the 2016 election, the public learned to be wary of social media, where tricksters masked their identities to spread false and inflammatory information. Now, experts warn that video footage — once the gold standard for verification — may be manipulated ahead of 2020.

So-called “deep fake” videos don’t require much money, time, or specialized training to make. Free software lets users swap faces in videos, a capability that early adopters applied mostly to X-rated content. Researchers are now racing to build detection tools before such videos surface in politics.

Deep-fake videos have made limited political appearances abroad, notably in Belgium, where a socialist party in May circulated a doctored video of President Trump in an effort to "start a public debate" about climate change.

“This kind of technology can be weaponized,” warned Siwei Lyu, a computer scientist at the State University of New York at Albany. "[Adversaries] can run this software and change the politician’s face onto the actor’s face … causing in the short term some sort of chaos or confusion."

“For a limited period of time, that will affect people’s opinion," Lyu said. "Users tend to be more attracted to something that is sensational, exciting, and unusual, regardless of the truthfulness of the video."

Lyu is readying for public release a second version of detection software he developed in late 2018. The new version examines face pixels: the most popular deep-fake software stretches the swapped face images, leaving warping artifacts that algorithms can detect. An earlier program focused on blink rates, since fabricated faces didn’t blink as often as people in authentic footage.
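Lyu's own detectors aren't public, but the blink-rate idea can be sketched with the standard eye-aspect-ratio (EAR) measure computed from facial landmarks. The sketch below assumes landmark extraction has already been done (in practice with a library such as dlib or OpenCV, omitted here); the threshold and frame-count values are illustrative assumptions, not Lyu's parameters.

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six (x, y) landmarks ordered p1..p6 around one eye:
    (|p2-p6| + |p3-p5|) / (2 * |p1-p4|). The two vertical distances
    collapse when the eye closes, so the ratio drops sharply."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks as runs of at least `min_frames` consecutive
    frames whose EAR falls below `threshold` (illustrative values)."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks
```

A clip whose blink count falls far below a typical human baseline (adults blink roughly 15–20 times a minute) would then be flagged as suspect, which is the kind of statistical tell early face-swap software left behind.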

But Lyu said technology continues to advance rapidly. "The quality of the fake videos improves, and that will be more difficult to detect," he said.

The most infamous program used to create deep-fake videos is FakeApp, widely distributed last year before a concerted censorship effort by web platforms.

As FakeApp's popularity surged, Reddit suspended an account and subreddit used to distribute links and tips, and FakeApp's webpage was taken down, forcing people to use less conventional download venues.

The FakeApp creator's identity is not widely known, and they did not respond to an email requesting comment.

Other similar apps have cropped up in the meantime. A man who narrated popular YouTube tutorials for FakeApp, for example, has turned to promoting DeepFaceLab, a comparable program.

Ravi Ramamoorthi, director of the Center for Visual Computing at the University of California, San Diego, said developers likely could make similar programs without much difficulty.

“Within the academic literature, there have been many papers on face swapping, using generative adversarial networks (GANs) to create faces, synthesizing face expressions synced to a given audio track,” he said. “I'm presuming this app was created by a user synthesizing some of these technologies, but it wouldn't be hard for researchers or those interested in the area to create other such similar software."
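The adversarial setup Ramamoorthi describes can be illustrated with a deliberately minimal toy, not face-synthesis code: here the "generator" is a single number and the "discriminator" a logistic classifier, but the alternating training loop has the same shape as in the GANs used to create faces. Everything below (the toy target value, learning rate, step count) is an illustrative assumption.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_toy_gan(real_value=3.0, steps=300, lr=0.1):
    """Minimal 1-D GAN. The generator's 'sample' is just its single
    parameter theta; the discriminator D(x) = sigmoid(w*x + b) is
    trained to score the real value high and the generated value low,
    while the generator is trained (non-saturating loss -log D(theta))
    to push its sample toward the region D scores as real."""
    theta, w, b = 0.0, 0.0, 0.0
    for _ in range(steps):
        x_real, x_fake = real_value, theta
        # Discriminator step: descend -[log D(x_real) + log(1 - D(x_fake))]
        s_r = sigmoid(w * x_real + b)
        s_f = sigmoid(w * x_fake + b)
        w -= lr * (-(1.0 - s_r) * x_real + s_f * x_fake)
        b -= lr * (-(1.0 - s_r) + s_f)
        # Generator step: descend -log D(theta); gradient is -(1 - D) * w
        s_f = sigmoid(w * theta + b)
        theta -= lr * (-(1.0 - s_f) * w)
    return theta
```

In real face-swapping software the generator is a deep convolutional network producing images and the discriminator a network judging their realism, but the same two-player optimization drives the generated output toward indistinguishability from real footage, which is exactly what makes detection hard.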

As video-editing technology advances, questions are emerging about its legal implications.

Mary Anne Franks, a University of Miami law professor who successfully campaigned for “revenge porn” laws in recent years, said that "we urgently need to re-examine our laws of defamation and other tort law relating to reputational injury."

Franks, president of the Cyber Civil Rights Initiative, argues that the usual free speech considerations are different with phony videos.

"The long-standing ‘breathing room’ justification for allowing false speech has depended in part on the relative ease of being able to detect truth from falsity and on the ability of misrepresented individuals to correct the record," Franks said. "Both of these assumptions are called into question by the emergence of easily accessible, increasingly sophisticated 'deep fake' technology. The falsity of such imagery is in many cases impossible to detect, at least before it has made its initial and often indelible impact."

Lyu said he’s guardedly optimistic that when the videos do arrive in U.S. politics, they will be debunked quickly.

“We still have roughly two years' time and technology can advance very fast,” he said. “But with the awareness of the public, the media, government, and tech companies — and also research — we may, to a certain extent, have this problem under control.”