Less than a month after Donald Trump was improbably elected the 45th president of the United States, a strange story began to make its way across social media. In the quaint days before Russia’s dissemination of fake-news stories in the interest of facilitating Trump’s victory became front-page news, a 28-year-old named Edgar Maddison Welch began reading about a pizzeria in Washington, D.C., that housed young children as sex slaves in a devilish operation masterminded by the recently vanquished Democratic candidate for president, Hillary Clinton. So Welch decided to drive the six or so hours up from his home in Salisbury, North Carolina, to Comet Ping Pong in northwest D.C., where he opened fire with an AR-15.

The Comet Ping Pong story, and the even more disturbing news of the Kremlin’s role in our election, merely underscore fake news’s rapid ascent from an amorphous notion to perhaps the most significant digital epidemic facing the media, government, and, at the risk of sounding mildly hysterical, democracy itself. Pakistan’s defense minister, confused by a fake-news story, raised the prospect of nuclear retaliation against Israel. (Recall that Michael Flynn Jr., the son of Trump’s national security adviser, shared the Comet Ping Pong story on Twitter.) Meanwhile, our current president spent virtually his entire campaign inventing or spreading fabricated stories, such as his suggestion that Ted Cruz’s father was involved in the plot to assassinate John F. Kennedy (he wasn’t) and his pronouncement that violent crime was at an all-time high in the U.S. (crime rates, while rising slightly in the last year, are near a 20-year low). While all of these stories were fabricated in various ways, they shared one technological commonality: they were almost entirely text-based. And that is about to change.

At corporations and universities across the country, incipient technologies appear likely to soon obliterate the line between real and fake. In the simplest terms, audio and video technology is becoming so sophisticated that it will be able to replicate real news—real TV broadcasts, for instance, or radio interviews—in unprecedented, and truly undetectable, ways. One research paper published last year by professors at Stanford University and the University of Erlangen-Nuremberg demonstrated how technologists can take recorded video of someone talking and change the speaker’s facial expressions in real time. Their technology could take a news clip of, say, Vladimir Putin and alter his expressions on the fly in ways that are hard to detect. Indeed, in a video demonstrating the technology, the researchers show how they manipulated Putin’s facial expressions and responses, among those of other public figures.

This is eerie, to say the least. But it’s only one part of the future fake-news menace. Similar technologies have been in the works at universities and research labs for years, but none could pull off what computers can do today. Take, for example, “The Digital Emily Project,” a study in which researchers created digital actors that could be used in lieu of real people. For years, the results were crude and easily detectable as digital re-creations. But technologies now used by Hollywood and the video-game industry have rendered digital avatars nearly indistinguishable from real people. (Go and watch the latest Star Wars to see if you can tell which actors are real and which are computer-generated. I bet you can’t tell the difference.) You could imagine a political group using that technology to create a fake hidden-camera clip of President Trump telling Rex Tillerson that he plans to drop a nuclear bomb on China. The velocity with which news clips spread across social media would also mean that the administration would have frightfully little time to respond before a fake-news story turned into an international crisis.

Audio advancements may be just as harrowing. At its annual developers’ conference in November, Adobe showed off a new product that has been nicknamed “Photoshop for audio.” The tool lets users feed about 10 to 20 minutes of someone’s voice into the application and then type words that are spoken back in that exact voice. The resulting voice, which is assembled from the person’s phonemes—the distinct units of sound that distinguish one word from another in a language—doesn’t sound even remotely computer-generated. It sounds real. With this sort of technology, someone could feed one of Trump’s interviews or stump speeches into the application, then type sentences or paragraphs in his spoken voice. You could very easily imagine someone creating fake audio of Trump explaining how he dislikes Mike Pence, or how he lied about his taxes, or that he did indeed enjoy that alleged “golden shower” in the Russian hotel suite—and then circulating that audio around the Internet as a remark overheard on a hot microphone. Worse, you could imagine a scenario in which someone uses Trump’s voice to call another world leader and threaten some sort of violent action. And perhaps worst of all, as the quality of imitation improves, it will become increasingly difficult to discern what is real behavior and what isn’t.

Perhaps the scariest part is that, one day soon, this sort of technology will move beyond the academies and institutions to the point where you or I will be able to create fake digital clips as easily as regular people created fake-news stories during this election cycle. The Stanford technology that can manipulate a news clip in real time doesn’t need an array of high-end computers like those used by Pixar; it needs only a news clip from YouTube and the standard webcam on your laptop.