"The party told you to reject the evidence of your eyes and ears. It was their final, most essential command."

— George Orwell, "1984"

"Just remember, what you’re seeing and what you’re reading is not what’s happening."

— President Trump, 2018

Part 1: The Problem

The term “fake news” in its current, popular incarnation means “news that says bad things about me,” and has nothing to do with whether the news is factually correct. Fortunately for democracy, and perhaps even for sanity itself, reasoned debate about factual correctness turns on verifiability, not interpretation.

Unemployment figures, deaths at the border, military pay raises, the health of the stock market and civilian casualties in Iraq are matters of fact. If reported correctly, the only reasonable debate is about causes and responsibilities (i.e., who gets the blame or credit). The facts themselves should be unassailable, and labeling them “fake news,” regardless of which side of the political spectrum does it, is unacceptable.

But what happens when the facts themselves are legitimately called into question? What happens when we can no longer trust what’s right in front of our eyes and ears?

Trust me when I tell you that I’m no conspiracy theorist. In fact, I’ve written well over a million unapologetically sneering words debunking whacko nut jobs spinning all manner of patently absurd tall tales, and an equal number bemoaning the popular inclination to believe them. Much of the credulity is based on the well-known phenomenon of “confirmation bias,” the eagerness to believe anything that supports your world view. Thus, my inbox gets filled with irate emails slamming then-President Obama for proposing that “The Star Spangled Banner” be replaced as the national anthem by “I’d Like To Teach The World To Sing.” (Note to 20 or so million of you, with apologies to the rest: He didn’t really do that.) What people are willing to believe when it comes from “their side” is one of the most depressing aspects of being human in a civilized society.

The reason I want you to know that I’m not easily suckered by this kind of stuff is that I believe that there is a very real, very imminent threat to politics, civil society and perhaps the very foundation of democracy. Were I to lead off with that sentence, and you had an IQ safely out of double digits, you’d roll your eyes and turn the page (or whatever the metaphorical equivalent of page-turning is for an online column) and move on to the crossword puzzle. And I wouldn’t blame you a bit.

But the threat is real, and it goes by the name “deepfake.”

Stated simply, deepfake is the ability to create phony photos, audio recordings, or video footage of actual people that are so realistic it’s nearly impossible to tell that they were faked. Examples abound all over the internet, including one created by Jordan Peele of President Obama giving a speech warning about deepfakes. Watch it, and you’ll find it hard to believe it isn’t Obama saying what you’re seeing and hearing him say, but it’s a fabrication, made with his permission but not his participation.

Deepfakes were in the news recently when Scarlett Johansson spoke up in public about porn sites that cleverly superimpose the heads of stars, including hers, onto the bodies of porn actresses. This is a problem that affects hundreds of well-known female personalities. Some of the videos are so well done that they’ve affected careers and reputations. (In the interest of truth, which is after all what this column is all about, it should be said that the problem isn’t helped much by the fact that not all of the videos are fake. A lot of actresses have had home-made, private videos leak out and, when they do, it becomes that much harder to decry the fake ones, or to prove that they’re in fact fake.)

But while pseudo-porn diminishes us all, it isn’t going to bring down democracy. Other uses of deepfakes are a far more dire threat. Consider this scenario: The night before an election, a video surfaces of a Senatorial candidate caught by a hidden camera accepting his share of money from a massive sale of cocaine. The candidate loses in a landslide before the video is exposed as a fake.

Imagine a city on the edge of a race riot when a video is shown of the white mayor showering praise on the cops who are accused of shooting an unarmed black man in a back alley. How much time do you suppose will be spent investigating the video before the city explodes? For that matter, even if it’s exposed, how many people will still think it’s real? After all, the anti-vaccination movement is alive and well even though it was founded on data exposed as fake.

These are well-known problems that are already being discussed. But I’m worried about one that might be worse. While it’s a good thing that the public is becoming aware of these abilities to create fakes, and will therefore (I fervently hope) bring some skepticism to what they see, what happens when people start to doubt everything they see? It’s tough enough trying to convince people with biases that something that supports those biases is fake. How are we going to convince them that something that runs counter to their bias is real?

Right now we have politicians who don’t have any problem telling people that what’s right before their eyes is wrong. Thankfully, even the true believers are often (albeit not often enough) moved to dismiss these absurdities as the obvious lies they are.

But what happens as deepfakes become more realistic and difficult to detect? What stops a politician from saying, “That speech [or shooting or fight or bribe] you saw was faked” even when it wasn’t, especially if there are demonstrably faked videos all over the place?

The answer, to some, is simply to do a better job of detection, scientifically separating the real from the phony.

And this is where things start to get really scary.

The term “deepfake” is a portmanteau of “deep learning,” a powerful technique of artificial intelligence, and “fake.”

Deep learning is an iterative process in which a computer program gets smarter and smarter about a task that it’s given, perfecting its technique based on feedback regarding how well it’s doing. So if someone’s head is superimposed on another person’s body, and the result looks a little off because the movements don’t quite match, the deep learning program can attempt several methods of better synchronizing movement until the discrepancies diminish or disappear. And it can do that over and over again, hundreds of times per second, improving all the time.
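That feedback loop can be sketched in a few lines of Python. This is a toy illustration with made-up numbers, not a real deepfake pipeline: the “fake” is just a list of numbers, and the scoring function stands in for whatever measure of mismatch a real system would compute.

```python
import random

random.seed(0)  # make the toy run reproducible

TARGET = [0.2, 0.8, 0.5, 0.9]   # stands in for the "real" footage

def discrepancy(fake):
    """Feedback signal: total mismatch between the fake and the target."""
    return sum((f - t) ** 2 for f, t in zip(fake, TARGET))

def refine(fake, steps=5000, step_size=0.05):
    """Nudge the fake at random, keep any change that lowers the
    discrepancy, and repeat -- improving a little on every iteration."""
    score = discrepancy(fake)
    for _ in range(steps):
        candidate = [f + random.uniform(-step_size, step_size) for f in fake]
        candidate_score = discrepancy(candidate)
        if candidate_score < score:       # keep only improvements
            fake, score = candidate, candidate_score
    return fake, score

fake, score = refine([0.0, 0.0, 0.0, 0.0])
print(round(score, 4))   # close to zero after thousands of iterations
```

The point is the shape of the loop, not the arithmetic: generate, measure, adjust, repeat, with every cycle driven by feedback on how well the last attempt did.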

This ability is as frightening as it is fascinating. A number of efforts are underway, primarily within the Defense Department, to develop techniques for spotting deepfakes. One promising approach detects less blinking than would normally be expected, blinking being difficult to reproduce realistically in a deepfake. Another detects extremely faint changes in facial coloration caused by varying blood flow as the subject speaks or moves.

Despite the excitement surrounding these developments, I think all of these detection methods will fail, as will any new ones that are developed. The reason is simple: Any method for detecting discrepancies can also be used by a deep learning program to spot its flaws. As it refines its techniques, it can keep re-applying the detection method until no discrepancy is detected anymore. It’s like turning a gun back on the person who’s aiming it at you.
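The gun-turned-back-on-its-owner point can be made concrete with a deliberately simplified sketch. All the numbers here are hypothetical, and the “blink rate” stands in for whatever statistic a real detector would measure; the mechanism is what matters: a forger who can run the detector simply adjusts the fake until the detector stops firing.

```python
REAL_BLINK_RATE = 17.0   # hypothetical blinks-per-minute of real footage
TOLERANCE = 1.0          # how far off the detector will tolerate

def detector_flags(blink_rate):
    """Detection rule: flag footage whose blink rate looks unnatural."""
    return abs(blink_rate - REAL_BLINK_RATE) > TOLERANCE

def evade(blink_rate, step=0.5):
    """Re-run the detector after each tweak until it no longer fires --
    the detection rule itself becomes the forger's training signal."""
    while detector_flags(blink_rate):
        # move toward the statistic the detector treats as "real"
        blink_rate += step if blink_rate < REAL_BLINK_RATE else -step
    return blink_rate

fake_rate = evade(4.0)            # deepfakes initially blink far too rarely
print(detector_flags(fake_rate))  # prints False -- the detector is defeated
```

Any published detection rule can slot into the `while` condition the same way, which is why each new detector hands the forger its own defeat.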

In very short order, deepfake technology will get to the point where there are literally no differences between the fake and what the video would look like had it not been faked. Remember that we’re dealing with digital imagery. That means there’s no examination possible below the level of a single pixel; no errant brush strokes, no tiny hairs out of place. Once the individual pixels match what would have been real, there’s no place else to go with any detection method.

What we end up with is the diminishing primacy of facts and the weaponizing of artificial intelligence. When it becomes easy to fake facts, primary sources of information heretofore considered sacrosanct will lose their stature as the basis of truth. And once politicians, autocrats, and tyrants can plausibly cast doubt on evidence-based criticism, we’re in very serious trouble.

Next time, Part 2: The Solution (Fair warning…it’s not good news.)

Lee Gruenfeld is a managing partner of Cholawsky and Gruenfeld Advisory, as well as a principal with the TechPar Group in New York, a boutique consulting firm consisting exclusively of former C-level executives and "Big Four" partners. He was vice president of strategic initiatives for Support.com, senior vice president and general manager of a SaaS division he created for a technology company in Las Vegas, national head of professional services for computing pioneer Tymshare, and a partner in the management consulting practice of Deloitte in New York and Los Angeles. Lee is also the award-winning author of fourteen critically-acclaimed, best-selling works of fiction and non-fiction.