A year ago, you couldn’t go anywhere in Silicon Valley without being reminded in some way of Tristan Harris.

The former Googler was giving talks, appearing on podcasts, counseling Congress, sitting on panels, posing for photographers. The central argument of his evangelism—that the digital revolution had gone from expanding our minds to hijacking them—had hit the zeitgeist, and maybe even helped create it.

We were addicted to likes, retweets, and reshares, and our addiction was making us distracted and depressed. Democracy itself was faltering. Harris coined a series of phrases that became so popular they morphed into cliché. “The people behind the screen have a lot more power than the people in front of the screen,” he said, pithily explaining the power of engineers. He talked about the ability of technology to manipulate our basest instincts through “the race to the bottom of the brain stem.” The problem was “the attention economy.”

And most significantly, he popularized a three-word phrase—time well spent—that became a rallying cry for people arguing that we needed to look up from our damn phones from time to time. In February 2018, Harris founded an organization called the Center for Humane Technology. But then, oddly, he seemed to disappear.

What happened? It turns out he snuck off into seclusion to write on his walls. Today he’s rejoining the public conversation, and he’s doing it in part because he believes the old phrases—the words themselves—were tepid and insufficient. Talking about the attention economy or time well spent didn’t capture the true ability of modern technology to dismantle free will and create social anomie. The words didn’t say anything about the risks that increase as AI improves and as deepfakes proliferate.

“We can’t change a system unless we have a shared understanding and shared language,” Harris says. Robert Gumpert/Redux

Harris believes that language shapes reality, and in his estimation the language describing the real impact of technology wasn’t sufficient to illustrate the ever-darkening storms. So, months ago, he draped his office with white sheets of paper and began attacking them with dark markers.

He jotted phrases, made doodles, put things in all caps. He was looking for the right combination of words, a conceptual framework that could help reverse the trends tearing society apart. “There’s this sort of cacophony of grievances and scandals in the tech industry,” he says. “But there’s no coherent agenda about what specifically is wrong that we’re agreeing on, and what specifically we need to do, and what specifically we want.”

His brainstorming was almost manic: part Don Draper, part Carrie Mathison, part John Nash as portrayed by Russell Crowe. He and his colleague Aza Raskin went down to the Esalen Institute in Big Sur, California, and covered the walls of their room with paper. They went back to San Francisco and did it again, scrawling phrases like “humans swiss cheese” and lists of problems caused by technology: Addiction. Fake news. Rising populism. There was a sketch of a deer in headlights.

Recently, Harris gave WIRED a tour of the sketches via Zoom, pacing around the room and reading from the walls: “What if you had headgear for tech? Aligning your paleolithic instincts to your orthodontics for humanity; humanity headgear for humanity’s technological adolescence. Paleolithic headgear. Chewing on ourselves so we can chew on our biggest problems.”

As he struggled with the words, he had a few eureka moments. One came when he realized that the danger for humans isn’t when technology surpasses our strengths—when machines powered by AI can make creative decisions and write symphonies better than Beethoven. The danger point is when computers can overpower our weaknesses—when algorithms can sense our emotional vulnerabilities and exploit them for profit.