Photograph by Michael Mcgregor Illustration by Jonathon Rosen

Joscha Bach’s doctoral dissertation didn’t fit the usual academic pigeonholes. “I didn’t dare hand it to anybody until it was done,” he recalls. “It’s not psychology, it’s not AI. It’s not what people mostly do in cognitive science, like putting people into a scanner.” Building on the work of German psychologist Dietrich Dörner, Bach was developing a cognitive architecture—a model of the mind with enough detail to be run on a computer. It features one of the most fully articulated models of emotion built into an artificial system and gives emotions an essential role in cognition.

At that time—the early 2000s—artificial-intelligence research was crawling out from a low point in its boom-bust cycle. Given the original hopes of the field in the ’50s, it was—and arguably still is—a failure. AI wasn’t supposed to be just a great Go player or an online algorithm to serve up clickbait. It was supposed to mimic the human brain in all its generality. It was supposed to be enough like us that, through it, we would know ourselves better.

“I want to understand who we are,” says Bach, now a researcher at Harvard. “I want to understand the nature of our minds and how they relate to the universe. And I think the best way to do this—because there are so many hidden variables in this system—is to come up with a theory and test it by implementing it.” That lofty goal is now relegated to one corner of AI, under the rubric of AGI, or artificial general intelligence.

“Joscha is brilliant and I agree with most of his ideas,” says David Hanson, designer of Sophia, one of the most lifelike robots yet created. At the moment, Sophia converses in scripted dialog—it is little more than an animatronic chatbot—but Hanson and his company are incorporating Bach’s model of emotions to give Sophia something closer to a real mind. Ben Goertzel, the company’s chief scientist, says, “Joscha is one of the deeper thinkers in the AGI field.” Marek Rosa, a video-game designer who founded the firm GoodAI to work toward AGI, enjoys Bach’s wide-ranging mind: “He can connect things that are far apart from each other.”

His ability to make unlikely connections is on full display when I meet him for coffee one day. A boyish man who looks a bit like Matthew Broderick, he dispenses with small talk and leaps straight into hypercomputation, temporal loops, Schrödinger’s cat, Hansel and Gretel, and why Libertarianism might appeal to some people. He references Anna Karenina, Hermann Hesse, Stanisław Lem, and Westworld. We ricochet from topic to topic in a way that makes perfect sense at the time, but sounds like one long continuity error when I listen to the recording later.

I can barely keep up with his mood changes, either. First he scorns the pursuit of happiness. “I and many other intellectuals think that happiness is not important,” he says. “You realize it’s not part of your value system. Happiness is for children.” A while later, he craves happiness: “I’m not happy. That worries me. And at some points I get so unhappy that I’m not functional.… The thing that makes you happy is your ability to enjoy squirrels. It’s not your ability to find meaning in anything and everything.”

“Do you like squirrels?” I ask.

“Mostly do.”

I have met few academics who are so open about their insecurities or relate them so directly to their research interests. “When we try to understand ourselves, it’s usually because something is wrong,” he says. “There is something fundamentally wrong with our relationship to the universe. Many of us feel that on some level. Right?”

Most AI researchers reckon that it will be decades before androids walk among us, but Bach thinks some AI systems are already beginning to incorporate AGI principles. Most problems in the world aren’t as well-defined as a board game, and if machine-learning systems are to solve them, they will increasingly need human-like agility. When their inbuilt programming falls short, they will have to learn or even create algorithms for themselves. “To do that, we need to be more general than the current classes of learning systems,” Bach says.

When he lists for me the features that will make AI fully general—autonomy, self-awareness—it is striking that they are traits associated with human consciousness. There’s a reason we are conscious, Bach argues, and computers would do well to have the same capabilities. “It’s going to be useful, at some point, to have systems that are so general that they will automatically evolve consciousness,” he says.


In line with Tufts University philosopher Daniel Dennett, Bach argues that when we think and act, we do so unconsciously, and conscious experience comes after the fact as a way to make sense of what we’ve already done. “I’m interacting with you reactively,” he tells me. “Noticing how I speak, how I make decisions about speaking, and seeing you reacting to it is only generated with hindsight.” And how could it be otherwise? If we had to wait for the cognitive gears to grind, we’d forever be a beat behind.

This is all hidden from us. We think we experience the world in real time, but that itself is a retrospective judgment. You remember being conscious a moment ago, but memory plays tricks. “Have you ever had a dream that subjectively lasted for hours, but took place in the time between two snoozes of the alarm clock?” Bach asks. “It probably means that you didn’t experience the whole dream, but you generated the memory all at once.… We cannot distinguish an event that we experienced from one that we didn’t experience but only remember.”

Bach cites science fiction to make his point: the remake of the movie RoboCop, in which a policeman gets a powerful new cyborg body that is under the control of AI, leaving him only the illusion of volition. “He’s a passenger and only in it for the ride,” Bach says. “He has the impression he makes all decisions, all the actions.” But that is an implanted memory. And so it is for humans too, Bach argues.

Likewise, our very selves exist only retrospectively. The word “self” implies a coherence lacking in the pandemonium of the human psyche. “The I that has experiences is fictional, a model that the brain generates,” he says. The concept of self helps you make sense of what you have done but is not to be taken literally.

Knowing so little about ourselves, what hope have we for really knowing anyone else? “When we see somebody for the first time, we have a stereotype,” Bach says. “The stereotype is formed by similar people that we have met in the past, and from which we generalize. If it’s fine-tuned, it’s often largely accurate. When we get to know that person better, we don’t get to know them as themselves, but as a finer-grained stereotype. And it never stops.… We are all space aliens. We are all strangers pretending to be humans, even to ourselves.”

Bach grew up in the woods. He was born in 1973 in Weimar, then in East Germany. His parents, architecture students, found little to like in the Brutalist style of the Eastern Bloc. So they bought an old mill in the countryside to the southeast and fashioned an arcadia of sculpture gardens and concert nooks. “In a way, they were East German hippies,” Bach says.

The household didn’t operate by Earth logic. “My father would sometimes get up at the breakfast table staring at a wall, deciding that it would be a door instead,” Bach recalls. “And he would push away the breakfast table and start with the first hammer strokes against the wall, while still wearing his pajamas. My mother would get everything off the breakfast table. By the afternoon, there would be a door.”

As much as his father bridled at the political and cultural climate of East Germany, he knew he benefited from it. Under socialism, they never wanted for food or health care. “If you want to be an artist, it’s a good thing to have political pressure, political turmoil, because this just gives relevance and content to what you do,” Bach says. “But if you have economic pressure and your fridge is empty, that can be devastating.” His mother came from a family of true-believer Communist politicians. “She was able to believe in the party line in East Germany, and at the same time realize, when she was at home, that this party line wasn’t working at all,” Bach says. “I was amazed how she was able to engage in that doublethink.”

Bach inherited a conflicted mix of disdain for society and envy of it. “The rules of society around you are arbitrary, and they are mostly wrong,” he says. He felt his classmates and teachers went through the motions at school. Yet their masks came off at night, with friends, and part of him admired that. He wanted to fit in, but felt he couldn’t. “I was often being told by other kids that I thought that I was better than anybody else,” Bach says. “I never thought that. I honestly thought that I failed in my social interactions.”


His outlet was computer coding. On a Commodore 64, he designed variants of Parcheesi, Missile Command, and other games. “I didn’t have anybody to play games with,” he says. “My parents were busy and didn’t play with us. My sister was not interested in anything I was doing, and vice versa. So I thought, ‘I’ll write this game,’ so I could play against it.”

He finished high school just as the Wall fell and the world was suddenly opened to East Germans. He built a recumbent bicycle and rode it over 3,000 miles in the U.S. and Canada. When he returned to Berlin for college, though, he felt as alienated as before. The professors approached their work like a 9–5 job rather than a calling.

But gradually he found his people. He met his future wife, an artist, at a birthday party and recognized her as a kindred spirit. “She had no need to be like others,” he recalls. “She also had no need to be different; she just didn’t have a need for synchronizing. She was this small, introverted girl sitting in the corner reading her books.” The two share both a romantic passion for aesthetic experience and an Enlightenment code of reason. “Whenever we had a conflict, we could resolve it by talking about it through logical argument,” he says.

During a year abroad in New Zealand, he took a class on text compression. “I was not interested in text compression at all and couldn’t figure out for the life of me how it would be interesting,” he recalls. “But it turned out to be amazing.… Mostly what the mind does is data compression. It compresses all these patterns from our sensory interface to the universe. The mind tries to find the most efficient representation, with the fewest moving parts.” Plus, the professor was enthusiastic, something Bach hadn’t seen in a teacher before. “It was a guy who didn’t just do his job, he burned for it,” he recalls.
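The compression-as-cognition idea can be made concrete with a toy demonstration (the example is mine, not Bach’s or the course’s): a stream of data full of regular patterns can be stored in a fraction of its original size, while a patternless stream cannot, which is roughly what a mind hunting for “the fewest moving parts” is exploiting.

```python
import os
import zlib

# A highly patterned "sensory stream" versus a patternless one of equal length.
patterned = b"ABAB" * 250        # 1,000 bytes of pure repetition
random_ish = os.urandom(1000)    # 1,000 bytes with no structure to exploit

# zlib finds the repetition and encodes it compactly;
# the random bytes are essentially incompressible.
print(len(zlib.compress(patterned)))   # tiny: the pattern is the "few moving parts"
print(len(zlib.compress(random_ish)))  # close to 1,000: no pattern to find
```

The gap between the two compressed sizes is a crude measure of how much structure there was to discover in the first place.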

After his return to Berlin in 1999, Bach read a magazine interview with Dörner and became enamored with his “Psi” theory. It was a theory of everything for psychology—a comprehensive model of human cognition. Perception, deliberation, memory, language: It was all there. The name has no particular significance, nor was it meant to connote psychic powers. “It’s just the Greek letter itself,” Bach says. “Psychologists seem to like Psi.”

Dörner programmed a computer to run his model, first as a question-and-answer system like Alexa or Siri, then as digital creatures that populated virtual islands, as in The Sims. Bach was especially struck that Dörner infused his creatures, or agents, with emotions. In the ’90s the psychologist Antonio Damasio influentially argued that emotion was essential to rational thought in humans. But with a few exceptions, such as Aaron Sloman at the University of Birmingham, AI researchers thought a feeling machine was a contradiction in terms. “I could imagine back then how it would be possible that a computer could reason … but I didn’t understand how it would be possible that it would feel anything about this,” Bach says. “And Dörner seemed to have the answers.”

As appealing as the theory was, it struck Bach as a little sketchy. He recalls, “I started reading his books and I thought, ‘The software is terrible. This is not going to work.’” So he developed his own version, called MicroPsi, fusing ideas from the once-dominant logic-based school of AI and the now-ascendant neural networks model, as well as lines of thinking such as embodied cognition—that the mind is shaped by the needs of the body.

The MicroPsi virtual world looks like the video game Minecraft. Using a graphic editor, you set up the creatures—typically, several hundred of them—that will forage, fraternize, and fight on its terrain. You equip each with neural networks to store the knowledge it acquires, as well as a system of motivations. A creature seeks to eat and avoid injury, but it also juggles longer-term needs such as social acceptance, competence (so that it wants to learn), and—one of Bach’s additions to Dörner’s theory—beauty.

“I thought it necessary to introduce a drive for aesthetics,” Bach says. In practical terms, that means the creatures look for patterns within the knowledge they accumulate, to represent it more compactly, like a poet consumed with finding le mot juste. This urge wouldn’t arise unless Bach made special provisions for it, since it distracts from the immediate needs of survival. “The artist thinks it’s of primary importance to capture conscious states,” Bach says. “And from a machine-learning perspective, that’s bullsh*t.” Yet the artistic impulse is so pervasive in humans that he thinks it must either fulfill some function or piggyback on one. “Maybe this is built into our brains to make us learn language,” he speculates.

A MicroPsi creature can only do one thing at a time; it chooses an action that will meet a pressing need with a reasonable chance of success. It often puts aside physical needs to focus on more abstract goals.
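That selection rule can be sketched in a few lines. The names, numbers, and the urgency-times-success scoring below are my illustration of the idea described in the text, not MicroPsi’s actual code:

```python
# Toy motivational action selection, loosely in the spirit of MicroPsi.
# All need names, values, and the scoring rule are illustrative assumptions.

needs = {            # current urgency of each need, 0 (sated) to 1 (desperate)
    "food": 0.3,
    "safety": 0.1,
    "affiliation": 0.8,   # abstract social needs can outweigh physical ones
    "competence": 0.6,
}

actions = [          # (action, need it serves, estimated chance of success)
    ("forage",    "food",        0.9),
    ("hide",      "safety",      0.95),
    ("socialize", "affiliation", 0.7),
    ("explore",   "competence",  0.5),
]

def choose_action(needs, actions):
    """Pick the single action with the best urgency-weighted chance of success."""
    return max(actions, key=lambda a: needs[a[1]] * a[2])

print(choose_action(needs, actions)[0])  # "socialize": 0.8 * 0.7 beats every rival
```

With these made-up numbers, the creature skips foraging to socialize—the same pattern as in the text, where a pressing abstract goal displaces a mild physical one.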

Emotions arise because the creature fine-tunes its style of thinking to the situation at hand. It can stick to one goal or revisit it frequently; it can act impulsively or wait and see; and it can attend to details or brush over them. “This is a very beneficial adaptation when you have environments in which you need to be quiet and calm and think deeply, and other environments where you need to react very, very quickly,” Bach says.


In this theory, emotions are simplified categories that we use to describe a complicated mix of cognitive adaptations. If a creature is satisfying its sundry needs and is in an aroused state of mind, we call that joy. If it is more subdued, we deem it bliss. “This emergent view of emotions is quite unique,” says Yulia Kotseruba, a cognitive scientist at York University in Toronto.
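The emergent view can be caricatured in code. In this sketch, an emotion label is nothing the system computes for its own sake; it is a coarse reading of the system’s configuration after the fact. The thresholds and label set are my simplification, keyed only to the joy/bliss distinction in the text:

```python
# Toy version of "emotions as emergent categories": a folk-psychological label
# is just a coarse summary of the system's current configuration.
# Thresholds and labels are my simplification, not the theory's.

def emotion_label(needs_satisfied: float, arousal: float) -> str:
    """Map a (need satisfaction, arousal) configuration, both in [0, 1],
    to the everyday emotion word an observer would use for it."""
    if needs_satisfied > 0.7:
        return "joy" if arousal > 0.5 else "bliss"  # the distinction from the text
    if arousal > 0.5:
        return "distress"   # needs unmet, highly activated
    return "dejection"      # needs unmet, subdued

print(emotion_label(0.9, 0.8))  # joy: needs met, aroused state of mind
print(emotion_label(0.9, 0.2))  # bliss: needs met, subdued
```

Nothing in the underlying state is labeled “joy”; the word only appears when an observer (or the system itself) summarizes the configuration.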

Bach found that his creatures developed very human-like foibles. In one experiment in 2005, he sprinkled the virtual world with tasty but toxic mushrooms, expecting the creatures to learn over time to avoid them. To the contrary, they devoured them. “They discounted future experiences in the same way as people did; they didn’t care,” Bach says. “We thought they could learn everything. And in a dangerous environment you cannot learn everything.” To save his creatures from a premature if contented demise, he had to program into them an innate aversion to mushrooms.

And then Bach himself decided that maybe his own devotion to AGI was tantamount to eating mushrooms when he ought to be more practical.

“I realized that, with my interests in strong AI and teaching computers how to think, I probably would not get tenure,” Bach recalls. “So I thought it much better to feed my family at some point, instead of trying to stay in academia until my late 30s only to realize that I’m just a highly qualified taxi driver.”

So he founded a startup with a suitably startup-y name: txtr. It made e-readers before Kindle and, at its high-water mark, had 100 employees. When it tanked, he founded another company, Hotheaven, to create AI-based to-do lists. You’d add a big task and the app would fill in the subtasks for you. “The more you specified your plans, the more completely this plan would adapt to your needs,” he says. That didn’t work out, either.

By this point, AI had emerged from its long winter. He returned to academia at the Berlin School of Mind and Brain in 2011. Three years later, he moved to the MIT Media Lab and two years after that, to Harvard’s Program for Evolutionary Dynamics. Cambridge provided not just an intellectual but also a cultural home. He fell in with a local group of Russian artists who introduced him to trance music. “It became a gateway drug to dance for me, a form of movement meditation,” he says. “I also came to appreciate the shamanic qualities of a good DJ, who may sometimes single you out among the dancers and begin interacting with you, changing the configuration of your mind until you move like a puppet on their strings.”

Still, he has never shaken off the conflicted feelings of his youth. His preferred allusion is to Tolkien. “People like me and my wife are basically Elves,” he tells me during happy hour at a research conference, his melancholic self-reflection an odd contrast to the bubbling conversation around us. “We are dreamers. We live to sing and dance and create palaces in our minds.” And like those cultured but aloof forest-dwellers, he is driven to discover and create. And the rest of the human race? They’re Orcs. “They just overrun everything,” Bach complains.

That sounds disdainful—except that he sometimes wishes he were an Orc, too. Tolkien himself described the Elves as “overburdened with sadness and nostalgic regret.” All that searching for truth and meaning weighs upon Bach. “Look around. Most people don’t have that,” he says. “It’s not because they haven’t woken up with this deeper importance of reality. It’s because we are defective. It’s the romantic souls that are broken.”


Bach is now working to make his MicroPsi creatures ever more lifelike. Until recently, he had designed them without worrying about the amount of computing power they required. In the real world, though, an organism has got to know its limitations. “A brain operates with fixed, constant resources, and has to allocate them,” Bach says. “So it attempts to do the most valuable thing with the tightly bounded resources it has.” For guidance, he has turned to the academic discipline that specializes in scarcity: economics.

In the early 1950s, the economist Friedrich Hayek argued that the brain is like a market economy—it’s a distributed information processing system. Bach similarly thinks that AI systems can apply economic cost-benefit analyses to divvy up information processing among their components.

In fact, Bach suspects that the brain is better at allocating resources than are real economies. It manages to keep all its neurons gainfully employed. “You cannot say: ‘You’re not performing well. I fire you. You can find another body,’” Bach says. “You have to use all your brain cells in the best possible way. It’s an interesting problem that society hasn’t solved. Yet the brain has solved it.” The brain avoids market failures in which, for example, those who set pay rates—bosses and bankers—pay themselves the most. “Neurons are not allowed to hoard reward,” he says. He thinks that could inform economic policy.

The MicroPsi creatures still lack anything resembling consciousness. Following on his ideas about human consciousness, Bach thinks the key to making machines conscious is to make them self-referential. “Consciousness naturally emerges when you have a system that makes a model of its own attention.… The system will remember not only having experienced certain things that it was aware of, but also that it experienced them.”

When a machine does wake up, he thinks it’ll let us know through what he calls the true Turing Test. In the original Turing Test, we ask a machine to hold up its end of a dialogue; in Bach’s variant, the machine asks us to do that. In developing a mind, it gets curious about ours. He envisions: “The system will say, ‘Oh my God. I have consciousness. How can this possibly be explained?’”

If we do manage to build conscious machines, Bach thinks that they might well regard themselves as human, like the “hosts” of Westworld. “They’re robots that don’t know they’re robots, because they have human memories and desires.” The Black Mirror trope of waking up to find yourself inside a computer is another example. It may be impractical to scan and upload your full mental state, but you don’t need to. All the computer needs is a simplified model of you, with just enough shared memory to provide a sense of continuity.

“Actually, it’s very easy,” he says. “It’s sufficient to build a machine that thinks it’s you. Your identity is only given by your memories telling you that you are the same person as yesterday,” he explains. “If I can give an arbitrary system the memory that it was you yesterday, it will think that it is you.”

Sentient machines are the baddies in science fiction, but Bach says what comes after them should really worry us. Consciousness, he thinks, is a passing phase in the history of the universe. Hyperadvanced AIs will no longer have use for it; they will have learned all there is to learn. “Consciousness is a model of conflicts that you need to resolve with your attention,” Bach says. “And once you can do stuff automatically optimally, you don’t have consciousness about them anymore.”

The worst part won’t be our own uncertain fate in the world of the machines. It’s that a universe without consciousness will become a relentlessly utilitarian place. “I think it will be very boring,” Bach says. The machines will make the trains run on time, but see no point to self-expression: no art, no science, and probably no squirrels. Says Bach, “This idea that the universe creates this mirror that can reflect for an instant of its existence—this is really an accident.”