As technology advances, historian Yuval Noah Harari writes, humanism — a belief in the primacy of people — is at stake. He envisions a future when “connecting to the system becomes the source of all meaning.” (Facundo Arrizabalaga/EPA)

Matthew Hutson is a science and technology writer and the author of “The 7 Laws of Magical Thinking.”

Many people fear that the path of artificial intelligence will eventually lead to a standoff between humans and machines, with humans as the underdogs. Confrontation looms in the forecasts of futurists and in the narratives of science fiction movies such as “The Matrix,” “The Terminator” and “Westworld.” But there’s another way our demise could go down. We could begin wondering what makes people so special, anyway, and willingly give up the title of supreme species — or even the preservation of humanity altogether. This is the path explored by historian Yuval Noah Harari in his new book, “Homo Deus.” There’s no need for a Terminator to come after us when, instead of fighting the network in the sky, we assimilate into it.

At stake is the religion of humanism. Whereas theists worship gods, humanists worship humans. Harari, whose previous book, “Sapiens: A Brief History of Humankind,” foreshadows this one, defines religion as any system of thought that sees certain values as having legitimacy independent of people. “Thou shalt not kill” derives its force from God, not from the mortal Moses. Similarly, humanists believe in “human rights” as things earned automatically from the universe, whatever anyone else says. The right not to be tortured or enslaved exists outside human convention. (Philosophers call this bit of magical thinking moral realism.)


"Homo Deus: A Brief History of Tomorrow," by Yuval Noah Harari (Harper / )

We may take for granted the right not to be tortured or enslaved — or various other humanist doctrines, such as the idea that we’re all inherently valuable individuals with the free will to express our authentic selves — but we have not always done so. People were seen as property even well after that bit about “life, liberty and the pursuit of happiness” was inked to parchment. As Harari argues, we’ve lived with alternatives to humanism, and we can again. And ironically, he writes, “the rise of humanism also contains the seeds of its downfall.”

That’s kind of a fudge, one of a few in the book. It’s not the humanist revolution per se that planted those poison seeds. It’s more the (somewhat symbiotic) scientific revolution. You don’t need universal rights to study electricity and invent computers. Or to apply our inventions toward the evergreen pursuits of health, happiness and control over nature (or as Harari calls them, “immortality, bliss and divinity”). Nevertheless, scientific and technological progress might eventually undermine the humanist ethos.

On the scientific front, research is pushing back on the idea of free will (as philosophers have for ages). The more we can explain human behavior with neuroscience and psychology, the less room there is for some magical human soul.

Meanwhile, artificial intelligence is rendering us useless, taking the jobs of taxi drivers, factory workers, stock traders, lawyers, teachers, doctors and “Jeopardy!” contestants. And, Harari argues, liberal humanism rose on the back of human usefulness. It advanced not on moral grounds but on economic and military grounds. Countries such as France offered dignity to all in exchange for service to the nation. “Is it a coincidence,” Harari asks, “that universal rights were proclaimed at the precise historical juncture when universal conscription was decreed?” But with robots making and killing things better than we can, who needs people? Intelligence will matter more than consciousness. “What’s so sacred about useless bums who pass their days devouring artificial experiences” in virtual reality?


Even if the human species does continue to serve the system meaningfully, we might not matter as individuals. Harari suggests that algorithms might get to know us better than we know ourselves. As they collect data on our Web searches, exercise routines and much more, they’ll be able to tell us whom we should date and how we should vote. We may happily take their advice, literally ceding democracy to databases. Once our authentic, enigmatic, indivisible selves are exposed as mere predictable computations — not just by philosophers and scientists but by our every interaction with the world — the fiction of free will might finally unravel. (Personally, I’m not sure our brains will allow this.) We’ll enlist as mere specialized processors in the global cyborganic network.

Harari presents three possible futures. In one, humans are expendable. In a second, the elite upgrade themselves, becoming essentially another species that sees everyone else as expendable. In a third, we join the hive mind, worshiping data over individuals (or God). “Connecting to the system becomes the source of all meaning,” he writes. In any case, he says convincingly, “the most interesting place in the world from a religious perspective is not the Islamic State or the Bible Belt, but Silicon Valley.”

I enjoyed reading about these topics not from another futurist but from a historian, contextualizing our current ways of thinking amid humanity’s long march — especially a historian with Harari’s ability to capsulize big ideas memorably and mingle them with a light, dry humor.

In “Homo Deus,” Harari offers not just history lessons but a meta-history lesson. In school, history was my least favorite subject. I preferred science, which offered abstract laws useful for predicting new outcomes. History seemed a melange of happenstance and contingency retroactively cobbled into stories. If history’s arcs were more Newtonian, we’d be better at predicting elections.

Harari points to an opposing goal of his field. He writes that “studying history aims to loosen the grip of the past,” showing that “our present situation is neither natural nor eternal.” In other words, it emphasizes happenstance. That’s a useful tactic for the oppressed fighting the status quo. It’s also a useful exercise for those who see the technological singularity as a given. We have options.

It’s possible we’ll choose to avoid our loss of values. On the other hand, it’s possible we’ll choose to accelerate it. Harari, a vegan who disputes humanity’s reserved seat atop the great chain of being, briefly ponders this option: “Maybe the collapse of humanism will also be beneficial.” Indeed, don’t we owe a chance to animals and androids, too?