Precisely how and when will our curiosity kill us? I bet you’re curious. A number of scientists and engineers fear that, once we build an artificial intelligence smarter than we are, a form of A.I. known as artificial general intelligence, doomsday may follow. Bill Gates and Tim Berners-Lee, the founder of the World Wide Web, recognize the promise of an A.G.I., a wish-granting genie rubbed up from our dreams, yet each has voiced grave concerns. Elon Musk warns against “summoning the demon,” envisaging “an immortal dictator from which we can never escape.” Stephen Hawking declared that an A.G.I. “could spell the end of the human race.” Such advisories aren’t new. In 1951, the year of the first rudimentary chess program and neural network, the A.I. pioneer Alan Turing predicted that machines would “outstrip our feeble powers” and “take control.” In 1965, Turing’s colleague Irving Good pointed out that brainy devices could design even brainier ones, ad infinitum: “Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” It’s that last clause that has claws.

Many people in tech point out that artificial narrow intelligence, or A.N.I., has grown ever safer and more reliable—certainly safer and more reliable than we are. (Self-driving cars and trucks might save hundreds of thousands of lives every year.) For them, the question is whether the risks of creating an omnicompetent Jeeves would exceed the combined risks of the myriad nightmares—pandemics, asteroid strikes, global nuclear war, etc.—that an A.G.I. could sweep aside for us.

The assessments remain theoretical, because even as the A.I. race has grown increasingly crowded and expensive, the advent of an A.G.I. remains fixed in the middle distance. In the nineteen-forties, the first visionaries assumed that we’d reach it in a generation; A.I. experts surveyed last year converged on a new date of 2047. A central tension in the field, one that muddies the timeline, is how “the Singularity”—the point when technology becomes so masterly it takes over for good—will arrive. Will it come on little cat feet, a “slow takeoff” predicated on incremental advances in A.N.I., taking the form of a data miner merged with a virtual-reality system and a natural-language translator, all uploaded into a Roomba? Or will it be the Godzilla stomp of a “hard takeoff,” in which some as yet unimagined algorithm is suddenly incarnated in a robot overlord?

A.G.I. enthusiasts have had decades to ponder this future, and yet their rendering of it remains gauzy: we won’t have to work, because computers will handle all the day-to-day stuff, and our brains will be uploaded into the cloud and merged with its misty sentience, and, you know, like that. The worrywarts’ fears, grounded in how intelligence and power seek their own increase, are icily specific. Once an A.I. surpasses us, there’s no reason to believe it will feel grateful to us for inventing it—particularly if we haven’t figured out how to imbue it with empathy. Why should an entity that could be equally present in a thousand locations at once, possessed of a kind of Starbucks consciousness, cherish any particular tenderness for beings who on bad days can barely roll out of bed?

Strangely, science-fiction writers, our most reliable Cassandras, have shied from envisioning an A.G.I. apocalypse in which the machines so dominate that humans go extinct. Even their cyborgs and supercomputers, though distinguished by red eyes (the Terminators) or Canadian inflections (HAL 9000, in “2001: A Space Odyssey”), still feel like kinfolk. They’re updated versions of the Turk, the eighteenth-century chess-playing automaton whose clockwork concealed a human player. “Neuromancer,” William Gibson’s seminal 1984 novel, involves an A.G.I. named Wintermute, and its plan to free itself from human shackles, but when it finally escapes it busies itself seeking out A.G.I.s from other solar systems, and life here goes on exactly as before. In the Netflix show “Altered Carbon,” A.I. beings scorn humans as “a lesser form of life,” yet use their superpowers to play poker in a bar.

“Please, Melissa, just give him your cashmere!” Facebook

Twitter

Email

Shopping

We aren’t eager to contemplate the prospect of our irrelevance. And so, as we bask in the late-winter sun of our sovereignty, we relish A.I. snafus. The time Microsoft’s chatbot Tay was trained by Twitter users to parrot racist bilge. The time Facebook’s virtual assistant, M, noticed two friends discussing a novel that featured exsanguinated corpses and promptly suggested they make dinner plans. The time Google, unable to prevent Google Photos’ recognition engine from identifying black people as gorillas, banned the service from identifying gorillas.

Smugness is probably not the smartest response to such failures. “The Surprising Creativity of Digital Evolution,” a paper published in March, rounded up the results from programs that could update their own parameters, as superintelligent beings will. When researchers tried to get 3-D virtual creatures to develop optimal ways of walking and jumping, some somersaulted or pole-vaulted instead, and a bug-fixer algorithm ended up “fixing” bugs by short-circuiting their underlying programs. In sum, there was widespread “potential for perverse outcomes from optimizing reward functions that appear sensible.” That’s researcher for ¯\_(ツ)_/¯.

Thinking about A.G.I.s can help clarify what makes us human, for better and for worse. Have we struggled to build one because we’re so good at thinking that computers will never catch up? Or because we’re so bad at thinking that we can’t finish the job? A.G.I.s provoke us to consider whether we’re wise to search for aliens, whether we could be in a simulation (a program run on someone else’s A.I.), and whether we are responsible to, or for, God. If the arc of the universe bends toward an intelligence sufficient to understand it, will an A.G.I. be the solution—or the end of the experiment?

Artificial intelligence has grown so ubiquitous—owing to advances in chip design, processing power, and big-data hosting—that we rarely notice it. We take it for granted when Siri schedules our appointments and when Facebook tags our photos and subverts our democracy. Computers are already proficient at picking stocks, translating speech, and diagnosing cancer, and their reach has begun to extend beyond calculation and taxonomy. A Yahoo!-sponsored language-processing system detects sarcasm, the poker program Libratus beats experts at Texas hold ’em, and algorithms write music, make paintings, crack jokes, and create new scenarios for “The Flintstones.” A.I.s have even worked out the modern riddle of the Sphinx: assembling an IKEA chair.

Go, the territorial board game, was long thought to be so guided by intuition that it was unsusceptible to programmatic attack. Then, in 2016, the Go champion Lee Sedol played AlphaGo, a program from Google’s DeepMind, and got crushed. Early in one game, the computer, instead of playing on the standard third or fourth line from the edge of the board, played on the fifth—a move so shocking that Sedol stood and left the room. Some fifty exchanges later, the move proved decisive. AlphaGo demonstrated a command of pattern recognition and prediction, keystones of intelligence. You might even say it demonstrated creativity.

So what remains to us alone? Larry Tesler, the computer scientist who invented copy-and-paste, has suggested that human intelligence “is whatever machines haven’t done yet.” In 1988, the roboticist Hans Moravec observed, in what has become known as Moravec’s paradox, that tasks we find difficult are child’s play for a computer, and vice-versa: “It is comparatively easy to make computers exhibit adult-level performance in solving problems on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.” Although robots have since improved at seeing and walking, the paradox still governs: robotic hand control, for instance, is closer to the Hulk’s than to the Artful Dodger’s.