Earlier this week, I attended a dinner party hosted by the Edge Foundation, an association that fosters conversation between science and tech intellectuals, at which I found myself seated among a group of physicists, astronomers, historians, philosophers, technologists, and futurists of various varieties and pedigrees. Given the august and diverse group, I decided to ask a provocative question that seemed likely to bridge their manifold disciplines, and possibly elicit some tragicomic debate, too. In short, would artificial intelligence lead to the end of the human race? This was not an idle query. Over the last decade, we’ve given over our lives to computer networks and smart devices. Every aspect of human civilization, from our phones to farms, electric grids and stock markets, cars and missile-guidance systems, has become intertwined with lines of code. The great infrastructure of our existence has never been so vulnerable to manipulation by outside forces—or, one day soon, perhaps, to manipulating itself.

I was stunned by two aspects of their response. First, no one at the table appeared taken aback, surprised, or flummoxed by the question. Second, they all answered immediately, and in unison, “yes,” as if singing the same low note in a choir. This synoptic answer was followed by a moment of self-reflective silence around the table, as if the inevitable had already happened and we were now paying cerebral tribute to those who had perished during the end of civilization. I wasn’t sure how to respond, myself, so I asked someone to explain further. Are we really going to be killed by technology? I wondered. And I then launched into a number of frenzied follow-ups: How? When? A historian and philosopher sitting to my left, while scooping fried Brussels sprouts onto his plate, offered a pithy affirmation as the discussion turned to the possibility that a future A.I. could go rogue and wipe us out. “Yes,” he said, passing the sprouts my way.

As of this moment, it’s still unclear precisely how we might destroy ourselves. Nevertheless, it is something that the country’s smartest minds are correctly consumed with. And many, it appears, are pessimistic about the possible outcomes. We are not far from the day when we could build armies of sophisticated robots—or drones, or nanorobots—that could be unleashed on the world. Nor is it a leap to imagine how these robots could intentionally be designed to destroy, or how seemingly “good” A.I. (programs designed to help humans) could be turned “bad” by rogue hackers who might, say, instruct swarms of UPS-delivery drones or robotic dog-walkers or fleets of driverless taxis to mercilessly punish us. (These are among the nightmares that haunt Elon Musk, Silicon Valley’s leading A.I. doomsayer, who in 2015 co-founded the nonprofit OpenAI to help research safeguards as the technology progresses. “With artificial intelligence, we are summoning the demon,” Musk told Vanity Fair last year. “You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out.”)

One common fear among technologists is what could happen if Vladimir Putin or Kim Jong Un used software to shut off our power grid, decimating our food supply, heat, and logistical networks. It’s hardly a hypothetical concern: In December, the White House said that North Korea had been behind a ransomware infection that temporarily brought the British health system to its knees. On Thursday, the Trump administration accused Russia of having infiltrated American nuclear power plants and water and electric systems, giving hackers the ability to shut them down at will. A congressional report in 2008 predicted that if the power were to go out, 90 percent of the population would die within a year, or at best (or worst, depending on how you see it) two years, unable to survive off the land without running water and electricity.