Oxford philosopher Nick Bostrom thinks we’re neglecting the biggest challenge that we as a species are likely to face this century: what to do when the machines take over. We already have artificially intelligent machines that are “super-human” in specific domains like playing chess and processing certain kinds of data, but Bostrom believes we’re not far off from creating machines whose general intelligence and learning ability exceed our own. We chatted about why humanity is in its teenage phase, whether there’s anything we can do to preempt the machines, and just how realistic Spike Jonze’s Her is. His new book, Superintelligence: Paths, Dangers, Strategies, is out this month.

Alice Robb: What kind of time frame are we looking at?

Nick Bostrom: We ran a survey among experts in the field of artificial intelligence. We asked them by what year they thought there would be a 50 percent chance that we would have human-level machine intelligence. The median answer was 2040 or 2050. They thought there was a 90 percent probability we would have it by 2075, which I think is a bit over-confident. There’s a big chance that we’ll have it in this century, but a lot of uncertainty as to the exact date. But what I’m interested in trying to figure out is what happens when we reach that point, whether it’s 10 years away or 60 years away. What are the dynamics that kick in once you get human-level machine intelligence?

AR: What are the greatest risks? Are you worried that one super-intelligent agent could take over?

NB: That takes up a big chunk of the worry space: that there is one artificial agent that becomes very powerful, and is therefore able to shape the future according to its goals. Those goals may not have anything to do with human goals. They may be arbitrary. If it’s, “Make as many paper clips as possible,” you get a future that consists of a lot of paper clips, but no humans.