Nick Bostrom discusses the existential threats posed by a superintelligent computer and why we will only get one chance to control such a powerful machine

This week on Science Weekly Ian Sample meets Professor Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford. Nick is a philosopher who thinks deeply about our emerging technological future. His most recent book, Superintelligence: Paths, Dangers, Strategies, is a detailed look at the existential problems connected with the creation of a superintelligent machine.

Most experts in the field of artificial intelligence believe we have a good chance of developing such a machine by the middle of the 21st century. But if and when we succeed in building this extraordinary AI, says Prof Bostrom, it may be too late to ask whether we can control it.

Subscribe for free via iTunes to ensure every episode gets delivered. (Here is the non-iTunes URL feed).

Follow the podcast on our Science Weekly Twitter feed and receive updates on all breaking science news stories from Guardian Science.

Email scienceweeklypodcast@gmail.com.

Guardian Science is now on Facebook. You can also join our Science Weekly Facebook group.

We're always here when you need us. Listen back through our archive.