Left-right: Julius Weitzdörfer, Centre for the Study of Existential Risk; Beth Barnes, Future of Sentience Society; Stephen Cave, Leverhulme Centre for the Future of Intelligence; Anders Sandberg, author and futurist; Huw Price, University of Cambridge; and Jane Heal, Centre for the Study of Existential Risk, Cambridge. Photograph: Nick Wilson

One wintry evening in November 2016, an international group of 50 scholars gathered at a candlelit dinner in the 14th-century Old Library at Pembroke College, Cambridge, to discuss grievous threats facing the world's civilisations.


An eavesdropper in the shadows playing on the wood-panelled walls might have heard Shahar Avin, an Israeli software engineer and expert in the philosophy of science, discussing the coming dangers of artificial intelligence ("It won't be about The Terminator! More likely an algorithm selling online ads, which realises that it can sell more if its readers are other robots, not humans"). Or perhaps Julius Weitzdörfer, the German disaster specialist studying the legal fallout of the Fukushima catastrophe, analysing the implications of Donald Trump's presidency ("It will make people aware that they need to think about risks, but, in a world where scientific evidence isn't taken into account, all the threats we face will increase").



At the other end of the table was Neal Katyal, an American lawyer who served as acting solicitor-general under Barack Obama, and who represented Apple in the San Bernardino decryption case. He explained how "law lag", the inability of legislators to keep up with technological change, was weakening governments' power to protect us.

This was not a scene from a new X-Men movie, but an event organised by two Cambridge institutions: the Centre for the Study of Existential Risk (CSER, pronounced "caesar") and the Leverhulme Centre for the Future of Intelligence. For them, it was a fairly ordinary evening, in this case following a lecture by Katyal. The apocalyptic talk is standard: both bodies are among a small group of organisations in the UK and US which employ highly educated academics, scientists, lawyers and philosophers to study existential risk.


AI has been the subject of fantasy since the industrial revolution, but the 21st century's rapid growth in computing power has prompted anxiety in some of the world's most rational, informed and intelligent minds. In January 2015, Stephen Hawking, Elon Musk and Google's director of research Peter Norvig were among dozens of experts who signed an open letter calling for more research on AI's potential impact on humanity. The letter had initially been drafted by the Future of Life Institute, for circulation among AI researchers. Concern has only grown since: Martin Rees, the astronomer royal, Cambridge cosmologist and CSER co-founder, believes X-risk research is essential because, although Earth has existed for 45 million centuries, ours is the first in which a single species holds the future of the biosphere in its hands.

What is an existential risk? Existential risk, known by practitioners as X-risk, groups together hypothetical future events that could bring about global catastrophe, at worst the end of human civilisation or the extinction of humanity. The threats can be subdivided into anthropogenic, or man-made (nuclear war, climate change), and non-anthropogenic (asteroids, volcanoes, hostile extraterrestrials). One threat gets the most attention, and began to catalyse the new discipline about ten years ago: artificial intelligence. The Leverhulme Centre for the Future of Intelligence (CFI) was set up in 2016 to support work on the impact of AI, and brings together Nick Bostrom's Oxford-based team with researchers at Cambridge, Imperial College London and the Centre for Human-Compatible AI at the University of California, Berkeley. Cambridge also has an undergraduate organisation, the Future of Sentience Society, co-founded by computer-science undergrad Beth Barnes, who also works with CSER in the role of "student collaborator".

X-risk commands increasing interest within the technology industry. CSER was set up partly with the support of Skype co-founder Jaan Tallinn after he met Huw Price, Bertrand Russell professor of philosophy at Cambridge, at a conference and found they shared concerns about AI and other threats.

Tallinn became worried after reading Eliezer Yudkowsky's writing about AI, and sought to bring together minds from diverse areas of study to create a new academic discipline. "There is a fertile area of scientific study that can be done at the intersection of physics, computer science and philosophy," he says. "You can make a philosophical argument, but use a mathematical model that keeps you in check and stops you going off talking nonsense, which most philosophers do, because they say things that only bottom out in their intuitions, and intuitions are flawed. If you can make a philosophical argument that bottoms out in computer code or a mathematical model, that's a very solid foundation. At the same time, philosophy can make science consistent and give its findings a framework and direction."

"If you have a sense we might be the last generation after four billion years of evolution, it makes you want to do something about it" Nick Bostrom, founder of Oxford's Future of Humanity Institute


Bostrom makes a similar point when he says the FHI "formulates questions that one might need to answer if future technologies transform the human condition". One example is that we might easily agree that robots should share human values. But how do we agree what those values are? As Stephen Cave, CFI's executive director, notes: "For 2,500 years we've taken our time thinking about these questions and suddenly it's urgent. It's very exciting."


X-risk can stretch the bounds of the imagination. Anders Sandberg, a polymath probability theorist at the FHI, discusses the possibility that someone might summon a demon to end the world ("Can you really say the probability is zero? Some would say yes, but we ask, how can you be certain?"), and others are careful to consider the limits of human cognition (How can we know that millions of previous human worlds were not wiped out in the past by risks that may seem very remote to us?).



More importantly, points out CSER executive director Seán Ó Héigeartaigh, most of the work has practical applications. CSER management committee member Jane Heal, for example, studies "how we can distance the reflective, detached part of the self from the hubristic animal part of it, so that we can band together to make legislation that might reduce climate change."



Does thinking about all these questions keep such academics awake? Bostrom laughs: "People always ask that." Sandberg says his standard answer is, "I sleep very well at night because at least I'm doing something about it." Tallinn is more philosophical: "In truth, it makes me appreciate the world a lot more. If you have a sense that we might be the last generation after four billion years of evolution, it makes you want to do something about it, but it also makes you really thankful to be alive in the first place."

Read on: The 10 biggest threats facing civilisation and how Earth's Guardians are preparing for them