Let's make sure he WON'T be back! Cambridge to open 'Terminator centre' to study threat to humans from artificial intelligence

Centre will examine the possibility that there might be a ‘Pandora’s box’ moment with technology

The founders say technologies already have the 'potential to threaten our own existence'

The centre will warn about the threat that robots pose to humanity, as depicted in The Terminator, the 1984 American science fiction action film directed by James Cameron

A centre for 'terminator studies', where leading academics will study the threat that robots pose to humanity, is set to open at Cambridge University.



Its purpose will be to study the four greatest threats to the human species - artificial intelligence, climate change, nuclear war and rogue biotechnology.



The Centre for the Study of Existential Risk (CSER) will be co-launched by Lord Rees, the astronomer royal and one of the world's top cosmologists.



Rees's 2003 book Our Final Century warned that humanity's destructiveness meant the species could wipe itself out by 2100.



The idea that machines might one day take over humanity has featured in many science fiction books and films, including The Terminator, in which Arnold Schwarzenegger stars as a homicidal robot.



In 1965, Irving John ‘Jack’ Good wrote a paper for New Scientist called Speculations concerning the first ultra-intelligent machine.



Good, a Cambridge-trained mathematician, Bletchley Park cryptographer, pioneering computer scientist and friend of Alan Turing, wrote that in the near future an ultra-intelligent machine would be built.

This machine, he continued, would be the 'last invention' that mankind would ever make, leading to an 'intelligence explosion'.



For Good, who went on to advise Stanley Kubrick on 2001: A Space Odyssey, the 'survival of man' depended on the construction of this ultra-intelligent machine.

The Centre for the Study of Existential Risk (CSER) will be opened at Cambridge and will examine the threat of technology to humankind

Huw Price, Bertrand Russell Professor of Philosophy and another of the centre's three founders, said such an 'ultra-intelligent machine, or artificial general intelligence (AGI)' could have very serious consequences.



He said: 'Nature didn’t anticipate us, and we in our turn shouldn’t take AGI for granted.

'We need to take seriously the possibility that there might be a ‘Pandora’s box’ moment with AGI that, if missed, could be disastrous.



'I don’t mean that we can predict this with certainty; no one is presently in a position to do that. But that’s the point.



'With so much at stake, we need to do a better job of understanding the risks of potentially catastrophic technologies.'

He added: 'The basic philosophy is that we should be taking seriously the fact that we are getting to the point where our technologies have the potential to threaten our own existence – in a way that they simply haven’t up to now, in human history.

'What better place than Cambridge, one of the oldest of the world’s great scientific universities, to give these issues the prominence and academic respectability that they deserve?

'Cambridge recently celebrated its 800th anniversary – our aim is to reduce the risk that we might not be around to celebrate its millennium.'