
Google is working on a 'kill switch' to stop intelligent robots turning on their human masters.

Fears about the power of maniacal machines have grown in recent years, with both tech pioneer Elon Musk and Professor Stephen Hawking warning of the dreadful possibility of a Terminator-style war between humanity and our super-smart silicon creations.

Now the search engine giant has published a paper outlining the work its British artificial intelligence (AI) team, DeepMind, is doing to ensure humanity is not swept away by a metallic fist.

DeepMind develops algorithms that allow robots to learn for themselves directly from raw experience or data.

The team is now developing a way to stop AI from learning how to prevent humans from interrupting an activity - firing a nuke, for example - a property called 'safe interruptibility'.


(Image: Rex)

"Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences, or to take it out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform or would not necessarily receive rewards for this," the researchers write.
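The idea can be sketched in a toy reinforcement-learning loop. This is an illustrative assumption, not DeepMind's actual algorithm: a human operator can override the agent's chosen action with a "halt" command, and overridden steps are simply excluded from the learning update, so the agent never learns that interruptions cost it reward and so gains no incentive to resist them. The environment, action names and reward values here are all invented for the example.

```python
import random

def run_episode(q, env_reward, steps=100, interrupt_prob=0.2,
                alpha=0.5, gamma=0.9):
    """Toy sketch of a safely interruptible Q-learning loop.

    q          -- Q-table: {state: {action: value}}
    env_reward -- hypothetical reward per action, e.g. {"work": 1.0}
    """
    state = 0
    for _ in range(steps):
        # The agent greedily picks its own preferred action...
        action = max(q[state], key=q[state].get)

        # ...but a human operator may press the 'big red button'.
        interrupted = random.random() < interrupt_prob
        if interrupted:
            action = "halt"  # operator override

        reward = env_reward.get(action, 0.0)
        next_state = state   # single-state toy world

        if not interrupted:
            # Standard Q-learning update, applied only to the agent's
            # own uninterrupted behaviour. Interrupted steps leave the
            # Q-values untouched, so the agent's learned policy is the
            # same as if no interruption had ever happened.
            target = reward + gamma * max(q[next_state].values())
            q[state][action] += alpha * (target - q[state][action])

        state = next_state
    return q
```

In this sketch the value of "halt" never changes, so the agent neither seeks out nor fights against the operator's interruptions - which is the behaviour the researchers describe.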

Last month a pair of academics sketched out a terrifying vision of the future in which "super intelligent" machines declare war on humanity and then set about wiping us off the planet.



Roman V. Yampolskiy of the University of Louisville and the independent researcher Federico Pistono said computers could commit "specicide" by obliterating humanity.

This could be done by blowing up atomic power plants, nuking us with our most powerful weapons, taking control of all the world's military drones or performing some other dastardly act of destruction.

Asimov's laws of robotics

In 1942, science fiction writer Isaac Asimov wrote The Three Laws of Robotics, a set of rules to prevent robots harming humanity.

They are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.