The best way to prevent a robot takeover is to start creating solutions

But 'doomsday scenarios' deserve to be considered carefully, he said

He said artificial intelligence will help humanity in the future

The doomsday scenario of killer robots taking over the world isn't going to happen.

That's according to Google chairman Eric Schmidt, who says we should stop worrying about it and start focusing on the positives.

He has said artificial intelligence (AI) will be developed for the benefit of humanity, and although doomsday scenarios should be considered, he is optimistic about the future.

Artificial intelligence will let scientists solve some of the world's 'hard problems,' according to Google chairman Eric Schmidt, who claims that super-intelligent robots will someday help us solve problems such as population growth and climate change

WHY WE SHOULDN'T BE SCARED 'The original Kodak camera was seen as destroying art,' Mr Schmidt said. 'Electricity was believed to be too dangerous when it was first introduced. 'But once these technologies got into the hands of millions of people, and they were developed openly and collaboratively, those fears subsided. 'Just as the agricultural revolution has freed us from spending our waking hours picking crops by hand in the fields, the AI revolution could free us from menial, repetitive, and mindless work.'

Schmidt, who is executive chairman of Google parent company Alphabet, made the comments in an opinion piece in Fortune, co-written with his colleague Sebastian Thrun.

'The history of technology shows that there's often initial skepticism and fear-mongering before it ultimately improves human life,' Mr Schmidt said.

Mr Schmidt and Mr Thrun said that while 'doomsday scenarios' deserve 'thoughtful consideration,' the best course of action is to get to work on creating solutions.

'Google, alongside many other companies, is doing rigorous research on AI safety, such as how to ensure people can interrupt an AI system whenever needed, and how to make such systems robust to cyberattacks.'

The pair said that technology like Google's AlphaGo could improve the things we do every day.

'Imagine a world where clever apps and devices could help us recognize every person we’ve ever met, recall anything we’ve ever said, and experience any moment we’ve ever missed. A world where we could in effect speak every language.'

As artificial intelligence advances, the possibility that machines could independently select and fire on targets is fast approaching. Fully autonomous weapons, also known as 'killer robots,' are quickly moving from the realm of science fiction (like the plot of Terminator, pictured) toward reality

This is not the first time the Google chairman has sought to allay fears surrounding artificial intelligence.

During a talk in Cannes earlier this month, Eric Schmidt said AI will be developed for the benefit of humanity and there will be systems in place in case anything goes awry.

'To be clear, we're not talking about consciousness, we're not talking about souls, we're not talking about independent creativity,' said Mr Schmidt, according to Hollywood Reporter.

He said the company will soon be launching an AI that can automatically respond to IM messages.

'We've all seen those movies,' he said. But he said in reality, people would always know how to turn the AI systems off, should it ever get to a dangerous point.

He also said several companies will quickly follow with similar tech based on AlphaGo from Google's DeepMind project.

Google's DeepMind start-up, which was bought for £255 million ($400 million) last year, is currently attempting to mimic the properties of the human brain's short-term working memory.

A sinister threat is brewing deep inside the technology laboratories of Silicon Valley, according to Professor Stephen Hawking (pictured). Artificial Intelligence, disguised as helpful digital assistants and self-driving vehicles, is gaining a foothold, and it could one day spell the end for mankind

STEPHEN HAWKING WARNS OF A ROBOTIC UPRISING A sinister threat is brewing deep inside the technology laboratories of Silicon Valley, according to Professor Stephen Hawking. Artificial Intelligence, disguised as helpful digital assistants and self-driving vehicles, is gaining a foothold, and it could one day spell the end for mankind. The world-renowned professor has warned robots could evolve faster than humans and their goals will be unpredictable. On the Larry King Now show, Professor Hawking spoke of his fears about the future of the human race. 'I don't think advances in artificial intelligence will necessarily be benign,' Professor Hawking said. The physicist has previously been outspoken about his beliefs. Professor Hawking was interviewed from the Canary Islands, where he was being honored at the 'Starmus' Festival, aimed at making science accessible to the public. 'Once machines reach a critical stage of being able to evolve themselves we cannot predict whether their goals will be the same as ours.' 'Artificial intelligence has the potential to evolve faster than the human race.'

By combining the way ordinary computers work with the way the human brain works, the artificial intelligence researchers hope the machine will learn to program itself.

Described as a 'Neural Turing Machine', it learns as it stores memories, and can later retrieve them to perform logical tasks beyond those it has been trained to do.

The acquisition of DeepMind followed Google's recent purchase of seven robotics firms, including Meka, which makes humanoid robots, and Industrial Perception, which specialises in machines that can package goods, for example.

In August last year, Google also revealed it had teamed up with two of Oxford University's artificial intelligence teams to help machines better understand users.

'It is a really exciting time for AI research these days, and progress is being made on many fronts including image recognition and natural language understanding,' wrote Demis Hassabis, co-founder of DeepMind and vice president of engineering at Google, in a blog post.

But despite these projects, and Mr Schmidt's comments, Google is also aware of the dangers involved with AI and machine learning.

So much so that in January 2014 it set up an ethics board to oversee its work in these fields.

In fact, one of the original founders of Google's DeepMind warned artificial intelligence is the 'number one risk for this century,' and believes it could play a part in human extinction.

The Google boss, who is involved in the development of AI in applications such as self-driving cars (pictured), also says that the fear of robots stealing human jobs is unwarranted

Eric Schmidt's comments (right) follow a warning by Professor Stephen Hawking (left) that humanity faces an uncertain future as technology learns to think for itself and adapt to its environment

'Eventually, I think human extinction will probably occur, and technology will likely play a part in this,' DeepMind's Shane Legg said in an interview earlier this year.

The ethics board, revealed by the website The Information, is intended to ensure the projects are not abused.

Earlier this year, Elon Musk likened artificial intelligence to 'summoning the demon'.

The Tesla and Space X founder previously warned that the technology could someday be more harmful than nuclear weapons.

Professor Stephen Hawking has also outwardly spoken of his concerns regarding the rise of rogue robots.

Earlier this week, the professor appeared on the Larry King Now show, discussing his fears about the future of the human race.

But Mr Schmidt said fears that we would be overtaken by artificial intelligence were unfounded. 'We'll make sure that people know how to turn this stuff off should we get to that point.'
