Chris Olah at Google Research has, in a blog post on Tuesday, spelled out the five big questions about how to develop smarter, safer artificial intelligence.

The post came alongside a research paper Google released in collaboration with OpenAI, Stanford and Berkley called Concrete Problems in AI Safety. It's an attempt to move beyond abstract or hypothetical concerns around developing and using AI by providing researchers with specific questions to apply in real-world testing.

"These are all forward thinking, long-term research questions -- minor issues today, but important to address for future systems," said Olah in the blog post.

The five points are:

Avoiding Negative Side Effects: AI shouldn't disturb its environment while completing set tasks

AI shouldn't disturb its environment while completing set tasks Avoiding Reward Hacking: AI should complete tasks properly, rather than using workarounds (like a cleaning robot that covers dirt with material it doesn't recognise as dirt)

AI should complete tasks properly, rather than using workarounds (like a cleaning robot that covers dirt with material it doesn't recognise as dirt) Scalable Oversight: AI shouldn't need constant feedback or input to be effective

AI shouldn't need constant feedback or input to be effective Safe Exploration: AI shouldn't damage itself or its environment while learning

AI shouldn't damage itself or its environment while learning Robustness to Distributional Shift: AI should be able to recognise new environment and still perform effectively in them

Google has made no secret about its commitment to AI and machine learning, even having its own dedicated research branch, Google DeepMind. Earlier this year, DeepMind's learning algorithm AlphaGo challenged (and defeated) one of the world's premier (human) players at the ancient strategy game Go in what many considered one of the hardest tests for AI.