Elon Musk, the billionaire entrepreneur who cofounded companies like PayPal, Tesla, and SpaceX, has once again warned that artificial intelligence (AI) poses a threat to humanity's existence.

This time he tweeted that competition for AI could lead to a third world war, after Russian president Vladimir Putin told a group of students last week that the country with the best AI will be "the ruler of the world."

Musk tweeted out a story on Putin's comments and added: "Competition for AI superiority at national level most likely cause of WW3 imo."

Musk has repeatedly warned about doomsday AI scenarios, despite the fact that no one really knows how advanced the technology will become, who will look to harness it, or how.

Even the smartest machines today are unable to perform more than one task, and their uses remain limited. An AI might be able to learn how to play a board game, for example, but that same AI can't then learn how to spell or how to perform a surgical operation. This is one of the major limitations of AI at present.

Several AI experts, including Google DeepMind CEO Demis Hassabis and Skype cofounder Jaan Tallinn, believe that machines will eventually learn how to excel at a number of tasks, outsmarting humans and becoming "superintelligent" in the process. But estimates of the timescale vary wildly, from around 30-50 years to well over 100 years.

Musk has over 12 million followers on Twitter, and there's a risk that his comments could result in policymakers putting the brakes on AI development just as it's starting to take off. That would be a shame given AI's enormous potential to improve our lives. Companies operating in the field believe it can be harnessed to make new life-saving drugs and to cut the amount of energy used across entire nations.

It's fair to say that there are a number of far more pressing issues that humanity needs to contend with, including the prospect of a nuclear war on the Korean peninsula and mitigating the effects of climate change, which is already claiming thousands of lives through major weather events.

There are efforts underway to ensure that AI remains safe and of benefit to humanity. The Partnership on AI, for example — a collaboration involving Microsoft, Amazon, Google, DeepMind and others — is trying to determine things like whether it's possible to programme an AI with a set of ethics (and what those ethics should be) and how to prevent AI from being exploited by terrorists and other groups.