Elon Musk is a man of many qualities, one of which, apparently, is a willingness to call out big names when they are uninformed about a subject. A day after Facebook founder and CEO Mark Zuckerberg called Musk's doomsday predictions about AI "irresponsible," the Tesla, SpaceX, and SolarCity founder returned the favour by calling Zuckerberg's understanding of AI "limited."

Responding on Tuesday to a tweet about Zuckerberg's remarks on the matter, Musk said he had spoken to the Facebook CEO about it and concluded that his "understanding of the subject is limited."

Even as AI remains in its nascent stage - recent acquisitions suggest that most companies only started looking at AI-focused startups five years ago - major companies are aggressively placing big bets on it. Companies are increasingly exploring opportunities to use machine learning and other AI components to improve their products and services and push things forward.

But even as AI draws tremendous attention, some, Musk among them, worry that these efforts need regulation because they could pose a "fundamental risk to the existence of human civilisation."

At the National Governors Association summer meeting in the US earlier this month, Musk said, "I have exposure to the very cutting edge AI, and I think people should be really concerned about it. I keep sounding the alarm bell, but until people see robots going down the street killing people, they don't know how to react, because it seems so ethereal."

Over the weekend, during Zuckerberg's Facebook Live session, a user asked what he thought of Musk's remarks. "I have pretty strong opinions on this. I am optimistic," Zuckerberg said. "And I think people who are naysayers and try to drum up these doomsday scenarios -- I just, I don't understand it. It's really negative and in some ways I actually think it is pretty irresponsible."

Musk, who has himself invested in several AI startups over the years (DeepMind, later acquired by Google, for one), has been one of the most prominent voices expressing apprehension about the state of AI and how it could grow to affect humans in the future. Other voices of caution include Bill Gates and Stephen Hawking.

Interestingly, Elon Musk, in his personal capacity, and Facebook are among the investors in the non-profit OpenAI, which aims to create AI that augments human capabilities rather than making them obsolete. Other companies that have taken part in OpenAI include Google and Amazon.

In his biography by author Ashlee Vance, for instance, Musk commended the major AI work at Larry Page's Alphabet, but said he feared the company could still "produce something evil by accident," such as "a fleet of artificial intelligence-enhanced robots capable of destroying mankind."

Musk's apprehension about AI is in line with the concept of the technological singularity, which holds that AI super-intelligence will trigger runaway technological growth, bringing unforeseen changes at a rate humans cannot keep up with. A related concept, often referred to as doomsday, imagines a strongly superhuman intelligence that decides killing humans is the "right" and "best" solution to the planet's problems.

The Tesla CEO is already doing his part to tackle the problem. He has founded Neuralink, a startup building a "brain-computer interface" that aims to unlock the potential of the human brain and, among other things, could augment humans enough to compete with robots and AI in the future.