Two main problems with artificial intelligence lead people like Mr. Musk and Mr. Hawking to worry. The first, a nearer-term fear, is that we are starting to create machines that can make decisions like humans, but these machines don’t have morality and likely never will.

The second, which is further off, is that once we build systems as intelligent as humans, those machines will be able to build still smarter machines, commonly referred to as superintelligence. That, experts say, is when things could really spiral out of control, as the intelligence and capabilities of machines would begin to grow exponentially. We can’t build safeguards into something that we haven’t built ourselves.

“We humans steer the future not because we’re the strongest beings on the planet, or the fastest, but because we are the smartest,” said James Barrat, author of “Our Final Invention: Artificial Intelligence and the End of the Human Era.” “So when there is something smarter than us on the planet, it will rule over us on the planet.”

What makes it harder to comprehend is that we don’t actually know what superintelligent machines will look or act like. “Can a submarine swim? Yes, but it doesn’t swim like a fish,” Mr. Barrat said. “Does an airplane fly? Yes, but not like a bird. Artificial intelligence won’t be like us, but it will be the ultimate intellectual version of us.”

Perhaps the scariest scenario is how these technologies could be used by the military. It’s not hard to imagine countries engaged in an arms race to build machines that can kill.

Bonnie Docherty, a lecturer on law at Harvard University and a senior researcher at Human Rights Watch, said that the race to build autonomous weapons with artificial intelligence, which is already underway, is reminiscent of the early days of the race to build nuclear weapons. Treaties should be put in place now, she said, before we reach a point where machines are killing people on the battlefield.