Elon Musk has said that A.I. could wipe out the human race. Could it?

Iya Khalil: Some applications of A.I. may be concerning. But we're just starting to see machine learning improving treatments in medicine.

Louis Del Monte: Experiments suggest that once machines have neural networks, they will be able to ignore their initial programming. We won't be able to program laws and assume that humanity is safe.

Might some factor make A.I. unlikely to be used widely by businesses in the near future?

Khalil: Its inability to distinguish between cause and effect. But we can start empowering physicians to make better decisions--and spend more time with patients--since machines automate certain menial tasks.

Del Monte: Machines are very close to passing the Turing test--where a person could converse with a machine via text and not know that it was a machine. That's when A.I. is as "intelligent" as a human.

What is an unforeseen consequence of A.I. adoption?

Khalil: There may be a point when we've built so many machines that we lose track of how they work and lose our ability to control them. We need to be thoughtful and plan for problems ahead of time.

Del Monte: We're already implanting chips in the brains of people who are having strokes. Soon we might be using these to help people with low IQs become "normal." Once that happens, will those humans identify more with machines--or will they retain their humanity?

Could a machine become smart enough to start and run a company?

Khalil: We could design a machine that is smart enough to run a company, yes. The question is: Would it have the desire to? Would it have the creativity?

Del Monte: I believe that by mid-century, we'll have a machine that equates to an Elon Musk.

Advantage: Del Monte