What Elon Musk Could Have Shared About Artificial Intelligence But Didn’t

If I had a sit-down with Elon Musk after his SXSW talk.

Elon Musk has many strengths. Oratory is not one of them.

Yet his words flowed like a jet stream at the South by Southwest (SXSW) event in Austin last weekend, especially when he was needled on artificial intelligence (AI).

In his words, “Mark my words, AI is far more dangerous than nukes, by far. So, why do we have no regulatory oversight? This is insane.”

How did he back up his serious statement?

1) Like many of us naturally do, he put his own experience at the forefront as a credibility builder.

“I am really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me. It’s capable of vastly more than almost anyone knows and the rate of improvement is exponential.”

2) Like many of us do, he backed it up with a real example.

DeepMind, Google’s AI company, has an AI called AlphaGo that plays the board game Go.

“Over six to nine months AlphaGo went from being unable to beat a relatively good Go player to then beating world champions. AlphaGo Zero, its successor, then completely destroyed AlphaGo 100–0.”

3) Like many of us do, he concluded by reinforcing his main point.

“The rate of improvement is really dramatic, but we have to figure out some way to ensure that the advent of digital super intelligence is one which is symbiotic with humanity. I think that’s the single biggest existential crisis that we face, and the most pressing one.”

Elon took this same line of presentation last year too. It is not working.

Steven Pinker, Harvard professor and optimist, pushed back. He said, “If Elon Musk was really serious about the AI threat he’d stop building those self-driving cars.”

This was Elon’s response on Twitter — “Wow, if even Pinker doesn’t understand the difference between functional/narrow AI (e.g. car) and general AI, when the latter *literally* has a million times more compute power and an open-ended utility function, humanity is in deep trouble”

The question is: why is Elon having trouble communicating what he sees as the biggest risk? Why is he doubling down on hyperbolic headlines? Could communication be his problem?

Facts Tell, Stories Stick.

I like this story from the renowned scientist Michio Kaku. He makes a point about aliens that could equally apply to artificial intelligence.

“The real danger to a deer in the forest is not the hunter with a gigantic rifle, but the developer.

The guy with blueprints, the guy in the three-piece suit, the guy with the slide rule and calculator.

The guy that is going to pave the forest and perhaps destroy a whole ecosystem.”

The recently deceased physicist Stephen Hawking made a great point that mirrors the hunter with the gigantic rifle.

“The automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.”

Elon, through his fleeting reference to the “existential crisis”, worries about the paver in the three-piece.

How does he make his point? By pushing back on the so-called experts.

Expert Pushback.

“The biggest issue I see with so-called AI experts is that they think they know more than they do, and they think they are smarter than they actually are. This tends to plague smart people. They define themselves by their intelligence and they don’t like the idea that a machine could be way smarter than them, so they discount the idea — which is fundamentally flawed.”

If you point a finger, three fingers point back at you.

Here is a better way — it works with my children, so it should work with adults too.

The Story Elon Musk Forgot to Tell.

“No matter who you are or what your profession is — whether you’re an entrepreneur or in sales or a designer or a developer — no matter what you do, your job is to tell a story.” — Gary Vaynerchuk

Three learned scholars and a common man were walking through a forest. They found the bones of a dead animal and decided to bring it back to life using their knowledge.

The first man said, “Okay, I will assemble the bones into a skeleton.” With the power of his learning, he pieced together the skeleton.

The second man commanded flesh and blood to fill the skeleton and skin to cover it.

When the third man was about to bring life to the body, the common man warned him, “Look, this looks like the body of a lion. If it comes to life, it will kill us all.”

The third scholar said, “You are a fool. Do you think I will lose this opportunity to test my learning?”

The common man told him to wait so that he could climb a tree for safety. When the third man gave the animal life, the lion sprang up and killed all three learned men.

If it had been a donkey instead of a lion, the story would have been different.

With AI, everything is abstract; it is hard to tell what we will unleash — a lion, or a reliable workhorse like a donkey.

One thing we as humans have done well: our tribal instincts have helped us survive and thrive.

Seen in that context, Elon’s words on how to approach it make sense.

He said, “This is a case where you have a very serious danger to the public, therefore there needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely — this is extremely important.”

We know Elon as a visionary and a problem solver. He pioneers solutions to challenges that most mortals give up on without even trying. His lifetime of work is proof — he has natural credibility when he speaks about “how to solve….”

He needs to spend his time communicating the existential risk to humanity to a larger audience, in ways we can grasp.

Note to Elon and You.

Elon — people look up to you. You owe it to humanity to find better ways to broadcast the bigger picture.

Regulation is slow. Your message needs to be broadcast better and faster.

The question I leave all of you with — are your brain cells churning on how we can communicate this risk better?

Send Elon suggestions on Twitter. He does read and respond there.

I leave you with the scariest story ever told by the smartest man on earth in recent times — a man who had nothing to lose, Stephen Hawking.

John Oliver needled Stephen Hawking about artificial intelligence.

Here is what Stephen Hawking shared.

“There’s a story that scientists built an intelligent computer. The first question they asked it was: ‘Is there a God?’ The computer replies: ‘There is now.’ And a bolt of lightning struck the plug so it couldn’t be turned off.”

— — — —

Karthik Rajan

I am a positive person — most of the time. In other words, I work hard to be positive. Sometimes, just sometimes, some stories have to be told. When Elon combined my two topics — AI [my fascination started in high school, when computers were slow] and energy [I work in this space] — in one line, I felt an urge to pen my thoughts.