What is it going to take for us to trust artificial intelligence?

I mean, really.

Right now, these are the images most people conjure when I start talking about artificial intelligence:

Or if you’re a Westworld fan, perhaps this one of Dolores:

The immediate fear for most people is that robots are going to take over the world. Even smart people like Stephen Hawking, Elon Musk, and Sergey Brin have been very forthcoming about their fears of artificial intelligence.

Centralized artificial intelligence is a good thing to fear. AI in and of itself, not so much.

What will it take for us to trust artificial intelligence?

Because trust is how humans move into new spaces, whether technological or not. Once we move to a certain level of trust, we can step farther along the adoption curve.

This iPod really is better than carting around this giant folder of CDs. The internet really is a hub of information and isn’t just for porn. Google Maps really will get me to my destination, and I don’t have to try to drive while reading a folded map. Computers really are better at getting men to the moon. Scanners really aren’t shooting radiation into my groceries. …Robots really aren’t going to take over the world and kill us all.

For those in the “robots are going to kill us” camp: we humans do a fine job of that already. Wars aside, plenty of data exists to quantify human screw-ups: wrong medications, wrong diagnoses, deaths, car accidents, any data set you’d like, including ones from last week, when, in today’s advanced technological age, we should have this keeping-other-humans-alive thing down to a precise dance.

As one simple example, about two years ago the city undertook a big construction project near my house, one that required reconfiguring the traffic lights because the existing streets had been torn up and traffic still had to move through the intersection for the interim of the project. Very quickly, wrecks started happening, and kept happening on a near-daily basis: minor fender benders, but verifiable proof that the new configuration wasn’t working. For varied reasons (time, money, ignorance), the construction company and city engineers stayed committed to their chosen layout of the traffic lights.

Early one Sunday morning, a 23-year-old mother of two small children was involved in another wreck. She did not survive her injuries. Within hours of her death, the street light configuration was changed and for the remaining duration of the project, not a single traffic accident occurred.

In that instance, the humans didn’t trust the data (in the form of accidents) because that wasn’t their metric, their objective. The accidents were cleaned up with minimal impact to the construction sites. No one bothered to question if they should change the metric.

Human life lost.

No artificial intelligence involved.

Only human intelligence.

Compare that situation to AI, where the metric must be continually defined, adjusted, and reset. There is no gray area with data; it is what it is. Same with code: black or white, if this, then that. Metrics given to an AI for the reconfiguration of the intersection would have included: identify the optimal location for traffic lights, maintain strict accordance with DOT code, and preserve human life. Additionally, the AI would have been continually training on the live feedback of what was occurring at the intersection, every second, and (ideally) reconfiguring. Reports of those findings would have been generated for the construction supervisors and city officials. One report in the very early stages would have identified the likelihood (probably with startling clarity) of the mother’s impending death. At that point, the humans would have had to reconsider their metrics.
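To make that concrete, here is a minimal, entirely hypothetical sketch of what those metrics might look like as explicit, inspectable objectives. Every name, number, and threshold below is invented for illustration; the point is only that an AI's criteria, unlike the construction crew's unstated ones, would have to be written down, and safety could be a hard constraint rather than an afterthought.

```python
# Hypothetical sketch: encoding the intersection metrics as explicit objectives.
# All class names, fields, and thresholds are invented for illustration.

from dataclasses import dataclass


@dataclass
class IntersectionConfig:
    name: str
    meets_dot_code: bool                  # strict accordance with DOT code
    avg_delay_seconds: float              # traffic-flow objective
    predicted_accidents_per_week: float   # safety estimate from live feedback


def evaluate(config: IntersectionConfig,
             max_accidents_per_week: float = 0.5):
    """Return (acceptable, reasons). Safety is a hard constraint,
    not just one more term in a weighted score."""
    reasons = []
    if not config.meets_dot_code:
        reasons.append("violates DOT code")
    if config.predicted_accidents_per_week > max_accidents_per_week:
        reasons.append(
            f"predicted {config.predicted_accidents_per_week:.1f} "
            f"accidents/week exceeds limit of {max_accidents_per_week}"
        )
    return (len(reasons) == 0, reasons)


# The interim layout the crew committed to, with accidents piling up:
chosen = IntersectionConfig("interim layout", True, 40.0, 5.0)
ok, why = evaluate(chosen)
# A report like this is what would have reached supervisors and officials.
print(ok, why)
```

Run against the accident data the city already had, a check like this would have flagged the layout within days, long before anyone had to die to change the metric.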

Yet we don’t trust AI to work alongside us to find solutions.

We expect it to overthrow us.

Because that’s what we do as humans.

Yesterday, 151,600 people died.

In the time it took you to read this, 340 human lives came to an end.

No artificial intelligence involved.

Only human.

The answer to trusting AI is not, “When it stops killing us.”

AI already kills fewer humans per year in automobile accidents than human drivers.

What’s the real reason we don’t trust it?

When will we?

What will it take?