A two-year-old video of a prankster confusing a self-driving car has gone viral for a second time, and it illustrates a critical reason why Tesla’s self-driving cars are unlikely to become ubiquitous anytime soon.

Going viral once is a big event, but a “recurrent” viral video indicates that something about the content has gripped people in some visceral fashion.

In this case, it’s the fact that human ingenuity can fool a self-driving car. That drives home why such cars seem unlikely to gain mass-market appeal anytime soon, no matter which company makes them.

How to Confuse a Self-Driving Car

The video below shows a self-driving car enclosed by a solid, unbroken circle of salt, which is in turn surrounded by a ring of dashed salt lines.

[embedded content]

The video demonstrates that the self-driving car understands it can cross the dashed lines, and even the solid line just beyond them.

As anyone who has ever driven a car knows, a dashed line paired with a solid one is the near-universal signal that a car on the dashed side may cross into the opposing lane to pass the vehicle ahead.

However, while a human driver knows to return to the original lane to avoid a head-on collision, the self-driving car evaluates each marking in isolation. It does not use the fact that it just crossed the dashed line to conclude that it must cross back.

Consequently, the car ends up stuck inside the circle of solid salt. From the inside, the only marking the software sees is a solid line, and a solid line, it believes, must not be crossed, lest crossing it put the car into a head-on collision.
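The trap described above can be captured in a toy sketch. This is a hypothetical illustration of a stateless lane-marking rule, not Tesla’s actual software: the car looks only at the marking on its own side of the line and keeps no memory of lines it has already crossed.

```python
# Toy model of a stateless lane-marking rule (hypothetical; not Tesla's
# actual Autopilot logic). The car decides per marking, with no memory
# of markings it has already crossed.

def may_cross(near_marking: str) -> bool:
    """Crossing is legal when the marking on the car's side is dashed."""
    return near_marking == "dashed"

# Approaching the circle from outside: the dashed ring is nearest,
# so the rule permits crossing, and the car drives in.
print(may_cross("dashed"))  # True

# Once inside, the solid ring is nearest. Because the rule is stateless,
# the car has no record of having entered, so it now refuses to leave.
print(may_cross("solid"))   # False: the salt-circle trap
```

A stateful version would remember the inbound crossing and allow the outbound one; the joke works precisely because the sketched rule has no such memory.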

It seems likely that, at some point, software engineers will find a solution to this particular problem.

Yet the trick also hints at the near-infinite number of variables in play when a human being drives a car, multiplied by the further variables that arise when that driver shares the road with other vehicles.

Maybe One Day This Can Be Fixed. Maybe.

Driving is not fun for those reasons, and it is not going to become fun, regardless of what Tesla CEO Elon Musk says.

No matter how intelligent self-driving cars and AI interfaces become, they will never be able to account for every situation. That is going to lead to terrible accidents, and the public is going to take notice.

So will insurance companies and personal injury lawyers.

The Public Will Never Trust Tesla’s Robot Cars

The public will be wary of trusting self-driving cars that make these kinds of errors. Insurance companies will be hesitant to cover self-driving cars. Personal injury lawyers will rub their greedy little hands together at the prospect of terrible accidents.

This is not to say that the public will never come to accept self-driving vehicles, but expecting the mass market to adopt them in anything less than ten years is folly, no matter what Elon Musk says.

[embedded content]

While technology will likely advance to the point where these kinds of problems are no longer a worry, that point is probably decades away.

Disclaimer: The views expressed in the article are solely those of the author and do not represent those of, nor should they be attributed to, CCN.