A very recent article follows in the footsteps of many others in arguing that autonomous cars on our roads are a little further off than many pundits have been predicting for the last few years. Readers of this blog will know that I have been saying this for over two years now. Such skepticism is now becoming the common wisdom.

In this new article at The Ringer, from May 16th, the author, Victor Luckerson, reports:

For Elon Musk, the driverless car is always right around the corner. At an investor day event last month focused on Tesla’s autonomous driving technology, the CEO predicted that his company would have a million cars on the road next year with self-driving hardware “at a reliability level that we would consider that no one needs to pay attention.” That means Level 5 autonomy, per the Society of Automotive Engineers, or a vehicle that can travel on any road at any time without human intervention. It’s a level of technological advancement I once compared to the Batmobile.

Musk has made these kinds of claims before. In 2015 he predicted that Teslas would have “complete autonomy” by 2017 and a regulatory green light a year later. In 2016 he said that a Tesla would be able to drive itself from Los Angeles to New York by 2017, a feat that still hasn’t happened. In 2017 he said people would be able to safely sleep in their fully autonomous Teslas in about two years. The future is now, but napping in the driver’s seat of a moving vehicle remains extremely dangerous.

When I saw someone tweeting that Musk’s comments meant that a million autonomous taxis would be on the road by 2020, I tweeted out the following:

Let’s count how many truly autonomous (no human safety driver) Tesla taxis (public chooses destination & pays) on regular streets (unrestricted human driven cars on the same streets) on December 31, 2020. It will not be a million. My prediction: zero. Count & retweet this then.

I think these three criteria need to be met before someone can say that we have autonomous taxis on the road.

The first criterion, no human safety driver, has not been met by a single experimental deployment of autonomous vehicles on public roads anywhere in the world. They all have safety humans in the vehicle. A few weeks ago I saw an autonomous shuttle trial along the paved public walkways at the beach on which I grew up, in Glenelg, South Australia, where there were two “onboard stewards to ensure everything runs smoothly” along with eight passengers. Today’s demonstrations are just not autonomous. In fact, in the article above Luckerson points out that Uber’s target is to have their safety drivers intervene only once every 13 miles, but they are way off that capability at this time. Even if they were to meet that goal, it would hardly count as autonomous. Imagine the car you are driving breaking down once every 13 miles; we expect better.

And if normal human beings cannot simply use these services (in Waymo’s Phoenix trial only 400 pre-approved people are allowed to try them out) and go anywhere that they could go in a current-day taxi, then the things deployed will not really be autonomous taxis. They will be something else. Calling them taxis would be redefining what a taxi is. And if you can just redefine words on a whim, there is really not much value to your words.

I am clearly skeptical about seeing autonomous cars on our roads in the next few years. In the long term I am enthusiastic. But I think it is going to take longer than most people think.

In response to my tweet above, Kai-Fu Lee, a very strong enthusiast about the potential for AI, and a large investor in Chinese AI companies, replied with:

If there are a million Tesla robo-taxis functioning on the road in 2020, I will eat them. Perhaps @rodneyabrooks will eat half with me?

I readily replied that I would be happy to share the feast!

Luckerson talks about how executives, in general, are backing off from their previous predictions about how close we might be to having truly autonomous vehicles on our roads. Most interestingly he quotes Chris Urmson:

Chris Urmson, the former leader of Google’s self-driving car project, once hoped that his son wouldn’t need a driver’s license because driverless cars would be so plentiful by 2020. Now the CEO of the self-driving startup Aurora, Urmson says that driverless cars will be slowly integrated onto our roads “over the next 30 to 50 years.”

Now let’s take note of this. Chris Urmson was the leader of Google’s self-driving car project, which became Waymo around the time he left, and he is now the CEO of a very well funded self-driving startup. He says “30 to 50 years”. Chris Urmson has been a leader in the autonomous car world since before it entered mainstream consciousness. He has lived and breathed autonomous vehicles for over ten years. No grumpy old professor is he. He is a doer and a striver. If he says it is hard then we know that it is hard.

I happen to agree, but I want to use this reality check for another thread.

If we were to have AGI, Artificial General Intelligence, with human level capabilities, then certainly it ought to be able to drive a car, just like a person, if not better. A self-driving car does not need to have general human level intelligence, but driving a car is certainly a lower bound on human level intelligence. Urmson, a strong proponent of self-driving cars, says 30 to 50 years.

So what does that say about predictions that AGI is just around the corner? And what does it say about AGI being an existential threat to humanity any time soon? We have plenty of existential threats to humanity lining up to bash us in the short term, including climate change, plastics in the oceans, and a demographic inversion. If AGI is a long way off then we cannot say anything sensible today about what promises or threats it might bring, as we will have completely re-engineered our world long before it shows up, and when it does show up it will be in a world that we cannot yet predict.

Do people really say that AGI is just around the corner? Yes, they do…

Here is a press report on a conference on “Human Level AI” that was held in 2018. It reports that some of the respondents to a survey at that conference said they expected human level AI to arrive within 5 to 10 years. Now, I must say that looking through the conference site I see more large hats than cattle, but these are mostly people with paying corporate or academic jobs, and some of them think this.

Ray Kurzweil still maintains, in Martin Ford’s recent book, that we will see a human level intelligence by 2029. In the past he has claimed that a singularity will follow, as the intelligent machines will be so superior to human level intelligence that they will exponentially improve themselves (see my comments on belief in magic as one of the seven deadly sins in predicting the future of AI). Mercifully, the average prediction of the 18 respondents to this particular survey was that AGI would show up around 2099. I may have skewed that average a little, as I was an outlier amongst the 18 people at the year 2200. In retrospect I wish I had said 2300, and that is the year I have been using in my recent talks.

And a survey taken by the Future of Life Institute (warning: that institute has a very dour view of the future of human life, worse than my concerns of a few paragraphs ago) says we are going to get AGI around 2050.

But that is the low end of Urmson’s range for when we will have autonomous cars deployed. Suppose he is right about his range. And suppose I am right that autonomous driving is a lower bound on AGI, and I believe it is a very low bound. With these very defensible assumptions, the seemingly sober experts in Martin Ford’s new book are, on average, wildly optimistic about when AGI is going to show up.

AGI has been delayed.