The automotive industry continues to throw its weight behind developing self-driving technology for the near future, but looking back on 2018, a year of lawsuits, wake-up calls about the technology's limitations, and the first known pedestrian fatality caused by a self-driving car, does anything but sell the promise of feasible self-driving cars reaching the road anytime soon.




Contrary to what our tech overlords in Silicon Valley (and, to be fair, Detroit too) have been saying, we will not have fully driverless cars in a couple years. Or even several years. If anything, this last year has only proven that self-driving cars won’t be viable on a massive scale for decades.

So many of this year’s stories involved technology failing in everyday circumstances, companies delaying their development programs, and governments clueless about what to do with the emerging tech. Even the success stories showed just how far the technology remains from being ready for prime time.


Elon Musk himself was once forced to conclude that “humans are underrated.” It turns out that’s true behind the wheel, too.

Let’s break down what did and didn’t work out in the biggest year ever for self-driving cars.

The Technology Isn’t Even Near Being Ready

By far the biggest story of the year for the entire automotive industry was the death of Elaine Herzberg in March. Herzberg was struck and killed while crossing the street at night by an Uber-owned self-driving Volvo XC90 test car as the safety driver watched a show on their phone. Further investigation showed the car detected Herzberg in the road and had up to six seconds to respond, but failed to do so.


At the time, Uber was one of the only companies to put just a single safety driver in its self-driving test cars, while most others kept two occupants in the car during testing. The car had no other way of actively tracking driver awareness, and both the safety driver and the car’s computer failed to avoid killing Herzberg, despite clear weather, street lights, and vehicle sensors that should have been capable of braking before impact.


Not only did the incident lead Uber to completely shut down its Arizona test operations and lay off hundreds of workers, it proved that the car’s suite of sensors and computers was clearly not ready to be making life-and-death decisions on public roads.

It also tested the public’s acceptance of self-driving vehicles on its roads, with Arizona Governor Doug Ducey rescinding Uber’s license to test, despite having championed his own government’s lack of oversight and regulation of Uber’s program just a couple of years before.


But Uber wasn’t the only company that ran into setbacks with its self-driving program. Volvo delayed its autonomous car program by four years; it was supposed to put Swedish families in allegedly self-driving-capable cars in 2018 as part of a pilot program. The automaker admitted that the required sensor technology would likely be more capable and easier to implement closer to 2021.

Tesla delayed its highly publicized goal of performing a fully autonomous coast-to-coast demo in one of its cars. Originally pitched in January of 2017, it was pushed back to maybe happening by the end of this year, and now it’s unclear when it will happen at all. While it would ultimately be more of a marketing gimmick, the continual delay of such an ambitious demonstration suggests that Autopilot, which uses machine learning trained on data from customer cars to improve its self-driving capabilities, may still be far from truly safe self-driving capability.


Tesla also had a brief issue in September with some owners losing all Autopilot functionality after an over-the-air update to their cars. The fix took a day to roll out, and while it only meant Tesla owners had to actually pay full attention during their highway commutes, what happens when an update bricks someone’s car in the future?


Google’s Waymo also had its fair share of rough headlines this year, despite being one of the few companies to actually get its pilot program of public self-driving car-sharing off the ground on schedule.

In August, a big report from The Information cited five unnamed sources who claimed that Waymo’s self-driving cars struggled with basic infrastructure comprehension and driving tasks, like making an unprotected left turn or stopping at the metering signals that control the flow of traffic from a ramp onto a road.


Despite a few issues this year with crashes and reports indicating its cars struggle with certain aspects of everyday driving, Waymo managed to get its self-driving ride-sharing program up and running earlier this month.

Conditions Need to Be Perfect

The commercial service operates a fleet of self-driving taxis for riders who participated in the company’s previous free pilot programs in the Phoenix area. Participants now pay a fare to ride in a self-driving car within a region of about 100 square miles covering Chandler, Tempe, Mesa, and Gilbert, and the service operates essentially just like Uber or Lyft.


The cars still come with a safety driver in the driver’s seat, but an early demonstration for Reuters went off without a hitch, with the driver never having to intervene.

However, it’s clear that Waymo cherry-picked this corner of Arizona for its near-perfect testing conditions, free of worries about inclement weather. Take Waymo’s cars somewhere that isn’t sunny and dry almost every day of the year, and they likely could not perform at the same level of safety. In those perfect conditions, as we found for ourselves earlier this year in a self-driving Lyft test car in Vegas, the technology is undeniably impressive.


(This is also one reason, along with Carnegie Mellon being nearby, that Pittsburgh is becoming a hotbed of autonomous car testing. The cars will have to learn to handle potholes, hills and brutal snow to be viable.)

Even the head of Google’s self-driving car unit, former Hyundai exec John Krafcik, has publicly doubted that self-driving cars will ever be able to perform in all conditions. He also believes we’re still decades away from the average person even having access to a self-driving car.


It’s a Business of Egos

Perhaps the clearest narrative this year was just how messy the companies behind the biggest self-driving programs are, and how far they’re willing to go to protect their trade secrets.


The high-profile lawsuit in which Waymo accused former Google engineer Anthony Levandowski of stealing its self-driving truck technology when he left to work for Uber was settled in just a week. Under the settlement, Uber agreed not to incorporate any of the technology Google claimed was stolen into its vehicles and to pay about $245 million.




All of this shook out right after Uber replaced its scandal-marred CEO Travis Kalanick, and Faraday Future and Apple had issues with employees over allegedly attempting to take trade secrets to other companies. Even Disney’s plans to outfit its parks with self-driving shuttles ended in a major lawsuit.

Speaking of that Levandowski guy, it turns out he was the epitome of Silicon Valley ego. In his time at Google, he reportedly believed there was nothing to learn from history, that safety could not be the number-one priority in the development of new technology, and that he deserved a minimum of a billion dollars for his work.


If you think that’s a little crazy, keep in mind this guy was extremely influential over what became Waymo.

One extremely alarming story from a New Yorker article alleges that Levandowski once tried to convince a coworker that their technology was more capable than the testing program allowed it to show. So he modified one of the cars to go off-route onto an untested stretch of freeway, where it ran another car off the road and into the median.


Levandowski kept on driving, never alerted the authorities or reported the incident, and later presented video of it to Waymo employees as an “invaluable” source of information for further development.


And he’s a dude who once said this:

“The only thing that matters is the future,” he told me after the civil trial was settled. “I don’t even know why we study history. It’s entertaining, I guess—the dinosaurs and the Neanderthals and the Industrial Revolution, and stuff like that. But what already happened doesn’t really matter. You don’t need to know that history to build on what they made. In technology, all that matters is tomorrow.”


Ugh.

The future of automotive safety and security boils down to the egos of these companies and their leadership, and if 2018 has proven one thing, it’s that they’re anything but reliable.




Governments Aren’t Prepared

Unfortunately, these are the people many government officials want to trust to keep consumers safe without much oversight, as this year has proven.


General Motors and its Cruise self-driving test operation announced earlier this year that it had a permit from the state of New York to begin testing its self-driving vehicles on the crazy streets of Manhattan—likely a much tougher test than what Waymo’s cars are subject to down in the Southwest.

But Cruise’s cars still aren’t on city streets; according to the New York Department of Motor Vehicles, the company still doesn’t have the necessary permit, as we reported back in September. When the governor’s office announced the project in October of last year, New York City’s mayor’s office hadn’t even been consulted. GM now says the holdup is due to a “complex regulatory environment.”


So it’s not only the companies developing self-driving cars that are having problems getting along.

What’s even more alarming is some lawmakers’ insistence on legally staying out of the way of self-driving car development. Michigan Governor Rick Snyder said such companies should be given “the benefit of the doubt,” since he thinks they’re working to make cars safer. That comment came after the Uber test car killed someone on a public street.


The federal government isn’t much better, as its major self-driving car legislation, the AV START Act, had to be overhauled after critics claimed it didn’t go far enough in establishing safety standards for the new technology. Even the revised version of the bill was criticized by the Center for Auto Safety as “a lot of hot air which has no substantial safety improvements but will likely inflate car and tech company stock prices.”

The outlook for self-driving cars going into next year is essentially the same as it was going into this year, except with more companies jumping in and still no clear path to making these things viable.


We’re still unclear on how to legislate for cars that drive themselves, we can’t be confident that the companies rushing to get cars on public roads are doing enough to ensure they’re ready, and we still don’t know if there will ever be a substantial market demanding them when they are ready. And there’s more doubt in the minds of the public than ever.

But we do know it will be decades before the technology is ready to go mainstream, and that these cars will likely never be prepared for every driving condition. And we know that, as of now, they aren’t good enough to avoid a woman walking a bicycle across the street at night.


And then there’s the root question of it all: How do we begin to define what “good enough” even is?