LAS VEGAS—Charlie Miller and Chris Valasek made headlines at the Black Hat conference a few years ago by hacking remotely into a Jeep Cherokee and steering it into a ditch. Since that time, this dynamic duo has worked with several self-driving car companies to prevent such shenanigans.

At the Black Hat 2018 conference, they revealed a surprising fact: self-driving cars are tougher to hack than their less-smart counterparts, and they're getting tougher.

"We spent a lot of time hacking cars, trying to make them do unsafe stuff," said Valasek. "Since we stopped that, we've worked at self-driving car companies. Now we're in the protector role. We'll define self-driving cars, demystify them, and debunk things you've seen in the news. We'll tell you how attacks could happen, and how we could secure them. We want everyone to be secure."

"We worked for two or three companies, currently for Cruise," said Miller. "Right now the whole field is competitive, with lots of secrets. Nobody wants to say how their solution works. Security isn't like that. We get everything into the open, and talk about what works and doesn't work as a community."

What Is a Self-Driving Car?

The pair laid out the standard levels of driving automation, six in all, starting with Level Zero: vintage cars that have no automation at all.

Level Four is where they work. At this level, the car can drive without a driver, even without pedals or a steering wheel, but only within a restricted, well-mapped area. Valasek noted that KITT, from the Knight Rider TV show, might be considered Level Five—it drives by itself, anywhere, any time. The current technology is nowhere near that level.

Valasek pointed out that these cars are not built to be self-driving from the ground up. They all start with a base vehicle. "You have this car," he explained, "and now you're making it autonomous. They drive a ton of miles to make sure the cars work correctly. But it's not just the amount of miles, it's the type. You have to drive in the chaos of San Francisco rush hour, and the spaghetti roads of Pittsburgh."

Valasek also pointed out that self-driving car hardware is expensive; too expensive for individuals to buy. The LIDAR alone can be $75,000. The companies are focusing for now on ride-share fleets, not on cars that Joe Consumer will buy. "Not only does the car see with LIDAR," added Valasek, "it does classification of the objects it sees. One thing you may not realize [is] there are rooms full of people looking at camera and sensor data and verifying the classification."

Self-Driving Means Secure

The pair pointed out that self-driving car sensors and components communicate via Ethernet, not the traditional automotive CAN bus. A self-driving car effectively has a data center in the trunk.

"You need the bandwidth of Ethernet," said Miller. "CAN can only handle so much. You need to secure the communication, including with the outside world. The car needs to know where it is, and where you want it to go."

The fact that these cars only drive in known, mapped areas means they know their location with vastly more accuracy than GPS offers. The car knows where crosswalks and stoplights are. It's accurate to inches, not the several-foot accuracy of GPS. "If you know the environment perfectly," said Miller, "then when you see something new it's important. The negative to this is, they only go where there's a map. You can't say, 'KITT, come get me, I'm in trouble.'"

Self-driving cars need constant monitoring, and they need to receive information about where to pick up and drop off passengers. This communication really needs to be secure, and it can be. The biggest step toward security involves reducing the attack surface. A localized self-driving car doesn't need Bluetooth. It doesn't even need an AM/FM radio.

"We're not trying to sell the car on features," noted Valasek. In addition, self-driving cars just aren't available to the public. Past attacks by Valasek and Miller required them to purchase the vehicle in question and analyze it. "It's hard to attack a device you don't have," explained Miller. "Yes, it's a kind of security through obscurity, but for now people can't examine them."

Security Challenges

Reliance on a stock base vehicle is still a problem, since those cars were designed without any thought of being self-driving. Miller noted that because General Motors owns Cruise, the two companies can work together to secure the base vehicle.

You've seen articles screaming that self-driving cars can be fooled by modification of the environment, like putting tape on stop signs. Valasek pooh-poohed the idea.

"Even if you pull up the stop sign and throw it away, the car still knows it should be there," he said. "And sensor hacks are nonsense. Jam or spoof the GPS signal? No effect on the car, as it doesn't even need GPS."

Likewise, attackers can't blind the car by tricking LIDAR. "Those headlines don't apply to real self-driving cars," said Miller. "They have LIDAR, radar, cameras, and other sensors. And if the car doesn't get good data, it just stops."

Defending Self-Driving Cars

Since software and hardware can't be perfectly secure, the biggest defense is to reduce the attack surface. Remove everything that isn't necessary. "What do you need this for? Can't answer? Yank it out," said Valasek. "Also, don't allow any inbound connections. Wait for the car to make the connection."
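Valasek's "no inbound connections" rule can be sketched in a few lines: the car always dials out to a known backend over TLS and polls for instructions, and never binds a listening socket, so there is no open port for an attacker to probe. The hostname and message format below are hypothetical, purely to illustrate the pattern:

```python
import json
import socket
import ssl

BACKEND_HOST = "dispatch.example.com"  # hypothetical fleet backend
BACKEND_PORT = 443

def make_client_context() -> ssl.SSLContext:
    """TLS context that verifies the backend's certificate and hostname."""
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    return context

def poll_for_instructions() -> dict:
    """Dial out and ask for the next task. The car never listens for
    incoming connections, so there is no inbound port to attack."""
    with socket.create_connection((BACKEND_HOST, BACKEND_PORT), timeout=10) as sock:
        with make_client_context().wrap_socket(
            sock, server_hostname=BACKEND_HOST
        ) as tls:
            tls.sendall(b'{"type": "poll"}\n')
            return json.loads(tls.recv(4096))
```

The design choice is the point: an attacker scanning the vehicle's network interface finds nothing listening, and the only connection that exists is one the car itself initiated to a server whose certificate it verified.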

Miller noted that all previous successful car hacks involved reprogramming some component of the auto. Trusted execution prevents that. If an attacker physically changes some component of the car, even a base vehicle component, the car won't start.
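The core idea behind trusted execution can be illustrated with a minimal sketch: before a component's firmware runs, its hash is checked against a trusted measurement. Real systems verify a signature chain rooted in a hardware key rather than a bare hash allowlist, and the component name and values here are hypothetical:

```python
import hashlib

# Minimal sketch of verified boot. A production system would verify a
# cryptographic signature chain anchored in hardware, not a plain
# dictionary of hashes; this only illustrates the principle.

TRUSTED_HASHES = {
    "brake_controller": "expected-sha256-hex-digest-goes-here",  # hypothetical
}

def verify_component(name: str, firmware_image: bytes,
                     trusted: dict = TRUSTED_HASHES) -> bool:
    """Return True only if the firmware matches its trusted measurement."""
    digest = hashlib.sha256(firmware_image).hexdigest()
    return trusted.get(name) == digest
```

If anyone reflashes a component, the measurement no longer matches, verification fails, and the car refuses to start, which is exactly the property Miller describes.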

The pair walked through a collection of high-tech solutions for maintaining secure connectivity between car components, such as getting the component makers to use secured TCP connections rather than the current UDP connections.

The takeaway from this talk? You're not going to buy a self-driving car any time soon, but you'll probably ride in one. And it will be vastly more secure than the car you drive yourself.
