Before long, the only things that will be required of you when you get in a car are to turn it on and to set a destination.

You won’t even need to pay attention to the road.

Autonomous vehicles (AVs) rely heavily on lidar, a sensing system that fires pulsed laser light and times the reflections to map the surrounding environment and detect other moving objects (cars, pedestrians, bicyclists, etc.). Add some powerful computers and artificial intelligence, and AVs will chauffeur you safely from point A to point B.
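The core idea behind lidar ranging can be sketched in a few lines. This is an illustrative time-of-flight calculation only, not any vendor's implementation: a laser pulse travels out, bounces off an object, and the round-trip time gives the distance.

```python
# Illustrative sketch of lidar's time-of-flight principle:
# distance = (speed of light * round-trip time) / 2.

C = 299_792_458  # speed of light, in meters per second

def lidar_distance(round_trip_seconds: float) -> float:
    """Distance to a reflecting object, from the pulse's round-trip time."""
    return C * round_trip_seconds / 2

# A pulse that returns after ~200 nanoseconds hit something ~30 m away.
print(round(lidar_distance(200e-9), 1))  # -> 30.0
```

A real lidar unit fires hundreds of thousands of such pulses per second while spinning, building a 3D point cloud of everything around the car.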

Just imagine: your car is driving you down the highway as you’re lost in conversation with friends. Every car around you is filled with people doing the same, happily lost in the moment.

What could possibly go wrong?

Your favorite song comes on shuffle and you all start singing along loudly enough that you don’t hear the buzz of a small drone passing overhead. The drone is equipped with a light that flashes at the same wavelength the cars’ lidar sensors use, blinding every sensor on the road at once and causing complete mayhem.

Oh no. We have to stop right now!

“So you’re saying that it’s possible for someone to control cars whenever they want? An organized attack would be catastrophic! We shouldn’t make autonomous cars. The risk is too high to trust a computer.”

We hear you.

The fear of that threat is perfectly reasonable, but the answer is definitely not to stop making AVs. Many experts have already pointed out the tens of thousands of human lives that could be saved (not to mention injuries prevented) once AVs are widely deployed. When autonomous cars are demonstrably safer than human drivers, wouldn’t it feel irresponsible to let anybody but the car do the driving?


It’s important to understand the ways someone might hack a self-driving car, so we can calibrate our fears to the actual risks. To be clear, not all of these threats stem from autonomous capabilities alone.

Hacked Jeep Cherokee.

As WIRED showed us, even today’s cars can be hacked without your knowledge. Your connected radio, dashboard, phone, “smart locks”, and even your diagnostics port are all potential doorways for a motivated hacker.

The addition of autonomous capabilities to the mix introduces a few more potential vulnerabilities.

For example, a hacker could try to infiltrate the network connection to reach the onboard computer that’s analyzing data and controlling the car. Someone might also try to confuse the car’s sensors, such as its lidar units, in order to manipulate its autonomous functions.
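One common defensive idea against confused or spoofed sensors, sketched here purely as an illustration (this is not any manufacturer's actual defense), is to cross-check redundant sensors and treat readings that disagree as suspect.

```python
# Hedged illustration of sensor cross-checking: if lidar and radar
# report very different distances to the same object, flag the reading
# as possibly spoofed rather than acting on it.

def fused_distance(lidar_m: float, radar_m: float,
                   max_disagreement_m: float = 2.0):
    """Average the two readings if they roughly agree; return None
    to flag a possible spoofed or failed sensor."""
    if abs(lidar_m - radar_m) > max_disagreement_m:
        return None  # inconsistent readings -- fall back to a safe mode
    return (lidar_m + radar_m) / 2

print(fused_distance(30.0, 30.4))  # consistent readings -> 30.2
print(fused_distance(30.0, 80.0))  # inconsistent -> None
```

The thresholds and fallback behavior here are invented for the example; real fusion systems weigh many sensors probabilistically rather than with a single cutoff.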

Ultimately, the car is going to do what it thinks it’s supposed to do based on the instructions it’s given. Artificial intelligence doesn’t have common sense yet — it’s still just a tool. Some companies are already investing in protections. For example, Waymo (Google) claims it has designed an AV that can drive offline for several hours, effectively closing the door on remote attackers. This strategy may have performance consequences, but it’s a start.

An autonomous car could be trapped with something as simple as spray paint.

By contrast, GM hires third-party ethical hacker organizations to test their defense systems in an effort to spot bugs before the cars come to market. This method isn’t new — lots of brands do this to ensure that their products are safe for use.

Bad actors are continuously innovating in their attack methods, so it’s imperative that defenders innovate just as quickly.