What Drives a Driverless Car?

How the tech inside Tesla’s and Waymo’s self-driving vehicles really works

Photo: Sjoerd van der Wal/Getty Images

In 2018, the World Health Organization reported that road traffic injuries were the leading killer of people ages five to 29. An estimated 1.35 million deaths worldwide were due to vehicle crashes. One possible solution to this problem:

Don’t let humans drive.

According to a U.S. National Highway Traffic Safety Administration survey, driver error was the critical factor in 94% of crashes. Driverless cars are therefore expected to drastically reduce crash-related deaths. Here’s a glimpse of that potential.

Tesla Autopilot engaging in automatic emergency braking.

Self-driving cars are far from perfect, but in addition to reducing crashes, driverless cars are expected to bring benefits like higher productivity, better traffic management, and reduced energy consumption. But how do driverless cars actually drive?

If your car has adaptive cruise control, you already have an idea. But these modes barely scratch the surface of what’s possible: they fall into level one or two of the six levels of driving automation formulated by the Society of Automotive Engineers (SAE).

Six levels of driving automation. Image: Synopsys

Making cars that operate at levels four and five has been the goal of companies pursuing driverless tech. A relatively small number of test cars have reached level four, while none have reached level five. Achieving complete level-five autonomy relies on three broadly interconnected technological factors: sensors, software, and connectivity.
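To make the taxonomy concrete, here is a minimal sketch of the six SAE levels as a Python enum. The class and function names are illustrative, not from any SAE reference implementation.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """The six SAE J3016 levels of driving automation."""
    NO_AUTOMATION = 0          # human does all the driving
    DRIVER_ASSISTANCE = 1      # one assist feature, e.g. adaptive cruise control
    PARTIAL_AUTOMATION = 2     # steering plus speed assist; human monitors constantly
    CONDITIONAL_AUTOMATION = 3 # car drives itself but a human must take over on request
    HIGH_AUTOMATION = 4        # no human needed within a limited area or set of conditions
    FULL_AUTOMATION = 5        # no human needed anywhere, under any conditions

def needs_human_fallback(level: SAELevel) -> bool:
    """Levels 0 through 3 still rely on a human driver as the fallback."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION
```

Adaptive cruise control on its own sits at `DRIVER_ASSISTANCE`; the test cars mentioned above are chasing `HIGH_AUTOMATION` and `FULL_AUTOMATION`.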

Sensors, sensors, and more sensors

A wide variety of sensors, including radar, ultrasonics, and cameras, are installed to give cars the ability to “see.” But there is no universally agreed-upon number or type of sensor. The two leading companies, Waymo (owned by Google’s parent company, Alphabet) and Tesla, differ in their approaches.

The Waymo way

Waymo’s vehicle sensors. Image: Waymo

For its R&D, Waymo modifies the Chrysler Pacifica hybrid minivan and fits it with proprietary technology consisting of the following sensors.

Lidar

The light detection and ranging system emits billions of laser pulses per second, 360 degrees around the car. It then measures the time it takes for these beams to return after reflecting off surfaces. Using this information, the system creates a detailed 3D map of the objects and environment surrounding the car. Waymo’s system has three lidar sensors: short-range, mid-range, and long-range. Since these sensors emit their own light, they work equally well during the day or night. But they are unreliable in foul weather, because rain, snow, and fog scatter the laser pulses.
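At its core, each of those range measurements is a simple time-of-flight calculation: light travels out, reflects, and comes back, so the distance is half the round trip. A minimal sketch in Python (the nanosecond figure is just an illustrative input):

```python
C = 299_792_458  # speed of light in m/s

def lidar_distance(round_trip_time_s: float) -> float:
    """Distance to a reflecting surface from a laser pulse's round-trip time.

    The pulse travels to the object and back, so divide by two.
    """
    return C * round_trip_time_s / 2

# A pulse that returns after 200 nanoseconds hit something ~30 m away:
print(lidar_distance(200e-9))  # 29.9792458
```

Repeating this for every pulse, at every angle, is what builds up the 3D point cloud.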

Cameras

The vision system consists of several high-resolution cameras that cover the vehicle’s front, sides, and back. Unlike lidar, cameras can detect color, which is useful for spotting traffic lights, construction-zone signs, and emergency vehicle lights. These cameras are designed to work in daylight and low-light situations, but as with any camera, image quality drops as the light decreases.

Radar

A radar system fitted around the car emits radio waves that bounce back after hitting objects; the time it takes for each wave to return reveals the distance and speed of objects around the car. Radar’s advantage over the other sensors is its ability to work in almost any weather, including rain, snow, and fog. But radar offers a low level of detail, making it poor at recognizing shapes and other fine characteristics.
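Ranging with radar uses the same round-trip logic as lidar, and relative speed falls out of the Doppler shift of the returning wave. A rough sketch, assuming a 77 GHz carrier (a common automotive radar band):

```python
C = 299_792_458    # speed of light in m/s
CARRIER_HZ = 77e9  # 77 GHz, a common automotive radar band

def radar_range(round_trip_time_s: float) -> float:
    """Range to a target, same round-trip logic as lidar."""
    return C * round_trip_time_s / 2

def radar_relative_speed(doppler_shift_hz: float) -> float:
    """Relative speed of a target from the Doppler shift of its echo.

    The factor of 2 accounts for the shift happening on both the
    outbound and the return trip; a positive shift means the target
    is closing in.
    """
    return doppler_shift_hz * C / (2 * CARRIER_HZ)

# An echo shifted up by 10 kHz means a target closing at ~19.5 m/s (~70 km/h):
print(radar_relative_speed(10e3))
```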

Supplemental sensors

Several other sensors, including GPS, ultrasonics, and microphones, further add to the information collected.

Waymo’s 360-degree experience: a fully self-driving journey.

Waymo’s lidar-centered approach is similar to that of most other companies pursuing driverless tech, including Uber and GM. But Tesla is taking another route.

The Tesla way

Sensors in Tesla’s cars. Image: Tesla

Elon Musk, Tesla’s CEO, is famously opposed to the use of lidar. “Lidar is a fool’s errand,” he proclaimed on one of the many occasions he has criticized lidar. “Anyone relying on lidar is doomed. Doomed!” Musk’s argument is that cameras and software can do everything that $7,000 lidar units can do, but at a fraction of the cost. Although his claim has some merit, nearly every other company working on driverless cars trusts the lidar approach, hoping that the cost will decrease over time.

Tesla’s Autopilot system consists of:

Eight cameras that cover the front, sides, and back, providing 360-degree footage of the car’s surroundings.

Twelve ultrasonic sensors that complement the cameras and can detect objects much closer to the car, which is particularly useful for assisting with parking and detecting when cars are entering the same lane.

One front-facing radar that uses radio waves to judge the distance and speed of objects under all weather conditions.

Tesla’s Autopilot in action.

While the various sensors in both Tesla’s and Waymo’s approaches capture a trove of invaluable data, this information is useless on its own. The vehicle’s software processes this data and functions as the brain behind the driverless future.

The brain

As with the sensors, software approaches differ from company to company, but we know much less about them, in part because companies tightly guard the A.I. in their driverless cars. In general, though, the software needs to take in all the sensor data and make sense of it.

Waymo outlines the three main tasks the software needs to perform: perception, behavior prediction, and action planning.

Perception

Perception is the classification of objects after fusing information from all the sensors. It enables the car to distinguish among pedestrians, cyclists, cars, and other objects; to work out which lane the car needs to be in (even when there are no markings); and to read road signs and traffic lights. Perception also involves estimating each object’s speed, distance, and direction. One way the software perceives is by comparing a live feed with a high-quality map of the same location and focusing on the differences.
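That map-comparison trick can be sketched as a simple diff over occupancy grids: anything present in the live scan but absent from the prior map is probably a moving object worth classifying. A toy illustration (the grids and threshold are invented for the example):

```python
import numpy as np

def perceive_changes(live_grid: np.ndarray,
                     map_grid: np.ndarray,
                     threshold: float = 0.3) -> np.ndarray:
    """Cells where live sensor occupancy differs from the prior map are
    likely dynamic objects (pedestrians, cars) rather than fixed scenery
    (buildings, signs). Both grids hold occupancy values in [0, 1].
    """
    return np.abs(live_grid - map_grid) > threshold

# The prior map knows about a wall (left column); the live scan also
# sees something new in the top-right cell.
prior = np.array([[1.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0]])
live  = np.array([[1.0, 0.0, 0.9],
                  [1.0, 0.0, 0.0]])
print(perceive_changes(live, prior))
# [[False False  True]
#  [False False False]]
```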

Behavior prediction

Behavior prediction involves predicting what the perceived objects are likely to do next. For example, a pedestrian might cross the road, or a car might move into the same lane. If a cyclist gestures a turn using hand signals, the car has to pick up on that as well.
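The simplest baseline for such predictions is pure physics: assume each tracked object keeps its current velocity. Real systems layer learned models on top of baselines like this sketch (the classes and numbers are illustrative):

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    x: float   # position along the road, in meters
    y: float   # lateral position, in meters (negative = on the sidewalk)
    vx: float  # velocity components, in m/s
    vy: float

def predict_position(obj: TrackedObject, dt: float) -> tuple[float, float]:
    """Constant-velocity prediction: where will the object be in dt seconds?"""
    return obj.x + obj.vx * dt, obj.y + obj.vy * dt

# A pedestrian 3 m from the curb, walking toward the road at 1.4 m/s,
# is predicted to reach the lane within about two seconds:
pedestrian = TrackedObject(x=20.0, y=-3.0, vx=0.0, vy=1.4)
print(predict_position(pedestrian, dt=2.0))  # approximately (20.0, -0.2)
```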

Action planning

Action planning involves deciding what to do next based on perception and behavior prediction. It tells the car whether to move or stop, what speed to go, and where to go.
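Gluing the previous two steps together, a caricature of action planning is a rule that maps the predicted situation to a command. Production planners optimize entire trajectories rather than single commands, but a toy decision rule (with made-up thresholds) conveys the idea:

```python
def plan_action(ego_speed_mps: float,
                time_to_collision_s: float,
                braking_ttc_s: float = 4.0,
                target_speed_mps: float = 25.0) -> str:
    """Pick the next command from the perceived and predicted scene."""
    if time_to_collision_s < braking_ttc_s:
        return "brake"          # a predicted collision is too close
    if ego_speed_mps < target_speed_mps:
        return "accelerate"     # road is clear and we are below target speed
    return "hold speed"

print(plan_action(ego_speed_mps=20.0, time_to_collision_s=2.5))   # brake
print(plan_action(ego_speed_mps=20.0, time_to_collision_s=30.0))  # accelerate
```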

To complete these tasks, the software is put through miles and miles of testing across different types of roads, weather conditions, and edge-case scenarios.

Waymo has put its software through 10 billion virtual miles via computer simulations and 10 million real-world miles via its fleet of test cars. Tesla’s software takes advantage of the hundreds of thousands of Teslas on the road, capturing the valuable data they produce while driving and feeding it to deep neural networks. This on-road advantage gives Tesla billions of real-world miles to work with.

At its current level of sophistication, Waymo has achieved level four automation and has driverless taxis operating in Phoenix, Arizona. Its cars have the lowest disengagement rate (how often a human must take over the wheel) at 0.09 times every 1,000 miles. To put this in context, if a Waymo car traveled across the United States and back, a human would have to take control less than once on average.
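The arithmetic behind that claim, assuming a New York to Los Angeles round trip of roughly 5,600 miles:

```python
DISENGAGEMENTS_PER_1000_MILES = 0.09
ROUND_TRIP_MILES = 2 * 2_800  # rough one-way New York-Los Angeles driving distance

expected = ROUND_TRIP_MILES / 1_000 * DISENGAGEMENTS_PER_1000_MILES
print(f"{expected:.2f} expected disengagements")  # 0.50, i.e. less than one
```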

Waymo driverless taxi service in Arizona.

Tesla’s on-road cars have achieved level three automation with their ability to steer, brake, accelerate, enter and exit highways, and navigate in and out of parking spots. The company promises full self-driving capability (level four to five) in all of its cars through software updates over the coming years. In the promotional material for Autopilot on Tesla’s website, the company claims, “All you will need to do is get in and tell your car where to go. If you don’t say anything, the car will look at your calendar and take you there as the assumed destination or just home if nothing is on the calendar.”

Connectivity

The last piece to solve the driverless puzzle also happens to be the least developed: connectivity.

While a car can reliably navigate, steer, accelerate, and brake using only the onboard software and hardware, reaching complete autonomy requires that the cars be connected to the outside world in multiple ways.

Vehicle to infrastructure (V2I)

Cars should be able to send information to and receive it from infrastructure to make autonomous driving a seamless process. For example, parking spots at your destination should be able to broadcast their availability and reserve a space if requested; the car will then navigate to that spot automatically, without human intervention. Cars could likewise receive input directly from traffic lights, weather stations, and local road-repair agencies.
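A hypothetical exchange makes the idea concrete. The message fields below are invented for illustration and are not drawn from any real V2I standard:

```python
import json

def parking_broadcast(spot_id: str, lat: float, lon: float, free: bool) -> str:
    """What a connected parking garage might broadcast for one spot."""
    return json.dumps({"type": "parking_status", "spot_id": spot_id,
                       "lat": lat, "lon": lon, "free": free})

def request_reservation(broadcast: str, vehicle_id: str):
    """The car parses the broadcast and, if the spot is free, replies with
    a reservation request and adopts the spot as its navigation target."""
    msg = json.loads(broadcast)
    if not msg["free"]:
        return None
    return {"type": "reserve", "spot_id": msg["spot_id"],
            "vehicle_id": vehicle_id,
            "destination": (msg["lat"], msg["lon"])}

print(request_reservation(parking_broadcast("P2-14", 40.7128, -74.0060, True), "car-42"))
```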

Vehicle to vehicle (V2V)

Cars need to be able to communicate with each other. This will help prevent crashes, since each car broadcasts its speed and location in real time. If one car detects a hazardous situation, it needs to pass this information to the cars behind it so they can either find an alternate route or begin slowing down.
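Real deployments define such broadcasts in the SAE J2735 “basic safety message” format; the stripped-down sketch below is not that format, just a toy carrying the fields the paragraph mentions:

```python
from dataclasses import dataclass

@dataclass
class SafetyMessage:
    sender_id: str
    lat: float
    lon: float
    speed_mps: float
    hazard: bool  # the sender has detected a hazardous situation ahead

def react(own_speed_mps: float, msg: SafetyMessage) -> str:
    """How a trailing car might respond to a hazard flag from a car ahead."""
    if msg.hazard and own_speed_mps > msg.speed_mps:
        return "begin slowing down / ask the planner for an alternate route"
    return "continue"

# A car ahead has crawled to 2 m/s and flags a hazard; we are doing 28 m/s.
msg = SafetyMessage("car-17", lat=40.73, lon=-73.99, speed_mps=2.0, hazard=True)
print(react(own_speed_mps=28.0, msg=msg))
```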

Vehicle to people (V2P)

Cars should be able to detect people (pedestrians, cyclists) by communicating with their smartphones. These phones can broadcast their users’ real-time positions, which will help cars avoid collisions.

All of the above is broadly referred to as vehicle to everything (V2X). This aspect is currently underdeveloped because the required connectivity tech is lagging: these communications need to be near-instantaneous and highly reliable, which current 4G technology cannot guarantee. Things may change with the arrival of 5G cellular technology, which promises much lower latency and higher reliability alongside greater bandwidth.

How far are we from a driverless future?

Countries that have excellent road infrastructure and well-enforced traffic rules are on the verge of overcoming the technological hurdles of sensors, software, and connectivity, but this is the easy part. The real challenges lie in proving safety, dealing with ethical and legal concerns, and addressing the different stakeholders (truckers, taxi drivers) who are likely to be negatively affected by the shift to driverless vehicles.

Who will take the legal blame in a crash involving a driverless car, especially one that does not come with a steering wheel? Is it the car manufacturer, the owner, the passenger, or the supplier of the autonomous technology? The ethical considerations are even thornier: if a collision is inevitable, whom should the car protect, the pedestrian crossing the street or the passenger in the car?

Children born a decade from now might never need to learn to drive. Consequently, they won’t be able to fully appreciate your excellent parallel-parking skills, racing abilities, or cross-country driving stamina, or, if you live in a city like Mumbai or Moscow, the patience and perseverance you showed sitting through hours of traffic. There may also be consequences no one foresees. The cars we have today were quaintly called horseless carriages when they were first introduced; likewise, what we call driverless cars today might sound absurd in the not-so-distant future.