A vulnerability in the design of LiDAR components in driverless cars is far worse than anything we've seen yet outside of the CAN bus sphere, and it could have deadly consequences if exploited.

The hack is also going to be hard to fix, researchers Hocheol Shin, Dohyun Kim, Yujin Kwon, and Yongdae Kim, from the Korea Advanced Institute of Science and Technology, wrote in a paper for a study supported by Hyundai.

The researchers described how they engineered attacks against driverless-car LiDAR sensors that are stealthier and potentially more lethal than previous, similar attack vectors. The exploits both spoof LiDAR return signals and mount denial-of-service attacks through signal saturation. The paper, called "Illusion and Dazzle: Adversarial Optical Channel Exploits against LiDARs for Automotive Applications," was published by the International Association for Cryptologic Research.

The Korean researchers' hack exploits how LiDAR return signals form a cloud of dots, a point cloud, that the machine-learning-driven onboard computer interprets in real time. The artificial intelligence (AI) then decides whether it needs to change the car's steering, braking, or acceleration to react to the detected object while driving the vehicle.
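To make that pipeline concrete, here is a minimal, hypothetical sketch of how an onboard system might flag an obstacle in a point cloud and trigger an emergency maneuver. The thresholds and the `imminent_obstacle` function are illustrative assumptions, not taken from the paper or from any real vehicle stack.

```python
import numpy as np

# Hypothetical obstacle check on a LiDAR point cloud: an N x 3 array of
# (x, y, z) coordinates in meters, with x pointing forward from the car.
# This is an illustrative sketch, not the detection stack the researchers attacked.
EMERGENCY_RANGE_M = 5.0   # assumed threshold for an "imminent impact"
LANE_HALF_WIDTH_M = 1.5   # assumed half-width of the corridor ahead of the car

def imminent_obstacle(points: np.ndarray) -> bool:
    """Return True if any point lies in the corridor directly ahead of the car."""
    ahead = points[:, 0] > 0                          # only points in front of the car
    in_lane = np.abs(points[:, 1]) < LANE_HALF_WIDTH_M
    close = points[:, 0] < EMERGENCY_RANGE_M
    return bool(np.any(ahead & in_lane & close))

# Example: a single spoofed point 3 m ahead is enough to trip the check.
cloud = np.array([[30.0, 0.2, 0.0],   # genuine return from a distant car
                  [3.0, 0.0, 0.5]])   # fake point injected by an attacker
print(imminent_obstacle(cloud))       # True -> the AI would brake or swerve
```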

For the spoofing attack, the researchers created fake dots representing objects on the road that can appear very close to a moving car. As the car's LiDAR detects these dots, the car's AI perceives an imminent impact and takes a calculated risk, such as locking the brakes or swerving to a stop in a ditch or gully on the side of the road. Suddenly braking to a stop on a busy freeway in bumper-to-bumper traffic, for example, is one scenario that could have deadly consequences.

Because the braking distance is the distance required solely for braking, even autonomous vehicles have no room for checking the authenticity of the observed dots, but need to immediately activate emergency braking or evasive [maneuvers]. Such sudden actions are sufficient to endanger the surrounding vehicles. — Shin et al., CHES 2017
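The spoofing itself comes down to timing. A LiDAR converts the delay of a returning pulse into a distance, so an attacker who fires a pulse at a chosen delay can make a phantom object appear at essentially any range. The arithmetic below is our own illustration of that principle, not code from the study.

```python
# Illustrative time-of-flight arithmetic behind LiDAR spoofing (our own sketch,
# not code from the paper). A LiDAR computes range as c * delay / 2, so an
# attacker who times an injected pulse can fake an object at a chosen distance.
C = 299_792_458.0  # speed of light, m/s

def apparent_range_m(echo_delay_s: float) -> float:
    """Range the LiDAR infers from the round-trip delay of a received pulse."""
    return C * echo_delay_s / 2

def delay_for_fake_range_s(fake_range_m: float) -> float:
    """Delay an attacker would need to inject to fake an object at fake_range_m."""
    return 2 * fake_range_m / C

# Faking an object 3 m ahead requires replying about 20 nanoseconds after the pulse.
d = delay_for_fake_range_s(3.0)
print(f"{d * 1e9:.1f} ns -> {apparent_range_m(d):.2f} m")
```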

In the saturation attack, the researchers sent light signals to the LiDAR component, causing the device to become "blind." The LiDAR's receiver is flooded with light from the attacking device and can no longer pick out legitimate returns. The most obvious risk is that the LiDAR can no longer recognize objects in the car's path that could cause harm on impact, such as a stopped car, road debris, or a pedestrian.
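A toy model helps show why saturation is so effective: the photodetector's output clips at full scale, so a strong attacker beam drowns out the much weaker legitimate echoes. The model below is an assumption-laden sketch, not the researchers' experimental setup.

```python
# Toy model of receiver saturation (an illustrative assumption, not the
# researchers' experiment): the photodetector output clips at a maximum value,
# so a strong attacker beam masks the much weaker legitimate echo.
RECEIVER_MAX = 1.0          # normalized full-scale output of the photodetector

def receiver_output(legit_echo: float, attacker_light: float) -> float:
    """Detector response, clipped at full scale."""
    return min(legit_echo + attacker_light, RECEIVER_MAX)

legit_echo = 0.05            # weak return from a pedestrian
print(receiver_output(legit_echo, 0.0))   # 0.05 -> echo is detectable
print(receiver_output(legit_echo, 5.0))   # 1.0  -> receiver pinned at full scale,
                                          # the pedestrian's echo is no longer visible
```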

Image via Korea Advanced Institute of Science and Technology

The researchers were able to exploit the refractive angles of light signals sent to the LiDAR device, a Velodyne unit (although devices from Ibeo and Quanergy are also vulnerable, the researchers said). The person sending the signal to hack the car's embedded LiDAR can therefore remain off to the side of the vehicle, at an angle, instead of standing directly in front of the LiDAR.

With the previous LiDAR attack vectors, the signal had to be transmitted at a perpendicular angle to the LiDAR. More importantly, in previous exploits the spoofed object's apparent position was tied to the position of the light beam emitter; the device used to create a fake object had to sit in front of the LiDAR, at the same distance from the car as the spoofed object.
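One way to see why the fake object's position was tied to the attacker's position is the round-trip arithmetic of a simple relay-style spoof, sketched below under our own assumption (not the paper's setup) that the attacker records the LiDAR's pulse and replays it after a short delay.

```python
# Sketch of the range arithmetic in a simple relay/replay spoof (an assumption
# for illustration): the LiDAR's pulse travels to the attacker's device, is
# replayed after a small delay, and travels back, so the apparent range is the
# emitter's distance plus whatever the added delay contributes.
C = 299_792_458.0  # speed of light, m/s

def apparent_range_m(emitter_distance_m: float, replay_delay_s: float) -> float:
    round_trip_s = 2 * emitter_distance_m / C + replay_delay_s
    return C * round_trip_s / 2   # = emitter_distance_m + C * replay_delay_s / 2

print(apparent_range_m(10.0, 0.0))     # 10.0 m: zero delay pins the fake object at the emitter
print(apparent_range_m(10.0, 20e-9))   # ~13.0 m: added delay only pushes it farther away
```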