If an internet ne’er-do-well gains access to your computer or phone, they can wreak plenty of havoc, but this is nothing new. You put the pieces back together, change your passwords, and go on with your life. If an attacker were to interfere with the computer powering your self-driving car, on the other hand, the consequences could be much more dire: you might not be going on with your life after that. Two experts on self-driving cars have weighed in on this increasingly likely scenario, and their message is that companies are not prepared for the threat of cyberattacks on future robotic cars.

Jonathan Petit of University College Cork and Steven Shladover of the University of California, Berkeley have completed what they say is the first exhaustive analysis of potential cyberattacks on self-driving cars. Because no consumer self-driving cars exist yet, we don’t know exactly how they will work, so Petit and Shladover surveyed a variety of candidate systems and attempted to identify the most serious threats to safety and security.

The report splits cyber-threats to robotic vehicles into three matched pairings: passive snooping versus active manipulation, jamming a signal versus substituting a false one, and attacks on a single car versus attacks on a network of interconnected vehicles. Pick the worse option in each pairing and you unsurprisingly arrive at an active attack that uses fake signals to affect a network of cars. Petit and Shladover point to global navigation satellite systems (GNSS) like GPS and GLONASS as prime targets for this kind of attack. More worryingly, the technology to do it already exists.

GPS jamming hardware can be had for as little as $20 and could be used to knock a self-driving car off course. Other sensors, such as laser rangefinders (lidar) and radar, would likely keep the vehicle oriented, but a lost GPS fix would at least force the car to pull over and make you late. More advanced GPS spoofing systems go further, feeding the car incorrect location data. This is particularly problematic: if the vehicle doesn’t know its data is bad, a crash could be unavoidable. And if cars are linked in a mesh network to enable more efficient traffic management, that bad data could be passed on to other vehicles and cause a chain reaction.
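The difference between jamming and spoofing matters for how a car might respond. A jammed receiver simply loses its fix, so a supervisory layer could fall back to lidar and odometry briefly, then pull over. The policy below is a minimal, hypothetical sketch of that idea; the mode names and the five-second tolerance are illustrative assumptions, not anything from Petit and Shladover’s report or a real vehicle.

```python
from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()     # GPS fix healthy: full autonomous operation
    DEGRADED = auto()   # GPS lost: navigate briefly on lidar/odometry alone
    PULL_OVER = auto()  # GPS lost too long: stop somewhere safe

# Assumed tolerance before giving up and pulling over (illustrative value)
GPS_TIMEOUT_S = 5.0

def supervise(has_gps_fix: bool, seconds_without_fix: float) -> Mode:
    """Pick a driving mode from GPS health alone (hypothetical policy)."""
    if has_gps_fix:
        return Mode.NORMAL
    if seconds_without_fix < GPS_TIMEOUT_S:
        return Mode.DEGRADED
    return Mode.PULL_OVER

# A $20 jammer that blocks the signal for 7 seconds would trigger a pull-over:
print(supervise(False, 7.0))  # Mode.PULL_OVER
```

The key design point is that jamming is detectable almost for free (the fix disappears), which is exactly why spoofing, where the receiver still reports a confident but wrong position, is the harder problem.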

The report calls on car makers designing driverless vehicles to begin developing overlapping security measures that would prevent such tampering. The authors suggest encrypted signal authentication for GPS, plus new smart algorithms that can detect unusual signals indicating the system is being spoofed. Simply keeping a backup steering wheel in the car for a human to correct a failure of the automated system isn’t good enough: according to General Motors research, drivers almost completely disengage from watching the road within a few minutes of letting a robot take over.
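One simple flavor of the anomaly detection the authors describe is a cross-sensor consistency check: if a GPS update implies the car moved much farther (or less far) than the wheel encoders say it did, the fix is suspect. The sketch below is a hypothetical illustration of that idea, not an algorithm from the report; the coordinate frame, function name, and 15-metre tolerance are all assumptions.

```python
import math

def gps_jump_suspicious(prev_fix, new_fix, odometer_m, tolerance_m=15.0):
    """Flag a GPS update whose implied movement disagrees with wheel odometry.

    prev_fix, new_fix: (x, y) positions in a local metric frame, in metres.
    odometer_m: distance travelled since prev_fix according to wheel encoders.
    tolerance_m: allowed disagreement (illustrative value, not from the report).
    """
    gps_distance = math.hypot(new_fix[0] - prev_fix[0],
                              new_fix[1] - prev_fix[1])
    return abs(gps_distance - odometer_m) > tolerance_m

# A spoofed fix "teleporting" the car 200 m while the wheels report 30 m:
print(gps_jump_suspicious((0.0, 0.0), (200.0, 0.0), odometer_m=30.0))  # True

# An honest fix that roughly matches the odometry passes:
print(gps_jump_suspicious((0.0, 0.0), (30.0, 0.0), odometer_m=29.0))  # False
```

A real system would fuse many more signals (inertial sensors, lidar localization, map constraints), but the principle is the same: a spoofer has to fool every independent sensor at once, which is much harder than faking GPS alone.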

This serves as a bit of a reality check for driverless cars. We may be getting close to making them work in a technical sense (witness Google’s efforts), but making them safe is about more than navigating the roads; they also have to navigate the murky waters of cybersecurity.

Now read: How Google’s self-driving cars detect and avoid obstacles