With the recent rise of autonomous cars, new challenges are emerging around their reliability and security. Every week, people find new flaws that can be used to gain access to cars and alter their behavior.

Most of these flaws are technical, requiring deep knowledge of the technologies behind cars, and they share a common starting point: finding a weakness in a hardware or software component, such as exploiting unsecured interfaces (e.g. the CAN bus).

To counter this, the solution seems obvious: strengthen the car's systems.

But what if hackers aren't targeting the car itself?

Just like with other complex products, intruders may focus on external elements. A non-negligible share of threats will come from the environment in which autonomous cars operate, and road signs are of course part of that environment.

Here is my question:

What if thieves planted fake road signs to change the itinerary of an autonomous armored cash truck, in order to steal it?

This question sounds like the plot of a futuristic heist movie, but it remains quite realistic given the current pace of adoption of autonomous vehicles.

Here is an example of a STOP sign detected and acted upon by Tesla Autopilot.

Car autonomy is defined by 5 levels:

At the final level, Level 5, cars are expected to operate entirely on their own, without any driver present. But does this include breaking road rules? Or interpreting each sign in its context and deciding accordingly?

As far as I know, cars aren't yet designed to verify the authenticity of each road sign, and road signs themselves remain basic objects with no intelligent components to communicate with vehicles.
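As a thought experiment, one possible mitigation would be a plausibility check against map data: reject a detected sign if no sign of that type is recorded near the detected location. Here is a minimal Python sketch of that idea; the `KNOWN_SIGNS` data, the coordinates, and the distance threshold are all invented for illustration and do not reflect any real vehicle stack.

```python
# Hypothetical plausibility filter: cross-check a detected road sign
# against a pre-loaded map of known sign locations.
from math import hypot

# Known signs from a (hypothetical) map database: (sign_type, x, y) in meters.
KNOWN_SIGNS = [
    ("STOP", 120.0, 45.0),
    ("SPEED_LIMIT_50", 300.0, 80.0),
]

def is_plausible(detected_type, x, y, max_dist=10.0):
    """Accept a detection only if the map lists the same sign type
    within max_dist meters of the detected position."""
    return any(
        sign_type == detected_type and hypot(x - sx, y - sy) <= max_dist
        for sign_type, sx, sy in KNOWN_SIGNS
    )

# A STOP sign detected near a mapped STOP sign passes the check...
print(is_plausible("STOP", 118.0, 47.0))   # True
# ...while a STOP sign appearing where the map lists none is flagged.
print(is_plausible("STOP", 500.0, 10.0))   # False
```

Such a filter would not stop every attack (map data can be stale, and legitimate temporary signs would be rejected), but it shows how authenticity could become a software decision rather than a pure perception problem.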

Autonomy will be a matter of decision more than of automation.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Q: And for you, what other environmental threats could be added to this one?