Critical AI Algorithms Are Not Secure (Yet)

22 Nov 2019

After Elon Musk's promise of "fully-featured" self-driving cars in 2020, we decided to look at the state of security of AI algorithms and IoT systems like autonomous vehicles.

TL;DR: Our lives already depend on AI algorithms that are currently easily fooled and insecure.

Figure 1: Evolved images that are unrecognizable to humans, but that state-of-the-art DNNs trained on ImageNet believe with >= 99.6% certainty to be a familiar object. This result highlights differences between how DNNs and humans recognize objects. Left: directly encoded images. Right: indirectly encoded images. Source: "Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images".

Fatalities from AI accidents

Five people have been killed in Tesla crashes in which the automated driving system was acknowledged to have been engaged.

Many people would argue that humans cause more accidents. However, if you look at the footage of these accidents, it is clear that a human driver would at least have been able to slow down, while Tesla's Autopilot seems to be completely blind in certain circumstances.

These accidents show that our lives already depend on the implementations of Artificial Intelligence.

The main question regarding cybersecurity is whether an attacker can deliberately reproduce the inputs that cause the car to crash. The answer is yes.

Hacking AI

It turns out all it takes is a few stickers to fool state-of-the-art AI algorithms.

In a recent report, Tencent’s Keen Security Lab showed how they were able to bamboozle a Tesla Model S into switching lanes so that it drives directly into oncoming traffic. All they had to do was place three stickers on the road, forming the appearance of a line. The car’s Autopilot system, which relies on computer vision, detected the stickers and interpreted them to mean that the lane was veering left. So it steered the car that way.

A group of Israeli researchers at Harman International also managed to alter traffic signs in such a way that AI algorithms would interpret them differently than humans do.

Probing and manipulating machine-learning systems in this way is known in the field as "adversarial machine learning", and it is relevant to any industry where machine learning is used, including defense, healthcare and banking.
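
To make the idea concrete, here is a minimal sketch of the classic Fast Gradient Sign Method (FGSM) from the adversarial machine learning literature, written in PyTorch. The model, image and label arguments stand in for any differentiable image classifier and its input; the perturbation budget epsilon is an illustrative value, not one taken from the attacks described above.

```python
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Craft an adversarial example by nudging every pixel a small
    step in the direction that increases the model's loss."""
    # Work on a fresh copy of the input that tracks gradients.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # One signed gradient step, then clamp back to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

The perturbed image is typically indistinguishable from the original to a human, yet it can flip the classifier's prediction with high confidence.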

Exploiting weak AI security to steal sensitive data

AI is not only used for image recognition; another application is automatically generated email responses, which are used by many companies, including banks.

According to AI researcher Dawn Song, any AI system can have major security risks.

One project by Song, conducted in collaboration with Google, involved probing machine-learning algorithms trained to generate automatic responses from e-mail messages (in this case the Enron e-mail data set). The effort showed that by crafting the right messages, it is possible to make the model spit out sensitive data such as credit card numbers.
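
The core idea is that a language model can unintentionally memorize rare strings from its training data, and an attacker can rank candidate completions by the model's own likelihood. Below is a toy sketch of that ranking attack; model.log_prob(text) is a hypothetical interface, not any specific library's API, and the four-digit secret keeps the exhaustive search small.

```python
from itertools import product

def extract_secret(model, prefix="my card PIN is ", digits=4):
    """Rank every candidate completion by the model's likelihood.
    If the model memorized a secret seen during training, the true
    secret tends to score far higher than random candidates."""
    candidates = ("".join(c) for c in product("0123456789", repeat=digits))
    # model.log_prob is a hypothetical scoring interface (assumption).
    return max(candidates, key=lambda s: model.log_prob(prefix + s))
```

In practice, enumerating a full credit card number is infeasible, so published attacks use a smarter search guided by the model's per-token probabilities; the principle, however, is the same.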

When deploying AI, developers should always take security into account.
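
One common, if partial, mitigation is adversarial training: generating adversarial examples during training and teaching the model to classify them correctly. A minimal sketch, reusing the fgsm_attack helper from the earlier example:

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels):
    """One training step on adversarially perturbed inputs, so the
    model learns to resist the same perturbations at inference time."""
    adv_images = fgsm_attack(model, images, labels)  # helper from the FGSM sketch above
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Adversarial training raises the cost of an attack but does not eliminate it; it should be one layer in a broader security review, not the whole answer.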

Conclusions

Many AI systems already carry critical responsibility for our lives across various industries. Cybersecurity is often not considered in the context of AI, and this must change quickly, before someone decides to put three stickers on a road and causes fatal injuries by fooling the AI.