As a security veteran, I find myself from time to time having to explain to newcomers the importance of adopting a ‘hacker’s way of thinking’, and the difference between how a hacker and a builder think.

If you can’t think like an attacker, how are you going to build solutions to defend against them? For the last four years I’ve been involved in several research projects (most successful, some less so) aimed at incorporating AI technology into Imperva products. The most significant challenge we had to cope with was making sure that our use of AI worked safely in adversarial settings: assuming that adversaries are out there, investing their brainpower in understanding our solutions and adapting to them, polluting training data, and trying to sneak past security mechanisms.

In recent years we’ve seen a surge in Artificial Intelligence technology being incorporated into almost every aspect of our lives. With AI having made remarkable leaps forward in areas like visual object recognition, semantic segmentation, and speech recognition, it’s only natural to see other industries race to adopt it as their solution to, well… everything really.

The Security Lifecycle

Unsurprisingly, most vendors using AI don’t think about security. I remember one of the most interesting DefCon sessions I’ve seen: research on the security of smart traffic sensors. Researcher Cesar Cerrudo saw a movie scene in which hackers cleared a route of green lights through traffic and wondered whether this was possible. The answer was, of course, yes. Probing the ecosystem for security mechanisms to circumvent, Cerrudo wasn’t able to find any. What he did find, however, was a disturbingly easy way to take control of the sensors buried in the roads, and to disable them.

I remember this session not because of the sophisticated hacking techniques used, but because it reminded me of a meeting with a large car manufacturer I had a few years earlier. As they started venturing further into the digitization of automotive systems – moving from purely mechanical systems into a wired/wireless network of digital devices, connected to external entities like garages and service centers – they discovered severe vulnerabilities and new cyber threats to the industry, with potentially lethal results.

This is the unfortunate but inevitable security lifecycle of new technologies, closely tied to the Gartner hype cycle. In a technology’s early days, the community talks innovation and opportunity; expectations and excitement couldn’t be higher. Then comes disillusionment. Once security researchers find ways to make the system do things it wasn’t supposed to – in particular, if the drip of vulnerabilities turns into a flood – the excitement is replaced with FUD: fear, uncertainty and doubt around the risk associated with the new technology.

Just like automotive systems and smart cities, there are no security exceptions when it comes to AI.

When it comes to security, AI is no different from other technologies. When a system is designed without considering what attackers look for, attackers are likely to find ways to make it do things it’s not supposed to do. Some of you might be familiar with Google research from 2015, where researchers added human-invisible noise to an image of a school bus and had the AI classify it as an ostrich; more recently, research in the same field has produced some pretty interesting new applications building on that 2015 exercise.
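The mechanism behind that kind of result is simple to sketch. As an illustration only – a hypothetical toy linear classifier, not Google’s actual image model – the gradient-sign trick works like this: compute the gradient of the model’s score with respect to the input, then nudge every feature a tiny step in the direction that hurts the score most.

```python
import numpy as np

# Minimal sketch of the gradient-sign idea behind adversarial examples,
# using a hypothetical linear classifier (not the model from the research).
rng = np.random.default_rng(0)
w = rng.normal(size=100)      # the model's "trained" weights
x = rng.normal(size=100)      # a clean input
if w @ x < 0:                 # make sure the clean label is class 1
    x = -x

# For a linear score w.x, the gradient with respect to the input is just w.
# Nudge every feature slightly in the direction that lowers the score;
# the per-feature budget eps is chosen just large enough to flip the label.
eps = 1.1 * (w @ x) / np.abs(w).sum()
x_adv = x - eps * np.sign(w)

print(w @ x > 0)      # True: the clean input is classified as class 1
print(w @ x_adv > 0)  # False: a tiny per-feature change flips the label
```

The point of the sketch is the high-dimensional effect: each individual feature moves by only about 0.1 while the features themselves are of size 1, yet the hundred tiny nudges add up to a decisive change in the score – which is why the noise can stay invisible to a human while fooling the model.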

The Houdini framework was able to fool pose-estimation, speech-recognition and semantic-segmentation AI. Facial recognition, used in many control and surveillance systems – like those deployed in airports – was completely confused by colorful glasses with deliberately embedded patterns meant to puzzle the system. Next-gen anti-virus software using AI for malware detection was circumvented by another AI in a super-cool bot-vs.-bot research project presented at Black Hat last year. Several bodies of research show that in many cases AI deception has transferability: deceptive samples crafted for model A are found to be effective against a different model, B, that solves the same problem.
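Transferability is easy to reproduce even in a toy setting. The sketch below – entirely hypothetical data and models, numpy only – trains two logistic-regression models on disjoint halves of the same synthetic dataset, crafts gradient-sign perturbations using only model A, and checks whether they also fool model B, which the attacker never saw.

```python
import numpy as np

# Toy illustration of adversarial transferability with two independently
# trained linear models. Data, models, and parameters are all made up.
rng = np.random.default_rng(1)
d, n = 50, 400
mu = rng.normal(size=d)                            # true class direction
X = np.vstack([rng.normal(size=(n, d)) + mu,       # class-1 cloud
               rng.normal(size=(n, d)) - mu])      # class-0 cloud
y = np.array([1] * n + [0] * n)

def train(Xs, ys, seed, steps=300, lr=0.1):
    """Plain logistic regression fitted by gradient descent."""
    w = np.random.default_rng(seed).normal(size=d) * 0.01
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(Xs @ w)))
        w -= lr * Xs.T @ (p - ys) / len(ys)
    return w

# Two models trained on disjoint halves of the data, different inits.
idx = rng.permutation(len(y))
a, b = idx[: len(y) // 2], idx[len(y) // 2:]
wA, wB = train(X[a], y[a], seed=2), train(X[b], y[b], seed=3)

# Craft adversarial versions of class-1 points using ONLY model A;
# eps is exaggerated to make the toy demo unambiguous.
pts = X[y == 1][:100]
eps = 3.0
adv = pts - eps * np.sign(wA)

clean_fooled = np.mean(pts @ wB < 0)   # model B's error on clean inputs
adv_fooled = np.mean(adv @ wB < 0)     # model B's error on A's adversarials
print(adv_fooled > clean_fooled)       # perturbations transfer to model B
```

The attack transfers because both models, trained on the same underlying distribution, converge to similar decision boundaries – so a direction that hurts model A tends to hurt model B too. That is precisely what makes transferability so dangerous in practice: the attacker doesn’t need access to your model, only to one that solves a similar problem.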

It’s not all doom and gloom

That was the bad news; the good news is that AI can be used safely. Our product portfolio uses AI technology extensively to improve protection against a variety of threats to web applications and data systems. We do this effectively and, perhaps most importantly, safely. Based on our experience, I’ve gathered some guidelines for the safe usage of AI.

While these are not binary rules, we find these guidelines effective in estimating the risk that adversarial behavior poses to a given use of AI.