6 Questions to Ask Security Solutions Vendors Who Claim They Use Artificial Intelligence

Posted by Chad Skipper on October 23, 2019

Just as 2017 was the year of ransomware and 2018 was the year of cryptomining, 2019 will certainly go down as the year of artificial intelligence (AI). It’s actually gotten so bad that we’ve had prospects literally tell us, “If you mention AI, I’m leaving.” So many solutions at least claim to use AI that organizations are at risk of missing out on very beneficial, credible AI-powered products because of all of the hype and false claims.

Security professionals need to be able to discern which solutions (and vendors) are credible and which are just jumping on the bandwagon. In this post, I’ll give you some specific questions that will help you tell the difference. The six questions below include both a “good” and a “bad” answer that illustrate what organizations should be looking for in an AI-driven security software provider.

Putting AI into Perspective

But what do we mean by AI exactly? In broad terms, artificial intelligence is all about replicating intelligent human behavior. This often takes the form of machine learning (ML), a sub-category of AI that uses algorithms to categorize data and support predictions. ML functions in settings that are either supervised (mapping input to output variables) or unsupervised (clustering input data with no output variables).
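To make the distinction concrete, here is a deliberately tiny sketch in Python on invented numbers: the supervised half learns a decision boundary from labeled examples, while the unsupervised half groups unlabeled points with no output variable in sight.

```python
# Toy contrast between supervised and unsupervised learning,
# on invented data (not from any real product).

# Supervised: labeled examples map an input variable to an output label.
labeled = [(2, "benign"), (3, "benign"), (95, "malicious"), (97, "malicious")]

def train_threshold(examples):
    """Learn a single split point between the two labeled classes."""
    benign = [x for x, y in examples if y == "benign"]
    malicious = [x for x, y in examples if y == "malicious"]
    return (max(benign) + min(malicious)) / 2

def predict(threshold, x):
    return "malicious" if x > threshold else "benign"

t = train_threshold(labeled)
print(predict(t, 90))  # classified using the learned mapping

# Unsupervised: no labels -- just group nearby points into clusters.
def cluster(points, gap=10):
    """Start a new cluster whenever consecutive points are far apart."""
    pts = sorted(points)
    clusters, current = [], [pts[0]]
    for p in pts[1:]:
        if p - current[-1] <= gap:
            current.append(p)
        else:
            clusters.append(current)
            current = [p]
    clusters.append(current)
    return clusters

print(cluster([2, 3, 5, 95, 97]))  # two groups emerge without labels
```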

But why is it important? Why do you need to consider AI solutions given the inflated expectations and hype? Put simply, it’s because you can’t keep up today without it. Given the high volume of automated, sophisticated threats plus the skills shortage, there are too many attacks to investigate and remediate them all manually. You need technology, specifically AI, in order to distinguish between benign anomalies and malicious ones, and focus your limited human resources on the highest risk, most suspicious activity.

However, AI isn’t a cure-all for every security problem. If done poorly, it can be the source of a whole slew of problems. For instance, certain AI tools are known to flag benign network traffic as malicious, thereby creating false positives. These solutions oftentimes cannot “connect the dots” of an attack chain or provide high-fidelity assessments of the attack, either.

Given these potential drawbacks, organizations can get the most bang for their buck by being discerning about which AI security solution they want to purchase. But where and how do organizations start in evaluating a potential tool? I hope these six questions will help.

Q1: What types of AI do you use?

Bad answer: The vendor provides only one type. This indicates that they’re looking at just one problem. If they’re looking at digital security in one way only, it’s likely that the solution is missing what could be learned from other techniques. It’s like trying to solve a crime by talking to only one of the 10 suspects and witnesses, or considering only one type of evidence. You don’t get the full story.

Good answer: The solution provider uses more than one type of AI. The vendor should be able to give you a clear answer on where and why they use AI. It’s not just about using more than one type, but using AI in different aspects of their analysis. Even using the same type of AI in multiple locations is better than a single type in one location. Ideally, these types include various combinations of AI, including expert system, deep learning as well as supervised and/or unsupervised machine learning, each of which is particularly well-suited to analyze different types of data or answer different types of questions.

Q2: What are the specific algorithms you use in your ML?

Bad answer: The vendor can’t name any. This lack of familiarity illustrates a general lack of knowledge about AI that comes with time and experience. It also typically means that whatever AI algorithms the solutions provider is using are likely not configured in a way that will add value and thereby keep your organization protected.

Good answer: The solutions provider names specific and varied AI algorithms that it has incorporated into its tools, and why those algorithms are appropriate and beneficial. These algorithms may include the following:

DBSCAN: Useful in unsupervised ML, this algorithm is commonly used for clustering packets/HTTP requests to support signature generation, as well as for detecting managed services that beacon to external systems at regular intervals.

Isolation Forest: Data scientists generally use this algorithm for unsupervised ML, specifically for outlier detection on HTTP errors to identify malicious/suspicious requests.

Naïve Bayes… Lasso Regression… LSH/MinHash… the list goes on and on. The point is that the solutions provider should be able to name at least some of these, and the objective of each. For example, if you need new tires, it’s not enough to blindly buy all-weather tires. You need to understand why you need all-weather tires to know that they are indeed the appropriate product.
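As a pair of hedged, toy sketches of the two use cases named above, on invented data: a real product would use DBSCAN and Isolation Forest proper, while these stand-ins only illustrate the underlying intuitions (beacons call out at near-constant intervals; a scanning client racks up wildly atypical HTTP error counts).

```python
# Toy approximations of beaconing detection and HTTP-error outlier
# detection. Data, thresholds, and IPs are all invented.
from statistics import median, pstdev

def looks_like_beaconing(timestamps, max_jitter=2.0):
    """Beacons call out at near-constant intervals; humans don't."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(deltas) < 3:
        return False                          # too few observations to judge
    return pstdev(deltas) <= max_jitter       # tight spacing => beacon-like

def flag_outliers(error_counts, cutoff=5.0):
    """Flag clients whose HTTP error count is wildly atypical."""
    values = list(error_counts.values())
    med = median(values)
    mad = median(abs(v - med) for v in values) or 1   # robust spread estimate
    return [c for c, v in error_counts.items()
            if abs(v - med) / mad > cutoff]

print(looks_like_beaconing([0, 60, 120, 181, 240, 300]))   # ~60s heartbeat
print(looks_like_beaconing([0, 12, 300, 340, 900, 2400]))  # bursty browsing
print(flag_outliers({"10.0.0.1": 2, "10.0.0.2": 3, "10.0.0.3": 1,
                     "10.0.0.4": 2, "10.0.0.5": 480}))     # the scanner stands out
```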

Q3: How are your ML models generated, trained, scored and validated, and who does this?

Bad answer: If the vendor cannot provide some information about this aspect of using AI, then it’s highly likely that they’re either very new at it, or don’t really understand what’s necessary to effectively utilize it. Training, scoring and validating models is a critically important exercise if there’s any hope that the algorithms will deliver as expected. And it’s not something that just anyone can do.

Good answer: The solutions provider says it’s different for supervised and unsupervised applications. Also, listen for references to the data sets that each algorithm is modeling. Different file types (like .exe, .pdf and macros) and network traffic (like HTTP, DNS, SMTP) all have separate training sets, so it’s useful to evaluate all these resources individually. Think of it as a decathlete who has different training regimens for each of the events – the 1500 meter, pole vault, javelin, etc.

Specifically, they should be using some kind of scoring system that informs users what to ultimately do with the results of the models. And this work is typically done by a data scientist.
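As a hypothetical illustration of what such a scoring system might look like, here is a minimal sketch; the buckets and thresholds are invented, not any vendor’s actual scheme.

```python
# Illustrative only: model output scores get bucketed into recommended
# actions so analysts know what to do with each detection.
def recommend_action(score):
    """Map a model's 0-1 maliciousness score to an analyst action."""
    if score >= 0.9:
        return "block"        # high confidence: act automatically
    if score >= 0.6:
        return "investigate"  # suspicious: queue for an analyst
    if score >= 0.3:
        return "monitor"      # ambiguous: keep watching
    return "ignore"           # almost certainly benign

for s in (0.95, 0.7, 0.4, 0.1):
    print(s, recommend_action(s))
```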

Q4: What are the goals of each algorithm?

Bad answer: I’d watch out for anyone who provides an overly generic answer, such as “it’s to detect threats and cyber attacks.” The role of a particular algorithm is much more granular than this.

Good answer: The solutions provider states that they’re using many algorithms because each data set requires its own technique. Different algorithms are best suited to different types of data to achieve different objectives, just as a chef uses different knives for different tasks or different types of food. One size does not fit all. If the solution is robust, it should use various algorithms for clustering, classification and profiling: detecting unusual connections in a network, unusual patterns in DNS, and outliers, as well as clustering samples based on behavior and level of maliciousness, and so forth.

Q5: How often are your ML models updated? How well do they perform against drift?

Bad answer: The vendor states that they don’t know. Alternatively, they might answer “never,” which leaves the ML models slow to respond to new threats, or “daily,” which makes the solution little better than a signature-based system in that it’s dependent upon very frequent updates. If the AI is trained appropriately, it learns as it goes, so while it does need updating, the right cadence is certainly neither daily nor never.

Good answer: The solutions provider demonstrates that they have some kind of plan or mechanism in place. Exactly what the update process or schedule consists of is less important than the fact that they have one. It could be updating the models themselves, implementing “patches” between model updates, or using signatures for something malicious that was not initially detected. Also, listen for whether they’re familiar with the idea of “concept drift,” which is when the dataset being modeled evolves over time. For example, if you’re analyzing employees’ remote access to corporate servers, over time more employees may do this more often, which could look like an anomaly and generate an alert (a false alarm). Ideally, the model is adjusted to accommodate the changing nature of legitimate behavior (drift).
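The remote-access example can be sketched in a few lines: a model with a rolling baseline absorbs the gradual growth in legitimate logins (the drift) while still catching a genuine spike. The window size, tolerance, and data below are all invented for illustration.

```python
# A static baseline keeps alerting as legitimate usage grows; a rolling
# baseline absorbs the drift and only the genuine spike stands out.
from collections import deque
from statistics import mean

def drift_aware_alerts(daily_logins, window=5, tolerance=1.5):
    """Alert when a day's count exceeds tolerance x the recent average."""
    baseline = deque(maxlen=window)   # rolling window of recent days
    alerts = []
    for day, count in enumerate(daily_logins):
        if baseline and count > tolerance * mean(baseline):
            alerts.append(day)
        baseline.append(count)        # model adapts to the new normal
    return alerts

# Remote logins grow steadily as more employees work remotely.
logins = [10, 11, 12, 13, 15, 16, 18, 60, 20, 22]
print(drift_aware_alerts(logins))  # only day 7's spike is flagged
```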

Q6: What happens when your ML makes a bad decision?

Bad answer: The vendor says it never happens, or has the user contact them for an update or wait for a signature update.

Good answer: The solutions provider acknowledges that this happens and when it does, action needs to be taken. Specifically, it needs to incorporate findings into training, e.g., how to adapt to false positives and false negatives. It can use these findings to reset thresholds/scoring or adjust models to accommodate this unforeseen situation that caused the error.
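One hypothetical way to incorporate such findings is a feedback loop that re-tunes the alerting threshold from analyst verdicts; the step size, scores, and verdict labels below are invented for illustration.

```python
# Sketch of a feedback loop: analyst verdicts on past alerts are
# fed back to nudge the alerting threshold up or down.
def retune_threshold(threshold, feedback, step=0.05):
    """Adjust the score threshold based on labeled analyst feedback.

    feedback: (score, verdict) pairs, where verdict is 'false_positive'
    (alerted but benign) or 'false_negative' (missed but malicious).
    """
    for score, verdict in feedback:
        if verdict == "false_positive" and score >= threshold:
            threshold += step   # we alerted too eagerly; raise the bar
        elif verdict == "false_negative" and score < threshold:
            threshold -= step   # we missed one; lower the bar
    return round(threshold, 2)

# Two benign events were alerted on; the threshold drifts upward.
print(retune_threshold(0.6, [(0.62, "false_positive"),
                             (0.70, "false_positive")]))  # 0.7
```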

Another way to address “bad” decisions is to use multiple techniques simultaneously to decrease the risk of something evading detection – what some call a layered approach. For example, some NTA solutions will flag activity as anomalous, and while the activity might not be bad – i.e., malicious – it is still anomalous, so the detection is not wrong. Having other systems that understand, for example, what malicious behavior looks like, understand IP address reputation, and analyze email activity could put that “bad” anomaly into a broader context that makes it clear either that it’s benign or that it’s malicious. Either way, complementing the initial detection with additional context and added layers can reduce the number of false negatives and false positives.
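The layered idea can be sketched as a simple score combination: an anomaly flag on its own is weak evidence, but corroborating signals push the verdict one way or the other. The signal names, weights, and cutoffs here are illustrative only.

```python
# Combine independent detection layers (all invented here) into
# one verdict, rather than trusting any single flag.
def layered_verdict(signals):
    """Weigh corroborating signals into a single verdict."""
    score = 0
    if signals.get("anomalous_traffic"):
        score += 1   # unusual, but not necessarily bad
    if signals.get("known_bad_behavior"):
        score += 3   # matches a malicious behavior profile
    if signals.get("bad_ip_reputation"):
        score += 2   # destination has poor reputation
    if signals.get("suspicious_email"):
        score += 2   # correlated phishing activity
    if score >= 4:
        return "malicious"
    return "likely benign" if score <= 1 else "needs review"

# An anomaly alone: probably a benign oddity.
print(layered_verdict({"anomalous_traffic": True}))
# The same anomaly plus corroborating layers: a clear verdict.
print(layered_verdict({"anomalous_traffic": True,
                       "known_bad_behavior": True,
                       "bad_ip_reputation": True}))
```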

Looking for the Best Solution

The questions included above reveal a lot about a security solution. Sometimes vendors give good answers on some questions and bad answers on others. Just keep in mind what you’re trying to achieve – to make a determination about the vendor and their technology. Are they just claiming AI without the necessary skills, staff or expertise to use it effectively? Or are they staffed with data scientists who have been working with AI for decades and can therefore put it to the best possible use in creating a truly effective security product?