Artificial Intelligence, Machine Learning, and Cybersecurity: A CISO’s Perspective

Guest blog by Bobby Singh, CISO & Global Head of Infrastructure Services at the Toronto Stock Exchange.

Artificial intelligence (AI) and machine learning (ML) are hot topics in the cybersecurity space. But what do they offer? What shortcomings must be overcome? And what must we be aware of as we look to build security solutions that will leverage AI and ML moving forward?

Before turning to these questions, I’d like to set out some working definitions. I see AI and ML as related but distinct disciplines. Machine learning is a subset of AI that is grounded in the analysis of historical data. Take the example of monitoring employee behaviour: if an employee is browsing recruitment websites at work, updating their resumé, and arriving late to work, those are patterns that machine learning can detect and flag as suspicious. By analyzing employees’ patterns of behaviour and correlating them with subsequent decisions to leave the company, machine learning would typically be able to raise a warning that “this employee is a potential flight risk.”
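To make the idea concrete, here is a toy sketch of that kind of historical pattern matching. The behavioural features, the scoring rule, and the data are all illustrative assumptions, not a real monitoring system:

```python
# Toy sketch: scoring "flight risk" against patterns seen in past leavers.
# Feature names and the matching rule are hypothetical, for illustration only.

def risk_score(employee, history):
    """Score an employee against behavioural patterns of past leavers.

    `history` is a list of (features, left_company) pairs; `employee`
    is a dict of the same boolean behavioural features.
    """
    leavers = [features for features, left in history if left]
    if not leavers:
        return 0.0
    matches, total = 0, 0
    for feature, value in employee.items():
        total += 1
        # How often this behaviour appeared among employees who left
        rate = sum(1 for f in leavers if f.get(feature)) / len(leavers)
        if value and rate > 0.5:  # behaviour is common among past leavers
            matches += 1
    return matches / total if total else 0.0

history = [
    ({"recruitment_browsing": True, "resume_updated": True, "late_arrivals": True}, True),
    ({"recruitment_browsing": True, "resume_updated": True, "late_arrivals": False}, True),
    ({"recruitment_browsing": False, "resume_updated": False, "late_arrivals": False}, False),
]
suspect = {"recruitment_browsing": True, "resume_updated": True, "late_arrivals": True}
print(risk_score(suspect, history))  # high score: potential flight risk
```

A real system would use far richer features and a trained model, but the principle is the same: the warning is only as good as the historical patterns behind it.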

In comparison, artificial intelligence would be general problem solving that emulates human problem solving. AI could be deep learning using a multitude of neural networks to solve complex problems or to make simple common-sense decisions. An example of that type of problem solving would be what the Wright brothers did with their first flight. They used trial and error and solved the problem of flight in an innovative way, using an approach that hadn’t been seen before. But we are nowhere close to achieving such an AI system.

Where are we now? AI and ML in cybersecurity today

At present, the cybersecurity solutions space is dominated by machine learning. As a CISO, I see vendors presenting solutions based on analysis of historical patterns. These solutions use past attacks as the basis for combing through current data, seeking to identify an attack pattern. In this respect, ML does offer advantages. If you’re looking for a needle in a haystack, it will let you find it in the quickest and most efficient way possible. But this presupposes that you’re looking for a needle — a defined historical pattern or a defined criterion.

Of course, there are no rules in cyber attacks, and just because our current solutions are based on historical patterns, that doesn’t mean future attacks will follow those patterns. Capable bad actors are not playing by the rules. They can learn the specifics of our security solution through reconnaissance or from compromised insiders, and then attack systems through its blind spots. Alternatively, they can manipulate the configuration of machine learning security tools, using specialised malware or compromised insiders, to create breaches in the cybersecurity protection. Even if an organization has a decent ML solution in place, it should not be complacent. The human intelligence of attackers, together with human weaknesses inside the organization (such as a lack of processes or, in an extreme case, insider cooperation with attackers), will always be a weak point that can be exploited to defeat any “artificially intelligent” security system.

Is machine learning the answer?

I would say that machine learning is part of the answer, but not a complete answer. Let me clarify this a bit. Above all, I doubt that we can ever come up with a bullet-proof solution for cybersecurity. There will always be human, process, and technology vulnerabilities in our security systems that we cannot even predict, let alone fix. It would be unrealistic to expect perfection, and artificial intelligence, like human intelligence, will always be imperfect.

Given this lack of rules in the cyber attack world, the job of machine learning is challenging. We’re trying to define rules for ML to follow, but the bad actors aren’t playing by any rules. We must guard against an over-reliance on ML, the belief that it will protect us against everything we think the bad actors can exploit. Machine learning will never catch all the anomalies or new attack vectors that someone from the outside might seek to inject into the organization. Let’s not oversell the machine learning concept in cybersecurity solutions. If we accept that we’re working within a system that doesn’t have any rules, does it make sense to use defined criteria in setting up ML?

It’s also important that machine learning isn’t given so much access that it does something that could have a negative effect. In certain use cases, for example when the impact of it going wrong is manageable, ML can be automated. But in other situations, such as when a new pattern of attack emerges, the machine shouldn’t make the judgement call. Rather, it should send out alerts, so a human expert can act.
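One way to encode that division of labour is sketched below. The signature lists, action names, and queue are hypothetical; the point is simply that automation is gated on both familiarity and impact:

```python
# Sketch: automate only well-known, low-impact responses; escalate anything
# novel or high-impact to a human analyst. All names here are illustrative.

KNOWN_SIGNATURES = {"phishing-url-v1", "credential-stuffing-v3"}
LOW_IMPACT_ACTIONS = {"quarantine_email", "block_url"}

def handle_detection(signature, proposed_action, analyst_queue):
    """Return 'automated' for safe, known cases; otherwise escalate."""
    if signature in KNOWN_SIGNATURES and proposed_action in LOW_IMPACT_ACTIONS:
        return "automated"                    # manageable impact: let ML act
    analyst_queue.append((signature, proposed_action))
    return "escalated"                        # new pattern: human judgement call

queue = []
print(handle_detection("phishing-url-v1", "block_url", queue))     # automated
print(handle_detection("unknown-pattern", "isolate_host", queue))  # escalated
```

The design choice is deliberate: the machine never takes an irreversible action on a pattern it has not seen before; it only raises an alert.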

I’m not against artificial intelligence and machine learning. They are here, and they can help us improve. But we aren’t ready to let go of the human intervention, because the machines aren’t smart enough yet. Could this change in the next few years? Perhaps.

Where do we go from here?

So where does that leave us? It’s vital that we pay attention to security basics — the hygiene of the cybersecurity world. Patching is a good example: the gaps that result from a lack of patching are hygiene problems we must fix to ensure that vulnerabilities and exposure are minimized. This doesn’t require advanced skills or a lot of R&D; it’s the basic work we should all be doing to protect our space. Attackers, for their part, are not going to use sophisticated attacks if known unpatched vulnerabilities can be easily exploited.

At the second level, we should use AI/ML as much as we can to automate protection against known attack scenarios. Here we should combine explicit business and technology rules and augment security experts with machine learning. We can implement explicit expert rules that prevent prohibited or suspicious business actions. For example, a transaction that exceeds a client’s daily limit, a user accessing a system from two continents within a short period of time, or an administrative action happening outside expected business hours could be blocked outright by the system using explicit rules.
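The three explicit rules above can be sketched as simple predicates. The field names, the daily limit, and the business-hours window are illustrative assumptions:

```python
# Sketch of explicit expert rules as predicates over an event dict.
# Thresholds and field names are illustrative, not a real policy.

from datetime import datetime

DAILY_LIMIT = 10_000  # hypothetical per-client daily transaction limit

def exceeds_daily_limit(event):
    return event.get("daily_total", 0) + event.get("amount", 0) > DAILY_LIMIT

def impossible_travel(event):
    # Same user seen on more than one continent within a short window
    return len(set(event.get("recent_continents", []))) > 1

def off_hours_admin(event):
    # Administrative action outside an assumed 09:00-18:00 business day
    hour = event["timestamp"].hour
    return bool(event.get("admin_action")) and not (9 <= hour < 18)

RULES = [exceeds_daily_limit, impossible_travel, off_hours_admin]

def should_block(event):
    """Block the action if any explicit rule fires."""
    return any(rule(event) for rule in RULES)

evt = {"amount": 500, "daily_total": 9_800, "recent_continents": ["EU"],
       "admin_action": False, "timestamp": datetime(2018, 6, 1, 14, 0)}
print(should_block(evt))  # True: the daily limit would be exceeded
```

Because each rule is an explicit, auditable predicate rather than a learned pattern, this layer can safely block actions outright, which is exactly what we should not let an ML model do on its own.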

Finally, the human intelligence of security professionals remains our last and most important line of defence. We must be aware that any AI/ML security system will have blind spots: points we aren’t monitoring. Furthermore, any security system can potentially be duped by a humanly intelligent, superior attacker. At that point, however, human security professionals will have a much easier job, since a huge number of known attacks will already have been repelled by a powerful AI/ML defence.

Download the CLX Forum book, Canadian Cybersecurity 2018: An Anthology of CIO/CISO Enterprise-Level Perspectives: http://www.clxforum.org/