You enter your password incorrectly too many times and get locked out of your account; your colleague sets up access to her work email on a new device; someone in your company clicks on an emailed “Google Doc” that is actually a phishing link — initially thought to be how the recent spread of the WannaCry computer worm began.

Each of these events leaves a trace in the form of information flowing through a computer network. But which ones should the security systems protecting your business against cyber attacks pay attention to and which should they ignore? And how do analysts tell the difference in a world that is awash with digital information?

The answer could lie in human researchers tapping into artificial intelligence and machine learning, harnessing both the cognitive power of the human mind and the tireless capacity of a machine. Not only will the combination of person and machine build stronger defences, but its ability to protect networks should also improve over time.

A large company sifts through 200,000 so-called “security events” every day to figure out which present real threats, according to Caleb Barlow, vice-president of threat intelligence for IBM Security. These include anything from staff forgetting their passwords and being locked out of the system, to the signatures of devices used to access networks changing, to malware attempting to gain entry to corporate infrastructure. “A level of rapid-fire triage is desperately needed in the security industry,” Mr Barlow says.

The stakes for businesses are high. Last year, 4.2bn records were reported to have been exposed globally in more than 4,000 security breaches, revealing email addresses, passwords, social security numbers, credit card and bank accounts, and medical data, according to analysis by Risk Based Security, a consultancy.

International Data Corporation, a US market research company, forecasts businesses will spend more than $100bn by 2020 protecting themselves from hacking, up from about $74bn in 2016.

Artificial intelligence can improve threat detection, shorten defence response times and refine techniques for differentiating between real efforts to breach security and incidents that can safely be ignored.

“Speed matters a lot. [Executing an attack] is an investment for the bad guys,” Mr Barlow says. “They’re spending money. If your system is harder to get into than someone else’s, they are going to move on to something that’s easier.”

Daniel Driver of Chemring Technology Solutions, part of the UK defence group, says: “Before artificial intelligence, we’d have to assume that a lot of the data — say 90 per cent — is fine. We would only have bandwidth to analyse the remaining 10 per cent.

“The AI mimics what an analyst would do, how they look at data, how and why they make decisions . . . It’s doing a huge amount of legwork upfront, which means we can focus our analysts’ time. That saves human labour, which is far more expensive than computing time.”

IBM is also applying AI to security in the form of its Watson “cognitive computing” platform. The company has taught Watson to read through vast quantities of security research. Some 60,000 security-related blog posts are published every month and 10,000 reports come out every year, IBM estimates. “The juicy information is in human-readable form, not machine data,” Mr Barlow says.

The company has about 50 customers using Watson as part of its security intelligence and analytics platform. The program learns from every piece of information it takes in.


“It went from literally being a grade-school kid. We had to teach it that a bug is not an insect, it’s a software defect. A back door doesn’t go into a house, it’s a vulnerability. Now it’s providing really detailed insights on particular [threats] and how their campaigns are evolving. And that’s just in a matter of months,” Mr Barlow says. “The more it learns, the faster it gets smarter.”

IBM says Watson performs 60 times faster than a human investigator and can reduce the time spent on complex analysis of an incident from an hour to less than a minute.

Another even more futuristic technology could make Watson look as slow as humans: quantum computing. While machine learning and AI speed up the laborious process of sorting through data, the aim is that quantum computing will eventually be able to look at every data permutation simultaneously. Computers represent data as ones or zeros. But Mr Driver says that in a quantum computer these can be “both [zeros and ones] and neither at the same time. It can have superpositions. It means we can look through everything and get information back incredibly quickly.

“The analogy we like to use is that of a needle in a haystack. A machine can be specially made to look for a needle in a haystack, but it still has to look under every piece of hay. Quantum computing means, I’m going to look under every piece of hay simultaneously and find the needle immediately.”
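Mr Driver’s “both and neither” description is a loose account of superposition. In the standard notation of quantum computing, a single qubit’s state is written as:

```latex
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
```

Measuring the qubit yields 0 with probability $|\alpha|^2$ and 1 with probability $|\beta|^2$, and a register of $n$ qubits can be placed in a superposition over all $2^n$ bit patterns at once. For unstructured search — the needle in the haystack — Grover’s algorithm turns this into roughly $\sqrt{N}$ lookups instead of $N$: a dramatic speedup, though not quite the instantaneous search of the analogy.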

He estimates that quantum computing for specific tasks will be more widely available over the next three to five years. “On this scale, the technology is still a way off, but there are companies that are developing it.”

One company pushing to make quantum computing commercially viable is Canada-based D-Wave, whose customers include Nasa, Lockheed Martin and Google. In January the company sold its newest, most powerful machine to a cyber security company called Temporal Defense Systems, which is using it to work on complex cyber security problems.

But there are risks to using AI technology in security systems. After all, machines that can be taught to think like humans can also be tricked.


“The AI itself is now becoming a target,” says Roman Yampolskiy, a professor of computer engineering and computer science at the University of Louisville in the US, who studies artificial intelligence and security.

Hackers may exploit machine learning by gradually teaching a security system to treat unusual behaviour as normal, a technique known as “behavioural drift”, he says.
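A deliberately simple sketch shows how such drift could work in principle. The detector below is a hypothetical illustration, not any real security product: it flags traffic more than 50 per cent above its recent average, and it keeps learning from everything it sees — including an attacker’s slow ramp-up.

```python
from collections import deque
from statistics import mean

class MovingAverageDetector:
    """Toy anomaly detector: flag any value more than 50 per cent
    above the average of the last `window` observations."""

    def __init__(self, window=50, factor=1.5):
        self.history = deque(maxlen=window)
        self.factor = factor

    def observe(self, value):
        """Return True if `value` looks anomalous, then learn from it."""
        anomalous = bool(self.history) and value > self.factor * mean(self.history)
        self.history.append(value)  # keeps learning from everything, drift included
        return anomalous

detector = MovingAverageDetector()
for _ in range(50):           # establish a normal baseline of ~100 units
    detector.observe(100)
print(detector.observe(500))  # True: a sudden five-fold spike is flagged

detector = MovingAverageDetector()
for _ in range(50):
    detector.observe(100)
# The same five-fold level, reached one unit at a time, is never flagged:
print(any(detector.observe(100 + step) for step in range(1, 401)))  # False
```

Because each small step stays within the tolerance of the drifting average, the detector’s notion of “normal” is dragged along with the attack — which is exactly why researchers worry about the learning system itself becoming the target.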

AI can also be used by attackers to fake human voices and create video images that could let criminals into your network. “If you get a call from someone whose voice you recognise and they say, ‘I don’t have time to talk, give me your password’, you will give it to them,” Prof Yampolskiy says.

Despite these advances in technology, the core challenge of providing security has not changed, says Mr Driver of Chemring. “It’s always a cat-and-mouse thing. As soon as you put the gate up higher, then the people will jump higher to get over it.”

———————

1. On Friday May 12 2017, mobile operator Telefónica was among the first large organisations to report infection by WannaCry

2. By late morning, hospitals and clinics across the UK began reporting problems to the national cyber incident response centre

3. In Europe, French carmaker Renault was hit; in Germany, Deutsche Bahn became another high-profile victim

4. In Russia, the ministry of the interior, mobile phone provider MegaFon, and Sberbank became infected

5. Although WannaCry’s spread had already been checked, the US was not entirely spared, with FedEx being the highest-profile victim

———————