AI fuzzing definition

AI fuzzing uses machine learning and similar techniques to find vulnerabilities in an application or system. Fuzzing has been around for a while, but it's been too hard to do and hasn't gained much traction with enterprises. Adding AI promises to make the tools easier to use and more flexible.

That's a good news, bad news kind of situation. The good news is that enterprises and software vendors will have an easier time finding potentially exploitable vulnerabilities in their systems so they can fix them before bad guys get to them.

The bad news is that the bad guys will have access to this technology as well and will soon start to find zero-day vulnerabilities on a massive scale. Australian tech consultancy Rightsize Technology named it one of the top ten security threats of 2019.

How fuzzing works

In traditional fuzzing, you generate a lot of different inputs to an application in an attempt to crash it. Since every application accepts inputs in different ways, that requires a lot of manual setup. And you can't just try every possible input in a brute-force attack and see how the application responds. "That would take a very long time," says Daniel Crowley, research director at IBM's Security X-Force Red. Just trying every possible combination of characters in a single ten-character input field could take months or even years, he says.
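A back-of-the-envelope calculation shows why. The character set and throughput below are illustrative assumptions, not figures from Crowley, but any plausible numbers lead to the same conclusion:

```python
# Why brute-forcing a ten-character field is infeasible.
# Assumptions (illustrative): 95 printable ASCII characters,
# and a generous rate of one million inputs per second.
PRINTABLE_CHARS = 95
FIELD_LENGTH = 10
INPUTS_PER_SECOND = 1_000_000

search_space = PRINTABLE_CHARS ** FIELD_LENGTH   # ~6 x 10^19 candidate inputs
seconds = search_space / INPUTS_PER_SECOND
years = seconds / (60 * 60 * 24 * 365)

print(f"{search_space:.2e} inputs, roughly {years:,.0f} years to exhaust")
```

Even at a million inputs per second, exhausting a single ten-character field takes on the order of millions of years, which is why fuzzers must be selective about what they try.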

Virtually every application out there is more complex than that, so fuzzers try things at random and use a variety of strategies to try the most likely prospects. AI tools can be useful in helping generate test cases, says Crowley. "But you can also use it in the post-mortem phase of fuzzing, to determine if the crashes you find are exploitable or not. Not every bug you find is a security bug."
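The core loop of a mutation-based fuzzer is simple enough to sketch in a few lines of Python. The `parse_record` target and the single byte-flip mutation here are hypothetical stand-ins for a real application entry point and the far richer mutation strategies production fuzzers use:

```python
import random

def parse_record(data: bytes) -> None:
    # Hypothetical target standing in for the application under test:
    # it mishandles (raises on) one specific malformed input shape.
    if len(data) > 4 and data[0] == 0xFF:
        raise ValueError("unhandled input")

def mutate(seed: bytes) -> bytes:
    # Apply one random byte-level mutation to a known-good seed input.
    data = bytearray(seed)
    data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz(seed: bytes, iterations: int = 50_000) -> list:
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            parse_record(candidate)
        except Exception:
            # Record the input for post-mortem triage -- as Crowley notes,
            # a crash still has to be analyzed to see if it's exploitable.
            crashes.append(candidate)
    return crashes

crashes = fuzz(b"hello world")
print(f"found {len(crashes)} crashing inputs")
```

Random mutation of a valid seed keeps inputs close to the formats the application expects, which is far more productive than generating inputs from scratch.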

Fuzzing technology has been around for a long time, he adds. Even intelligent fuzzing isn't completely new. It just hasn't been used much outside of academia. "There haven't been a lot of good commercial tools," says Jared DeMott, former NSA analyst and founder at VDA Labs.

Fuzzing falls under the category of dynamic analysis, DeMott explains, which is difficult to integrate into the development lifecycle. Static analysis, which scans the source code of an application, is much easier, he says. "And there are a fair number of tools in the commercial space." So, enterprises are more likely to use static analysis, he says.

AI fuzzing tools

That's all now changing, he says, with companies offering fuzzing as a service making the technology simpler to deploy. "I'm really excited about Microsoft Security Risk Detection," he says. "What's neat is that they're combining a web fuzzer into it, too. In the past, you needed different tools."

The top AI fuzzing tools include Microsoft Security Risk Detection (MSRD), the open-source American Fuzzy Lop (AFL), Fuzzbuzz, and Google's ClusterFuzz.

MSRD uses an intelligent constraint algorithm approach. The other main option for intelligent fuzzing is genetic algorithms, he adds. Both are ways in which the fuzzing tool can zero in on the most promising paths to find a vulnerability.

Genetic algorithms are used in the American Fuzzy Lop open-source tool set, which is at the heart of a new cloud-based product, Fuzzbuzz. AFL is also part of Google's ClusterFuzz project.
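The genetic approach can be sketched as a loop that mutates a corpus of inputs and keeps any mutant that exercises new code paths. This toy version is only loosely modeled on AFL-style fuzzing: the `check_header` target and its returned "coverage" set are hypothetical stand-ins for the compiled-in edge-coverage instrumentation real tools use:

```python
import random

def check_header(data: bytes) -> set:
    # Hypothetical target; the returned set of branch labels is a toy
    # stand-in for real coverage instrumentation.
    covered = {"entry"}
    if len(data) >= 2 and data[0] == ord("P"):
        covered.add("magic_1")
        if data[1] == ord("K"):
            covered.add("magic_2")
            raise ValueError("crash in archive handler")  # the planted bug
    return covered

def mutate(seed: bytes) -> bytes:
    data = bytearray(seed)
    data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def genetic_fuzz(iterations: int = 50_000):
    corpus = [b"AAAA"]   # population of "interesting" inputs to breed from
    seen = set()         # coverage observed so far
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        try:
            covered = check_header(candidate)
        except Exception:
            return candidate          # crashing input found
        if not covered <= seen:
            seen |= covered           # new coverage: keep the input
            corpus.append(candidate)  # so its offspring are mutated too
    return None

crash = genetic_fuzz()
print("crash input:", crash)
```

Because an input that hits the first magic byte survives into the corpus, the fuzzer reaches the two-byte condition in thousands of iterations rather than the millions a blind search would need. That selective pressure is what lets genetic fuzzers zero in on promising paths.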

"We're starting to see fuzzers as a service come out," says DeMott. "I'm hoping people will use it. Even if it's hard, it's still worth doing. It's worth it to make your products safe." What's exciting about the AI fuzzers is that not only do they reduce the work required, but they get better results and can dive deeper into a program and find more bugs, he adds.

One company offering fuzzing is Sunnyvale, California-based cybersecurity startup RiskSense, which provides risk assessments to enterprise customers. "We don't call it fuzzing as a service, we call it attack validation as a service," says Srinivas Mukkamala, the company's CEO. "But the main component is fuzzing as a service."

Today, machine learning comes in after the company finds a vulnerability, to determine whether the vulnerability is worth exploiting and whether it is likely to be exploited. The fuzzing itself is done with traditional automated methods and human oversight, he says. The company does plan to start using AI in the initial stage as well, once it has enough training data. "You want to have enough data, enough patterns, that your machine learning makes sense," Mukkamala says. "If you don't have a good data set, you're going to have a lot of false positives, and you'll have to go back and validate the results and spend a lot of time."

For enterprises setting up their own AI-powered fuzzing programs, he recommends going with a vendor. With Google, Microsoft, or another provider, you get economies of scale, Mukkamala says. "Unless you're a multinational, you won't have the right resources," he says. "The big cloud providers have more data than governments."

The one exception is for really critical systems, he says, such as a defense company's weapons system. "If I'm a government, I might not put my data on AWS for fuzzing," he says. "If it's a weapons system, they're not going to use an AI fuzzer from Google or Microsoft unless there are very stringent security requirements. But if I'm a normal organization, I'll go with the established vendors. If you look at what Google and Microsoft do, they're not always the first to do it, but they'll do it at scale."

Using AI fuzzing to find zero-day vulnerabilities

In February, Microsoft security researcher Matt Miller gave a presentation at the company's BlueHat conference in which he addressed the issue of zero-day vulnerabilities. In 2008, he says, the vast majority of Microsoft vulnerabilities were first exploited after a patch was released; only 21 percent were first exploited as zero-days. In 2018, the ratio was reversed, with 83 percent of vulnerabilities first exploited as zero-days, usually as part of a targeted attack.

So far, finding zero-days has been a challenge, and prices for new zero-days are high. One company, Zerodium, is now offering as much as $2 million for a single high-risk vulnerability with a fully functional exploit and says it can go even higher for particularly valuable zero-days. Zerodium then resells the information to an undisclosed set of a "very limited number of organizations" that are "mainly government organizations."

Average prices for zero-days are lower. The RunSafe Pwn Index, which averages exploit prices across operating systems, and includes dark web marketplaces as well as payout services and private practitioners, is currently at just over $15,000 and trending up, the company says.

It's very likely that nation-states and sophisticated cybercriminals are already using AI fuzzing, says Joe Saunders, founder and CEO at RunSafe Security. "Attackers will always go for the lowest effort." Plus, with IoT devices and connected systems in general proliferating, the potential attack surface continues to expand, he says.

Ultimately, attackers will be able to simply pick a target and then automatically mine it for zero-day exploits, says Derek Manky, global security strategist at Fortinet, Inc. "We are seeing the building blocks of where this could happen in the future," he says.

"I can guarantee that it will be a big push on the cybercrime side," says Adam Kujawa, director of Malwarebytes Labs. "And it may become a service thing. It's the future of fuzzing. Doing it manually doesn't make sense anymore when you can have an AI do it for you."

That means we'll be seeing a lot more zero-days, he says. But he doesn't expect a flood of AI-discovered zero-days to hit this year. "It's too early," Kujawa says. "The technology itself is pretty young."

It's not too soon to start preparing, though, Kujawa adds. "It's best to get ahead of it, in my opinion. Any vendor, developer, software company should be fuzzing their own software. That's the best way to prepare, to make sure you don't have those obvious holes."

Few real-world examples of AI fuzzing

Despite the hype, there are few examples of AI fuzzing being used, says Leigh-Anne Galloway, cybersecurity resilience lead at Positive Technologies. "Nowadays, fuzzing without AI is more effective," she says.

Positive Technologies has a large security research center and discovers about 700 application security vulnerabilities each year. Right now, Galloway says, traditional techniques dominate, such as fuzzing using symbolic execution. However, machine learning and other new technologies "definitely bring something new" to the field, she adds. "At the next large conference, someone will likely describe something new that will turn the industry upside down."

It's hard to tell whether criminals are already using AI fuzzing because they don't usually talk about their methods, says Andrew Howard, CTO at Kudelski Security. "Fuzzing is fuzzing; the algorithm behind the fuzzer is not likely to be discussed when a vulnerability is exposed," he says.

Since AI offers the possibility to find vulnerabilities faster and to find vulnerabilities not detected by traditional means, he expects to see an arms race here. "If the attackers are improving their fuzzing techniques, the good guys must do the same," Howard says.