Existing threat-detection systems broadly fall into two categories: automated tools that detect patterns, and human analysis. AI2's gimmick is that it mashes together a handful of different machine learning techniques and then asks its flesh-and-blood counterparts for help. When it thinks it has found a pattern amongst the noise of data, it offers it up to a person for a second opinion. Over time, AI2 learns from its errors and from what the human experts tell it. As Arnaldo says, "it continuously generates new models that it can refine in as little as two hours."
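AI2's actual models aren't public, but the feedback loop described above can be sketched in miniature: an automated detector flags suspicious events, an analyst labels each flag as a real attack or a false alarm, and the detector adjusts itself based on that feedback. Everything here (the scoring scheme, function names, and numbers) is hypothetical, purely to illustrate the pattern:

```python
def flag_events(scores, threshold):
    """Flag any event whose anomaly score exceeds the threshold."""
    return [s for s in scores if s > threshold]

def update_threshold(threshold, analyst_labels):
    """Raise the threshold in proportion to the analyst's false-positive rate,
    so the detector flags fewer benign events next round."""
    fp_rate = analyst_labels.count(False) / len(analyst_labels)
    return threshold * (1 + fp_rate)

# Toy anomaly scores, e.g. failed logins per account in an hour
scores = [1, 2, 12, 3, 7, 2, 1]
threshold = 5

flagged = flag_events(scores, threshold)          # first pass: [12, 7]
analyst_labels = [True, False]                    # analyst: 12 is an attack, 7 is benign
threshold = update_threshold(threshold, analyst_labels)  # threshold rises to 7.5
flagged = flag_events(scores, threshold)          # next pass: only [12]
```

The real system is far richer (it combines several unsupervised models and retrains supervised ones on the analyst's labels), but the core dynamic is the same: human corrections shrink the pile of false alarms over successive passes.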

On its first day of operation, AI2 flagged 200 events that it determined to be cyberattacks to its masters. After a handful of days picking up the dos and don'ts from operators, that figure had dropped to just 40. That frees up fleshy operators to concentrate on the rest of their job and to give each flagged incident more of their attention. According to Nitesh Chawla, professor of computer science at the University of Notre Dame, AI2 "has the potential to become a line of defense against attacks such as fraud, service abuse and account takeover." Maybe Mossack Fonseca will be second on the list of clients racing to MIT's door.