Still, in light of what we already knew about the National Security Agency’s own efforts along similar lines, thanks to Edward Snowden’s disclosures about the agency’s Tailored Access Operations division, this is—at least from a policy perspective—not so much revelation as confirmation. Moreover, there’s little here to suggest surveillance that’s either aimed at Americans or indiscriminate, the features that made Snowden’s leaks about NSA surveillance so politically explosive. One of the more widely reported projects in Vault 7, for instance, has been the Doctor Who–referencing “Weeping Angel” implant, which can turn Samsung televisions into surveillance microphones even when they appear to be turned off. Yet, at least at the time the documentation in the WikiLeaks release was written, Weeping Angel appeared to require physical access to be installed—which makes it essentially a fancy and less detectable method of bugging a particular room once a CIA agent has managed to get inside. This is all fascinating to surveillance nerds, to be sure, but without evidence that these tools have been deployed either against inappropriate targets or on a mass scale, it’s not intrinsically all that controversial. Finding clever ways to spy on people is what spy agencies are supposed to do.

What is genuinely embarrassing for the intelligence community, however, is the fact of the leak itself—a leak encompassing not only thousands of pages of documentation but, according to WikiLeaks, the actual source code of the hacking tools those documents describe. While WikiLeaks has not yet published that source code, it claims that the contents of Vault 7 have been circulating “among former U.S. government hackers and contractors in an unauthorized manner,” which, if true, would make it far more likely that other parties—such as foreign intelligence services—have been able to obtain the same information. Worse, this comes just months after the even more disastrous Shadow Brokers leak, which published a suite of exploits purportedly used by the NSA-linked Equation Group to compromise the routers and firewalls relied upon by many of the world’s largest companies to secure their corporate networks.

That’s of great significance for the ongoing debate over how intelligence agencies should respond when they discover vulnerabilities in widely used commercial software or firmware. Do they inform the vendor that they’ve got a security hole that could put their users at risk, or do they keep quiet and make use of the vulnerability to enable their own surveillance? If the latter, how long do they wait before disclosing? In 2014, the White House’s cybersecurity czar attempted to reassure the public that the government’s mechanism for making such decisions—an informal “Vulnerability Equities Process” designed to weigh the intelligence benefit of keeping an exploit secret against the public’s interest in closing security holes—was strongly biased in favor of disclosure. The number of critical vulnerabilities that we now know have remained undisclosed, sometimes for years, should cast serious doubt on that assertion. But the means by which we know it should strengthen the case for disclosure still further.