Before we begin, let us preface this by saying that this is not an opinion piece. This article is the product of our own experience combined with breach-related data from various sources collected over the past decade. While we, too, like the idea of detailed vulnerability disclosure from a “feel good” perspective, the reality of it is anything but good. Evidence suggests that the only form of responsible disclosure is one that results in the silent fixing of critical vulnerabilities. Anything else arms the enemy.

Want to know the damage a single exposed vulnerability can cause? Just look at what’s come out of MS17-010, a vulnerability in Microsoft Windows that is the basis for many of the recent cyberattacks that have hit the news, like WannaCry, Petya, and NotPetya.

However, it didn’t become a problem until the vulnerability was exposed to the public. Our intelligence agencies knew about the vulnerability, kept it a secret, and covertly exploited it with a tool called EternalBlue. Only after that tool was leaked, and the vulnerability it exploited was revealed to the public, did it become a problem. In fact, the first attacks happened 59 days after March 14, 2017, which was when Microsoft published the patch that fixed the MS17-010 vulnerability. 100% of the WannaCry, Petya, and NotPetya infections occurred no less than two months after a patch was provided.
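The 59-day figure is easy to verify with a quick date calculation: Microsoft published the MS17-010 patch on March 14, 2017, and the first large-scale WannaCry infections began on May 12, 2017.

```python
from datetime import date

patch_released = date(2017, 3, 14)   # Microsoft publishes the MS17-010 patch
wannacry_begins = date(2017, 5, 12)  # first large-scale WannaCry infections

# Days the public had to patch before the first attacks
print((wannacry_begins - patch_released).days)  # 59
```

Two months of lead time, and the attacks still succeeded at scale.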

Why? The key word in the opening paragraph is not “vulnerability.” It’s “exposed.” Many security experts and members of the public believe that exposing vulnerabilities to the public is the best way to fix a problem. However, it is not. It’s actually one of the best ways to put the public at risk.

Here’s an analogy that can help the reader understand the dangers of exposing security vulnerabilities. Let’s say everyone on earth has decided to wear some kind of body armor sold by a particular vendor. The armor is touted as an impenetrable barrier against all weapons. People feel safe while wearing the armor.

Let’s say a very smart person has discovered a vulnerability that allows the impenetrable defense to be subverted completely, rendering the armor useless. Our very smart individual has a choice to make. What do they do?

Choice One: Sell it to intelligence agencies or law enforcement

Intelligence agencies and law enforcement are normally extremely judicious about using any sort of zero-day exploit. Because zero-day exploits target unknown vulnerabilities using unknown methods, they are covert by nature. If an intelligence agency stupidly started exploiting computers left and right with their zero-day knowledge, they’d lose their covert advantage and their mission would be compromised. It is for this reason that the argument that intelligence or law enforcement agencies would use zero-day exploits for mass compromise is nonsensical. This argument is often perpetuated by people who have no understanding of, or experience in, the zero-day industry.

For many hackers this is the best and most ethical option. Selling to the “good guys” also pays very well. The use cases for sold exploits include things like combating child pornography and terrorism. Despite this, public perception of the zero-day exploit market is quite negative. The truth is that if agencies are targeting you with zero-day exploits, then they think that you’ve done something sufficiently bad to be worth the spend.

Choice Two: Sit on it

Our very smart individual could just forget they found the problem. This is security through obscurity. It’s quite hard for others to find vulnerabilities when they have no knowledge of them. This is the principle that intelligence agencies use to protect their own hacking methods: they simply don’t acknowledge that those methods exist. The fewer people who know about a vulnerability, the lower the risk to the public. Additionally, it is highly unlikely that low-skilled hackers (who make up the majority) would be able to build their own zero-day exploit anyway. Few hackers are truly fluent in vulnerability research and quality exploit development.

Some think that this is an irresponsible act. They think that vulnerabilities must be exposed because only then can they be fixed, and that failing to do so puts everyone at increased risk. This thinking is unfortunately flawed, and the opposite is true. Today’s reports show that over 99% of all breaches are attributable to the exploitation of known vulnerabilities for which patches already exist, a percentage that has been consistent for nearly a decade.

Choice Three: Vendor notification and silent patching

Responsible disclosure means that you tell the vendor what you found and, if possible, help them find a way to fix it. It also means that you don’t publicize what you found, which helps prevent arming the bad guys with your knowledge. The vendor can then take that information and create and push a silent patch. No one is the wiser other than the vendor and our very smart individual.

Unfortunately, there have been cases where vendors have pursued legal action against security researchers who came to them with vulnerabilities. Organizations like the Electronic Frontier Foundation have published guides to help researchers disclose responsibly, but legal issues can still arise.

This fear of legal action can also prompt security researchers to disclose vulnerabilities publicly, on the theory that any retaliation will generate bad PR for the company. While this helps protect the researcher, it also leads to the same problems we discussed before.

Choice Four: Vendor notification and publishing after patch release

Some researchers try to strike a compromise with vendors by saying they won’t publicly release the information they discovered until a patch is available. But given the slow speed of patching (or the complete absence of patching) across all vulnerable systems, this is still highly irresponsible. Not every system can or will be patched as soon as a patch is released (as was the case with MS17-010). Patches can cause downtime, bring down critical systems, or cause other pieces of software to stop functioning.

Critical infrastructure operators and large companies often cannot afford an interruption. This is one reason major companies can take so long to patch vulnerabilities, even long after those vulnerabilities were published.

Choice Five: Exploit the vulnerability on their own for fun and profit

The media would have you believe that every discoverer of a zero-day vulnerability is a malicious hacker bent on infecting the world. And true, it is theoretically possible that a malicious hacker can find and exploit a zero-day vulnerability. However, most malicious hackers are not subtle about their use of any exploit. They are financially motivated and generally focused on wide-scale, high-volume infection or compromise. They know that once they exploit a vulnerability in the wild it will get discovered and a patch will be released. Thus, they go for short-term gain and hope they don’t get caught.

Choice Six: Expose it to the public

This is a common practice, and it is the most damaging from a public-risk perspective. The thinking goes that if the public is notified, then vendors will be pressured to act fast and fix the problem. The assumption is also that the public will act quickly to patch before a hacker can exploit their systems. While this thinking seems rational, it is, and has always been, entirely wrong.

In 2015, the Verizon Data Breach Investigations Report showed that half of the vulnerabilities disclosed in 2014 were being actively exploited within one month of disclosure. The trend of rapid exploitation of published vulnerabilities hasn’t changed. In 2017, the number of breaches was up 29 percent from 2016, according to the Identity Theft Resource Center. A large portion of the breaches in 2017 are attributable to public disclosure and a failure to patch.

So what is the motivator behind public disclosure? There are three primary motivators. The first is that the revealer believes that disclosure of vulnerability data is an effective method for combating risk and exposure. The second is that the revealer feels the need to defend or protect themselves from the vulnerable vendor. The third is that the revealer wants their ego stroked. Unfortunately, there is no way to tell the public without also telling every bad guy out there how to subvert the armor. It is much easier to build a new weapon from a vulnerability and use it than it is to create a solution and enforce its implementation.

Exposing vulnerability details to the public while the public is still vulnerable is the height of irresponsible disclosure. It may feel good and be done with good intentions, but the end result is always increased public risk (the numbers don’t lie).

It is almost certain that if EternalBlue had never been leaked by ShadowBrokers, then WannaCry, Petya, and NotPetya would never have come into existence. This is just one of countless examples. Malicious hackers know that businesses don’t patch their vulnerabilities properly or in a timely manner. They know that they don’t need zero-day exploits to breach networks and steal your data. The only things they need are public vulnerability disclosure and a viable target to exploit. The defense is logically simple, though it can be challenging for some to implement: patch your systems.
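That defense reduces to a simple inventory check: compare the patches actually installed on each system against everything the vendor has released, and treat any gap as exposure. A minimal sketch of that comparison (the patch identifiers below are hypothetical placeholders, not real advisories):

```python
def missing_patches(installed, required):
    """Return the required patch IDs that are not yet installed, sorted."""
    return sorted(set(required) - set(installed))

# Hypothetical patch inventory for one host
installed = {"KB-0001", "KB-0002"}
required = {"KB-0001", "KB-0002", "KB-0003"}  # all patches the vendor has released

print(missing_patches(installed, required))  # ['KB-0003']
```

In practice the `installed` set would come from the operating system’s own patch inventory tooling and `required` from the vendor’s advisory feed; the point is that the logic itself is a set difference, nothing more.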


