When Jonathan Leitschuh found a catastrophic security vulnerability in Zoom, the popular videoconferencing platform, the company offered him money to keep quiet in the form of a bug bounty and a non-disclosure agreement (NDA) through Bugcrowd.

The security flaw affected millions of Zoom users on Mac, and Leitschuh wanted to see the issue fixed. He declined the bounty payment because of the NDA, gave Zoom an industry-standard 90-day embargo to ship a patch, and when the company failed to do so, he published his research.

Cue fireworks. Zoom got a lot of negative media attention and fixed the security flaw. Leitschuh's struggle to hold organizations accountable for their poor security posture is more common than you may think, and some security researchers feel the bug bounty platforms — HackerOne, Bugcrowd and Synack — have become marketplaces where their silence is being bought and sold to prevent public exposure of insecure practices.

Used properly, bug bounty platforms connect security researchers with organizations wanting extra scrutiny. In exchange for reporting a security flaw, the researcher receives payment (a bounty) as a thank you for doing the right thing. However, CSO's investigation shows that the bug bounty platforms have turned bug reporting and disclosure on its head, a shift that multiple expert sources, including HackerOne's former chief policy officer, Katie Moussouris, call a "perversion."

Key takeaways from CSO’s bug bounty investigation

• Bug bounty platforms use NDAs to trade bounty hunter silence for the possibility of a payout.
• All organizations need a vulnerability disclosure program (VDP); few need a bug bounty program.
• Bug bounty platforms may violate California and federal labor law, and the EU’s General Data Protection Regulation (GDPR).
• You can't outsource a VDP entirely, only very tiny pieces, per ISO standards.
• Bug bounty platforms and their use of NDAs contribute to a public safety issue due to unpatched security flaws.

Bug bounty vs. VDP

A vulnerability disclosure program (VDP) is a welcome mat for concerned citizens to report security vulnerabilities. Every organization should have a VDP. In fact, the US Federal Trade Commission (FTC) considers a VDP a best practice, and has fined companies for poor security practices, including failing to deploy a VDP as part of their security due diligence. The US Department of Homeland Security (DHS) issued a draft order in 2019 mandating all federal civilian agencies deploy a VDP.

Regulators often view deploying a VDP as minimal due diligence, but running a VDP is a pain. A VDP looks like this: Good-faith security researchers tell you your stuff is broken, give you 90 days max to fix it, and when the time is up they call their favorite journalist and publish the complete details on Twitter, plus a talk at Black Hat or DEF CON if it's a really juicy bug.

"Getting ready for a VDP is technically straightforward but politically is a harder challenge," HackerOne's co-founder and CTO Alex Rice tells CSO in defense of the practice of providing private bug bounty programs to companies that lack a VDP, citing legal, regulatory, policy and risk management concerns inside customer organizations. "Today we have people launching private bounty programs before VDPs, and that's a model that's worked well to start building that researcher relationship with a small number of hackers in a private engagement," he adds. "We could debate all day whether that’s right or not. Our conclusion is that it's right for some organizations."

The delicate balance of running a VDP and working with good-faith researchers is a win-win-win for society, for the impacted organization, and for the security researcher, but some enterprises more worried about their stock price might prefer to pay money to make that pain point go away.

Bug bounty platforms offer organizations a tempting alternative. Researchers report security flaws under NDA and are paid to keep quiet. Maybe we'll fix the issues you reported. When we get around to it.

But there are no regulatory — or even normative — requirements to deploy a bug bounty, and for many companies unprepared to process a deluge of bug reports, a bug bounty is the wrong decision.

VC-powered marketing hype

Venture capitalist-fueled dreams of building a billion-dollar unicorn cybersecurity gig economy are largely to blame for where we are now, Moussouris tells CSO.

“I want to get to 1,000,000 hackers [on our platform] … that’s really where I want us to be in the future,” HackerOne CEO Mårten Mickos told CyberScoop in July 2017. The company's February 2020 report "details the efforts and motivations of more than 600,000 individuals who represent our hacker community."

Except that 600,000 number is at least somewhat inflated. This reporter has two of those accounts, including one created, and forgotten, in 2017. Anyone can sign up for as many HackerOne or Bugcrowd accounts as they like. (Synack requires applicants to apply with a resumé before giving them access to bug bounty programs.) The real question: How many competent security researchers are finding and reporting bugs?

According to HackerOne's Rice, 9,650 HackerOne users submitted valid bug bounty vulnerability reports in 2019, with 3,150 of them sufficiently motivated and engaged to respond to the company's questionnaire.


That number of active users is far short of Mickos's lofty one million hacker goal. And as for the quality of those valid vulnerability reports… "I've seen some quote unquote valid vulnerability reports," Laurens ("lvh") Van Houtven, a secops and cryptography expert and principal at Latacora, tells CSO. "If someone asked me 'should I put this in my appsec report?', I'd say 'you can put it in there, but I will never let you live it down.'"

Moussouris, now founder and CEO of bug bounty consultancy Luta Security, questions how much of HackerOne is real. "Their latest report shows most registered users are basically either fake or unskilled," she says. "The number of people making more than $100,000 over their entire time working on the platform is in the low hundreds. That number of relatively skilled researchers hasn't changed significantly at all, making their claim to have the largest number of hackers pretty misleading."

"These commercial bug bounty platforms ... are perverting the entire ecosystem, and I want to see this stop, even if it costs me personally," Moussouris adds. As a former HackerOne exec, she would profit handsomely from any successful HackerOne public stock offering. "I am speaking to you in the opposite direction of my own personal financial gain."

HackerOne also makes a lot of noise about its "hacker millionaires," who have made more than a cumulative million dollars each since the platform launched in 2012. What was the median income of a HackerOne bug finder in 2019? What about the average? How many vulnerability reports does the median/mean hacker submit? HackerOne declined to answer these questions.

Likewise, Bugcrowd tells CSO that it has "20,000-plus active researchers on the platform with an estimate of 2 to 3 million potential whitehat hackers available around the world."

How does Bugcrowd define an "active researcher"? Is that a calendar year 2019 figure, or a cumulative number since Bugcrowd first launched in 2011? Where does the 2 to 3 million whitehat hackers number come from? "At this time, we do not publicly disclose those details," a Bugcrowd public-relations rep tells CSO.

Covering up security issues

Silence is the commodity the market appears to be demanding, and the bug bounty platforms have pivoted to sell what willing buyers want to pay for.

"Bug bounties are best when transparent and open. The more you try to close them down and place NDAs on them, the less effective they are, the more they become about marketing rather than security," Robert Graham of Errata Security tells CSO.

Leitschuh, the Zoom bug finder, agrees. "This is part of the problem with the bug bounty platforms as they are right now. They aren't holding companies to a 90-day disclosure deadline," he says. "A lot of these programs are structured on this idea of non-disclosure. What I end up feeling like is that they are trying to buy researcher silence."

The bug bounty platforms' NDAs prohibit even mentioning the existence of a private bug bounty. Tweeting something like "Company X has a private bounty program over at Bugcrowd" would be enough to get a hacker kicked off their platform.

The carrot for researcher silence is the money — bounties can range from a few hundred to tens of thousands of dollars — but the stick to enforce silence is "safe harbor," an organization’s public promise not to sue or criminally prosecute a security researcher attempting to report a bug in good faith.

The US Department of Justice (DOJ) published guidelines in 2017 on how to make a promise of safe harbor. Severe penalties for illegal hacking, the DOJ reasoned, should not apply to a concerned citizen trying to do the right thing.

Want safe harbor? Sign this NDA

Sign this NDA to report a security issue or we reserve the right to prosecute you under the Computer Fraud and Abuse Act (CFAA) and put you in jail for a decade or more. That's the message some organizations are sending with their private bug bounty programs.

Take PayPal. The VDP on its website tells all bug finders to create an account on HackerOne and agree to the terms and conditions of their private bug bounty program, including the NDA. If you report a bug any other way, PayPal explicitly refuses to offer safe harbor to bug hunters.

"You won't find VDPs on HackerOne that don't permit any type of disclosure," Rice tells CSO, which at least in the case of PayPal appears not to be true. PayPal's VDP shoehorns every bug reporter into its private bounty program on HackerOne, and the only way to report a bug in good faith with zero expectation of a bounty is to agree to that private program's NDA. (HackerOne's website may label the program a "private bug bounty" instead of a "VDP," but it remains the sole published way to report a security flaw to PayPal at the time of this writing.)

The PayPal terms, published and facilitated by HackerOne, turn the idea of a VDP with safe harbor on its head. The company "commits that, if we conclude, in our sole discretion, [emphasis ours] that a disclosure respects and meets all the guidelines of these Program Terms and the PayPal Agreements, PayPal will not bring a private action against you or refer a matter for public inquiry."

The only way to meet their "sole discretion" decision of safe harbor is if you agree to their NDA. "By providing a Submission or agreeing to the Program Terms, you agree that you may not publicly disclose your findings or the contents of your Submission to any third parties in any way without PayPal’s prior written approval."

HackerOne underscores that safe harbor can be contingent on agreeing to program terms, including signing an NDA, in their disclosure guidelines. Bug finders who don't wish to sign an NDA to report a security flaw may contact the affected organization directly, but without safe harbor protections.

"Submit directly to the Security Team outside of the Program," they write. "In this situation, Finders are advised to exercise good judgement as any safe harbor afforded by the Program Policy may not be available."

Rice says HackerOne discourages such conduct from customers and will kick companies off the platform if they take "unreasonable punitive action against finders," such as making legal threats or referring a finder to law enforcement. He points out that earlier this week HackerOne removed online voting vendor Voatz from the platform, the first time HackerOne has removed a customer from the platform. PayPal did not respond to our request for comment.

However, security researchers concerned about safe harbor protection should not rest easy with most safe harbor language, Electronic Frontier Foundation (EFF) Senior Staff Attorney Andrew Crocker tells CSO. "The terms of many bug bounty programs are often written to give the company leeway to determine 'in its sole discretion' whether a researcher has met the criteria for a safe harbor," Crocker says. "That obviously limits how much comfort researchers can take from the offer of a safe harbor."

"EFF strongly believes that security researchers have a First Amendment right to report their research and that disclosure of vulnerabilities is highly beneficial," Crocker adds. In fact, many top security researchers refuse to participate on bug bounty platforms because of required NDAs.

Fed up with bug bounty NDAs

Tavis Ormandy, the well-respected security researcher at Google Project Zero, declined to be interviewed for this article, but has previously taken a strong public stance against NDAs. In 2019 he tweeted, "I refuse to agree to terms before reporting a vulnerability," adding in a follow-up tweet, "It's like saying you're going to make a truthful, verifiable and reproducible claim about a product, but willing to give the vendor a short window to make changes first if they wish to do so. No requirement to act if they don't want to or don't care."

He's not the only security researcher who refuses to be muzzled. Varun Kakumani, recently in the news for trying to report a security flaw to Netflix that Bugcrowd triagers marked as out of scope, tells CSO that, despite being a veteran bug finder listed in the Google, Microsoft, Yahoo, Adobe and eBay halls of fame, he will never work for the bug bounty platforms and only submitted via Bugcrowd because Netflix outsources its VDP to that platform.

"There is no use of bug bounties these days," Kakumani tells CSO. "It's like a game for many people. Just follow their stupid rules and get paid. There is no value for true hacker's work these days."

Kevin Finisterre (@d0tslash), the DJI drone bug finder who famously walked away from a $30,000 bounty because the vendor demanded an NDA to cover up a data breach, doesn't mince his words about bug bounties. "Enticing me to participate in bounties at this phase in my career is a hard sell," he tells CSO. "The economy of doing bounty work makes zero sense for me in most cases."

Labor law violations

After getting burned by DJI, Finisterre now works full time doing security for an autonomous vehicle division of a large auto manufacturer and suggests bug bounties are more for younger, less established people looking to get noticed. He says that bug bounty platforms are exploiting hackers. "Most egregious to me is many of us are some form of on [the autism] spectrum and we will literally work ourselves to death hunting bugs ultimately for little return on immediate efforts," he says, adding, "No one ever mentions the lack of health insurance...."

Health insurance in the US is typically provided by employers to employees, and not to independent contractors. However, legal experts tell CSO that the bug bounty platforms violate both California and US federal labor law.

California AB 5, the Golden State's new law to protect "gig economy" workers that came into effect in January 2020, clearly applies to bug bounty hunters working for HackerOne, Bugcrowd and Synack, Leanna Katz, an LLM candidate at Harvard Law School researching legal tests that distinguish between independent contractors and employees, tells CSO.

AB 5 uses a straightforward three-pronged "ABC test" to determine if a worker is an employee or independent contractor under California law. "It is unlikely that all three elements [of the ABC test] of control, work outside the usual course of business, and independently performing the same work are met," Katz says. "Thus...hackers are likely employees under California's laws."

Veena Dubal, a law professor at University of California, Hastings, and an expert on labor law who researches the gig economy, agrees with Katz's analysis. She says that the bug bounty platforms also violate the US Fair Labor Standards Act (FLSA), which requires employers to pay a minimum wage.


Consider a finder who spends weeks or months of unpaid work to discover and document a security flaw. Someone else independently discovers, documents and submits that same bug five minutes before the first finder. Under the rules of most HackerOne and Bugcrowd bounty programs, the first submitter gets all the money, the second finder gets nothing.

"My legal analysis suggests those workers [on bug bounty platforms] should at least be getting minimum wage, overtime compensation, and unemployment insurance," Dubal tells CSO. "That is so exploitative and illegal," she adds, saying that "under federal law it is conceivable that not just HackerOne but the client is a joint employer [of bug finders]. There might be liability for companies that use [bug bounty platform] services."

"Finders are not employees," Rice says, a sentiment echoed by Bugcrowd founder Casey Ellis and Synack founder Jay Kaplan. Synack's response is representative of all three platforms: "Like many companies in California, we're closely monitoring how the state will apply AB 5, but we have a limited number of security researchers based in California and they represent only a fractional percentage of overall testing time," a Synack representative tells CSO.

Using gig economy platform workers to discover and report security flaws may also have serious GDPR consequences when a security researcher discovers a data breach.

Bug bounty platforms may violate GDPR

When is a data breach not a data breach?

When a penetration testing consultancy with vetted employees discovers the exposed data.

A standard penetration testing engagement contract includes language that protects the penetration testers — in short, it's not a crime if someone asks you to break into their building or corporate network on purpose, and signs a contract indemnifying you.

This includes data breaches discovered by penetration testers. Since the pen testers are brought under the umbrella of the client, say "Company X," any publicly exposed Company X data discovered is not considered publicly exposed, since that would legally be the same as a Company X employee discovering a data breach, and GDPR's data breach notification rules don't come into play.

What about unvetted bug bounty hunters who discover a data breach as part of a bug bounty program? According to Joan Antokol, a GDPR expert, the EU's data breach notification regulation applies to bug bounty platforms. Antokol is partner at Park Legal LLC and a longstanding member of the International Working Group on Data Protection in Technology (IWGDPT), which is chaired by the Berlin Data Protection Commissioner. She works closely with GDPR regulators.

"If a free agent hacker who signed up for a project via bug bounty companies to try to find vulnerabilities in the electronic systems of a bug bounty client (often a multinational company), was, in fact, able to access company personal data of the multinational via successful hacking into their systems," she tells CSO, "the multinational (data controller) would have a breach notification obligation under the GDPR and similar laws of other countries."

The lack of vetting of bug bounty hunters, where anyone, including this reporter, can sign up for a HackerOne or Bugcrowd account with any email address, is the key sticking point, Antokol says. "There is really no way around it when the bug bounty companies collect little more than a name (and perhaps a fictitious one at that) of the presumably ethical hacker, along with their wire transfer information or an address for the bug bounty company to send them a payment," Antokol says. "Even if the bug bounty company or multinational was able to obtain a certification from the successful hacker about no misuse of the personal data, full and irreversible erasure of the data, no sharing of the data, etc., they would not be able to ensure credibility or accountability of the hacker, so it would essentially be a sham."

With proper GDPR compliance in place, though, Antokol says, the notification obligation could perhaps be avoided.

However, all three bug bounty platforms downplayed this risk, and none of the three acknowledged any instance where a bug bounty program led to GDPR breach notification. "I don't have data on how often bounty program activity triggered a breach notification in 2019," HackerOne's Rice says. "It's quite rare, so I'd guess a handful, if there were any at all."

Bugcrowd pointed out that 90% of its bug bounty programs are invite-only private programs, meaning NDA required. "All researchers participating in private bug bounty programs," they write, "must be pre-vetted through our internal vetting process. In order to be invited to private programs, researchers must prove their abilities and trustworthiness via public programs."

Bugcrowd declined to elaborate on what the process of pre-vetting researchers actually looks like. So, this reporter signed up for an account and had immediate access to all public programs without any additional steps.

To be eligible for private programs, Bugcrowd invites researchers to fill out this Google form, which asks researchers for their LinkedIn, GitHub and Twitter accounts, "Name or Pseudonym", plus "any relevant certifications or qualifications that validate your testing credentials." The form also asks, "Would you be willing to be background checked?" noting that, "Specific customers require researchers to be background checked in order to participate in their private programs." The form warns that "if you don’t provide detailed information you will not be considered for private program invites."

Synack, which runs only private bug bounty programs, vets all its independent researchers (the "Synack Red Team"), including a criminal background check, and accepts only around 20% of applicants, a Synack representative tells CSO.

What this really highlights is that companies can use bug bounties to augment a traditional pen testing engagement, but not to replace it. Further, only a small portion of bug reporting can be outsourced, and convoluted attempts to do so might backfire.

Some things your security team has to do in house.

NDAs are not ISO compliant

Good security people are scarce, and at a time of near-zero unemployment among information security professionals, it can be tempting to outsource anything you can. But a VDP is not something you can fully outsource.

HackerOne's marketing strongly implies that its platform can help companies outsource ISO 29147 and ISO 30111, the standards that define best practices on receiving security bug reports, fixing those bugs, and publishing advisories. However, the co-author of both of those standards, Moussouris, says this is flat-out impossible.

ISO 29147 standardizes how to receive security bug reports from an outside reporter for the first time and how to disseminate security advisories to the public.

ISO 30111 documents internal digestion of bug reports and remediation within an affected software maker. ISO provided CSO with a review copy of both standards, and the language is unambiguous.

These standards make clear that private bug bounty NDAs are not ISO compliant. "When non-disclosure is a required term or condition of reporting bugs via a bug bounty platform, that fundamentally breaks the process of vulnerability disclosure as outlined in ISO 29147," Moussouris says. "The purpose of the standard is to allow for incoming vulnerability reports and [her emphasis] release of guidance to affected parties."

ISO 29147 lists four major goals, including "providing users with sufficient information to evaluate risk due to vulnerabilities," and lists eight different reasons why publishing security advisories is a standardized requirement, including "informing public policy decisions" and "transparency and accountability." Further, 29147 says that public disclosure makes us all more secure in the long term. "The theory supporting vulnerability disclosure holds that the short-term risk caused by public disclosure is outweighed by longer-term benefits from fixed vulnerabilities, better informed defenders, and systemic defensive improvements."

Contrast this with HackerOne's "5 critical components of a vulnerability disclosure policy" that lets organizations muzzle security researchers. "Set non-binding expectations for how reports will be evaluated," the company recommends. "This section can include the duration between submission and response, confirmation of vulnerability, follow-on communications, expectation of recognition, and if or when finders have permission to publicly disclose their findings." [emphasis ours]

Rice disputes the accusation that HackerOne is unable to offer ISO compliance, saying that "H1 Response [their VDP offering] allows organizations to comply with all guidance from 29147 and 30111 in its default configuration." That's like saying Monster Sugar O'Cereal is a part of a balanced breakfast. Technically true, but it doesn't add much nutrition.

"For bug bounty companies to claim they help at all with ISO 30111 shows they don't actually understand these standards or how to comply with them. It's false advertising at best, and outright lies at worst," Moussouris says. "None of what the bug bounty platforms provide has anything to do with this part of the process [ISO 30111]. They can only help with a small part of ISO 29147, which is intake and initial triage."

Triage. Welcome to a world of pain.

HackerOne "weaponized triage"

Running a VDP typically results in a trickle of reports, as only good-faith researchers expecting zero payout will contact you. Start paying a bug bounty, though, and every wannabe script kiddie looking for a quick payout will flood your inbox with garbage reports.

"When people run their own bug bounty programs, they quickly live to regret it," Latacora’s Van Houtven says. "They get thousands of reports, mostly bad reports. There's a massive incentive [for low-skilled bug finders] to spam everyone with complete bullshit."

That's not an argument to outsource bug bounties to one of the platforms, but rather to question whether your organization needs a bug bounty at all. A tsunami of bad bug reports is a problem that the bug bounty platforms both create and solve, Van Houtven says.

"HackerOne has weaponized this. They make a ton of money off triage. The problem is HackerOne has a terrible perverse incentive. They want to preserve the status quo of people submitting bullshit scanner findings," Van Houtven says. "HackerOne's job is to make bug bounties as bad as possible for everyone because they make more money that way. I think HackerOne should not exist. Their business model is misery."

Rice denies this charge. "HackerOne has every incentive to make triage a pleasant experience for hackers, our customers and our staff," he says. "We are constantly working to improve the triage process for everyone."

Responsible use of bug bounties

A company with an existing VDP and a mature bug triage and remediation team can and, in some cases, should augment their existing, robust red team with a bug bounty. But the bug bounty is the cherry on top of the cybersecurity sundae. It's nice to have, looks pretty, and adds a bit of flavor oomph.

There are layers: The first is a published VDP, at CompanyX.com/security, with a security@ email address, a PGP key (that works), and clear safe harbor language that permits researcher disclosure. The second is to employ a proper penetration testing consultancy and, if organizational size and budget warrant it, an in-house red team. Once those planks of due diligence have been laid, a bug bounty can help find issues everyone else has missed.
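That foundation layer can be made machine-discoverable with a security.txt file, an emerging convention for telling bug finders where and how to report. The sketch below is a hypothetical example; the CompanyX addresses are placeholders, not a real program:

```text
# Served at https://companyx.com/.well-known/security.txt
Contact: mailto:security@companyx.com
Encryption: https://companyx.com/security/pgp-key.txt
Policy: https://companyx.com/security
Preferred-Languages: en
```

The Policy line should point to the published VDP with its safe harbor language, and the Encryption line to the working PGP key, so a good-faith researcher can find both without guessing.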

Mature organizations can and should run their own VDP in house. If they are ready for an avalanche of dubious bug reports, they might optionally choose to run a bug bounty.

To the extent that bug bounty platforms need to exist, they are a modest value-add. "I told them before I left," Moussouris says, "If you guys can continue to reduce friction between researchers and vendors, that's a good thing. If you try to sell control, you're in the wrong space."