Used properly, bug bounty platforms connect security researchers with organizations wanting extra scrutiny. In exchange for reporting a security flaw, the researcher receives payment (a bounty) as a thank you for doing the right thing. However, CSO's investigation shows that the bug bounty platforms have turned bug reporting and disclosure on its head, a shift that multiple expert sources, including HackerOne's former chief policy officer, Katie Moussouris, call a "perversion."
Bug bounty vs. VDP
A vulnerability disclosure program (VDP) is a welcome mat for concerned citizens to report security vulnerabilities. Every organization should have a VDP. In fact, the US Federal Trade Commission (FTC) considers a VDP a best practice, and has fined companies for poor security practices, including failing to deploy a VDP as part of their security due diligence. The US Department of Homeland Security (DHS) issued a draft order in 2019 mandating all federal civilian agencies deploy a VDP.
Regulators often view deploying a VDP as minimal due diligence, but running a VDP is a pain. A VDP looks like this: Good-faith security researchers tell you your stuff is broken, give you 90 days max to fix it, and when the time is up they call their favorite journalist and publish the complete details on Twitter, plus a talk at Black Hat or DEF CON if it’s a really juicy bug.
“Bug bounties are best when transparent and open. The more you try to close them down and place NDAs on them, the less effective they are, the more they become about marketing rather than security,” Robert Graham of Errata Security tells CSO.
Leitschuh, the Zoom bug finder, agrees. “This is part of the problem with the bug bounty platforms as they are right now. They aren’t holding companies to a 90-day disclosure deadline,” he says. “A lot of these programs are structured on this idea of non-disclosure. What I end up feeling like is that they are trying to buy researcher silence.”
The bug bounty platforms’ NDAs prohibit even mentioning the existence of a private bug bounty. Tweeting something like “Company X has a private bounty program over at Bugcrowd” would be enough to get a hacker kicked off their platform.
The carrot for researcher silence is the money — bounties can range from a few hundred to tens of thousands of dollars — but the stick to enforce silence is “safe harbor,” an organization’s public promise not to sue or criminally prosecute a security researcher attempting to report a bug in good faith.
The US Department of Justice (DOJ) published guidelines in 2017 on how to make a promise of safe harbor. Severe penalties for illegal hacking should not apply to a concerned citizen trying to do the right thing, they reasoned.
Want safe harbor? Sign this NDA
Sign this NDA to report a security issue or we reserve the right to prosecute you under the Computer Fraud and Abuse Act (CFAA) and put you in jail for a decade or more. That’s the message some organizations are sending with their private bug bounty programs.
The PayPal terms, published and facilitated by HackerOne, turn the idea of a VDP with safe harbor on its head. The company “commits that, if we conclude, in our sole discretion, [emphasis ours] that a disclosure respects and meets all the guidelines of these Program Terms and the PayPal Agreements, PayPal will not bring a private action against you or refer a matter for public inquiry.”
The only way to qualify for safe harbor under that "sole discretion" standard is to agree to their NDA. "By providing a Submission or agreeing to the Program Terms, you agree that you may not publicly disclose your findings or the contents of your Submission to any third parties in any way without PayPal's prior written approval."
HackerOne underscores that safe harbor can be contingent on agreeing to program terms, including signing an NDA, in their disclosure guidelines. Bug finders who don’t wish to sign an NDA to report a security flaw may contact the affected organization directly, but without safe harbor protections.
“Submit directly to the Security Team outside of the Program,” they write. “In this situation, Finders are advised to exercise good judgement as any safe harbor afforded by the Program Policy may not be available.”
Security researchers concerned about safe harbor protection should not rest easy with most safe harbor language, Electronic Frontier Foundation (EFF) Senior Staff Attorney Andrew Crocker tells CSO. "The terms of many bug bounty programs are often written to give the company leeway to determine 'in its sole discretion' whether a researcher has met the criteria for a safe harbor," Crocker says. "That obviously limits how much comfort researchers can take from the offer of a safe harbor."
“EFF strongly believes that security researchers have a First Amendment right to report their research and that disclosure of vulnerabilities is highly beneficial,” Crocker adds. In fact, many top security researchers refuse to participate on bug bounty platforms because of required NDAs.
Health insurance in the US is typically provided by employers to employees, not to independent contractors. Yet legal experts tell CSO that by classifying bug hunters as independent contractors, the bug bounty platforms violate both California and US federal labor law.
California AB 5, the Golden State’s new law to protect “gig economy” workers that came into effect in January 2020, clearly applies to bug bounty hunters working for HackerOne, Bugcrowd and Synack, Leanna Katz, an LLM candidate at Harvard Law School researching legal tests that distinguish between independent contractors and employees, tells CSO.
“My legal analysis suggests those workers [on bug bounty platforms] should at least be getting minimum wage, overtime compensation, and unemployment insurance,” Dubal tells CSO. “That is so exploitative and illegal,” she adds, saying that “under federal law it is conceivable that not just HackerOne but the client is a joint employer [of bug finders]. There might be liability for companies that use [bug bounty platform] services.”
“Finders are not employees,” Rice says, a sentiment echoed by Bugcrowd founder Ellis and Synack founder Jay Kaplan. Synack’s response is representative of all three platforms: “Like many companies in California, we’re closely monitoring how the state will apply AB 5, but we have a limited number of security researchers based in California and they represent only a fractional percentage of overall testing time,” a Synack representative tells CSO.
Using gig economy platform workers to discover and report security flaws may also have serious GDPR consequences when a security researcher discovers a data breach.
Bug bounty platforms may violate GDPR
When is a data breach not a data breach?
When a penetration testing consultancy with vetted employees discovers the exposed data.
A standard penetration testing engagement contract includes language that protects the penetration testers — in short, it's not a crime if someone asks you to break into their building or corporate network on purpose, and signs a contract indemnifying you.
This includes data breaches discovered by penetration testers. Because the pen testers are brought under the umbrella of the client, say "Company X," any exposed Company X data they discover is not considered breached — legally, it is the same as a Company X employee discovering it — and GDPR's data breach notification rules don't come into play.
What about unvetted bug bounty hunters who discover a data breach as part of a bug bounty program? According to Joan Antokol, a GDPR expert, the EU’s data breach notification regulation applies to bug bounty platforms. Antokol is partner at Park Legal LLC and a longstanding member of the International Working Group on Data Protection in Technology (IWGDPT), which is chaired by the Berlin Data Protection Commissioner. She works closely with GDPR regulators.
“If a free agent hacker who signed up for a project via bug bounty companies to try to find vulnerabilities in the electronic systems of a bug bounty client (often a multinational company), was, in fact, able to access company personal data of the multinational via successful hacking into their systems,” she tells CSO, “the multinational (data controller) would have a breach notification obligation under the GDPR and similar laws of other countries.”
Private bug bounties are not ISO compliant
ISO 29147 standardizes how to receive security bug reports from an outside reporter and how to disseminate security advisories to the public.
ISO 30111 documents the internal handling and remediation of bug reports within an affected software maker. ISO provided CSO with a review copy of both standards, and the language is unambiguous.
These standards make clear that private bug bounty NDAs are not ISO compliant. “When non-disclosure is a required term or condition of reporting bugs via a bug bounty platform, that fundamentally breaks the process of vulnerability disclosure as outlined in ISO 29147,” Moussouris says. “The purpose of the standard is to allow for incoming vulnerability reports and [her emphasis] release of guidance to affected parties.”
ISO 29147 lists four major goals, including “providing users with sufficient information to evaluate risk due to vulnerabilities,” and lists eight different reasons why publishing security advisories is a standardized requirement, including “informing public policy decisions” and “transparency and accountability.” Further, 29147 says that public disclosure makes us all more secure in the long term. “The theory supporting vulnerability disclosure holds that the short-term risk caused by public disclosure is outweighed by longer-term benefits from fixed vulnerabilities, better informed defenders, and systemic defensive improvements.”