Bug bounty platforms buy researcher silence, violate labor laws, critics say

Used properly, bug bounty platforms connect security researchers with organizations wanting extra scrutiny. In exchange for reporting a security flaw, the researcher receives payment (a bounty) as a thank you for doing the right thing. However, CSO’s investigation shows that the bug bounty platforms have turned bug reporting and disclosure on its head, into what multiple expert sources, including HackerOne’s former chief policy officer Katie Moussouris, call a “perversion.”

Bug bounty vs. VDP

A vulnerability disclosure program (VDP) is a welcome mat for concerned citizens to report security vulnerabilities. Every organization should have a VDP. In fact, the US Federal Trade Commission (FTC) considers a VDP a best practice, and has fined companies for poor security practices, including failing to deploy a VDP as part of their security due diligence. The US Department of Homeland Security (DHS) issued a draft order in 2019 mandating all federal civilian agencies deploy a VDP.

Regulators often view deploying a VDP as minimal due diligence, but running a VDP is a pain. A VDP looks like this: Good-faith security researchers tell you your stuff is broken, give you 90 days max to fix it, and when the time is up they call their favorite journalist and publish the complete details on Twitter, plus a talk at Black Hat or DEF CON if it’s a really juicy bug.

[…]

“Bug bounties are best when transparent and open. The more you try to close them down and place NDAs on them, the less effective they are, the more they become about marketing rather than security,” Robert Graham of Errata Security tells CSO.

Leitschuh, the Zoom bug finder, agrees. “This is part of the problem with the bug bounty platforms as they are right now. They aren’t holding companies to a 90-day disclosure deadline,” he says. “A lot of these programs are structured on this idea of non-disclosure. What I end up feeling like is that they are trying to buy researcher silence.”

The bug bounty platforms’ NDAs prohibit even mentioning the existence of a private bug bounty. Tweeting something like “Company X has a private bounty program over at Bugcrowd” would be enough to get a hacker kicked off their platform.

The carrot for researcher silence is the money — bounties can range from a few hundred to tens of thousands of dollars — but the stick to enforce silence is “safe harbor,” an organization’s public promise not to sue or criminally prosecute a security researcher attempting to report a bug in good faith.

The US Department of Justice (DOJ) published guidelines in 2017 on how to make a promise of safe harbor. Severe penalties for illegal hacking should not apply to a concerned citizen trying to do the right thing, they reasoned.

Want safe harbor? Sign this NDA

Sign this NDA to report a security issue or we reserve the right to prosecute you under the Computer Fraud and Abuse Act (CFAA) and put you in jail for a decade or more. That’s the message some organizations are sending with their private bug bounty programs.

[…]

The PayPal terms, published and facilitated by HackerOne, turn the idea of a VDP with safe harbor on its head. The company “commits that, if we conclude, in our sole discretion, [emphasis ours] that a disclosure respects and meets all the guidelines of these Program Terms and the PayPal Agreements, PayPal will not bring a private action against you or refer a matter for public inquiry.”

The only way to meet their “sole discretion” decision of safe harbor is if you agree to their NDA. “By providing a Submission or agreeing to the Program Terms, you agree that you may not publicly disclose your findings or the contents of your Submission to any third parties in any way without PayPal’s prior written approval.”

HackerOne underscores that safe harbor can be contingent on agreeing to program terms, including signing an NDA, in their disclosure guidelines. Bug finders who don’t wish to sign an NDA to report a security flaw may contact the affected organization directly, but without safe harbor protections.

“Submit directly to the Security Team outside of the Program,” they write. “In this situation, Finders are advised to exercise good judgement as any safe harbor afforded by the Program Policy may not be available.”

[…]

Security researchers concerned about safe harbor protection should not rest easy with most safe harbor language, Electronic Frontier Foundation (EFF) Senior Staff Attorney Andrew Crocker tells CSO. “The terms of many bug bounty programs are often written to give the company leeway to determine ‘in its sole discretion’ whether a researcher has met the criteria for a safe harbor,” Crocker says. “That obviously limits how much comfort researchers can take from the offer of a safe harbor.”

“EFF strongly believes that security researchers have a First Amendment right to report their research and that disclosure of vulnerabilities is highly beneficial,” Crocker adds. In fact, many top security researchers refuse to participate on bug bounty platforms because of required NDAs.

[…]

Health insurance in the US is typically provided by employers to employees, not to independent contractors. However, legal experts tell CSO that by classifying bug hunters as independent contractors, the bug bounty platforms violate both California and US federal labor law.

California AB 5, the Golden State’s new law protecting “gig economy” workers, which came into effect in January 2020, clearly applies to bug bounty hunters working for HackerOne, Bugcrowd and Synack, Leanna Katz, an LLM candidate at Harvard Law School researching legal tests that distinguish between independent contractors and employees, tells CSO.

[…]

“My legal analysis suggests those workers [on bug bounty platforms] should at least be getting minimum wage, overtime compensation, and unemployment insurance,” Dubal tells CSO. “That is so exploitative and illegal,” she adds, saying that “under federal law it is conceivable that not just HackerOne but the client is a joint employer [of bug finders]. There might be liability for companies that use [bug bounty platform] services.”

“Finders are not employees,” Rice says, a sentiment echoed by Bugcrowd founder Ellis and Synack founder Jay Kaplan. Synack’s response is representative of all three platforms: “Like many companies in California, we’re closely monitoring how the state will apply AB 5, but we have a limited number of security researchers based in California and they represent only a fractional percentage of overall testing time,” a Synack representative tells CSO.

Using gig economy platform workers to discover and report security flaws may also have serious GDPR consequences when a security researcher discovers a data breach.

Bug bounty platforms may violate GDPR

When is a data breach not a data breach?

When a penetration testing consultancy with vetted employees discovers the exposed data.

A standard penetration testing engagement contract includes language that protects the penetration testers — in short, it’s not a crime if someone asks you to break into their building or corporate network on purpose, and signs a contract indemnifying you.

This includes data breaches discovered by penetration testers. Since the pen testers are brought under the umbrella of the client, say “Company X,” any exposed Company X data they discover is not considered publicly exposed; legally it is the same as a Company X employee discovering the breach, so GDPR’s data breach notification rules don’t come into play.

What about unvetted bug bounty hunters who discover a data breach as part of a bug bounty program? According to Joan Antokol, a GDPR expert, the EU’s data breach notification regulation applies to bug bounty platforms. Antokol is partner at Park Legal LLC and a longstanding member of the International Working Group on Data Protection in Technology (IWGDPT), which is chaired by the Berlin Data Protection Commissioner. She works closely with GDPR regulators.

“If a free agent hacker who signed up for a project via bug bounty companies to try to find vulnerabilities in the electronic systems of a bug bounty client (often a multinational company), was, in fact, able to access company personal data of the multinational via successful hacking into their systems,” she tells CSO, “the multinational (data controller) would have a breach notification obligation under the GDPR and similar laws of other countries.”

[…]

ISO 29147 standardizes how to receive security bug reports from an outside reporter for the first time and how to disseminate security advisories to the public.

ISO 30111 documents internal digestion of bug reports and remediation within an affected software maker. ISO provided CSO with a review copy of both standards, and the language is unambiguous.

These standards make clear that private bug bounty NDAs are not ISO compliant. “When non-disclosure is a required term or condition of reporting bugs via a bug bounty platform, that fundamentally breaks the process of vulnerability disclosure as outlined in ISO 29147,” Moussouris says. “The purpose of the standard is to allow for incoming vulnerability reports and [her emphasis] release of guidance to affected parties.”

ISO 29147 lists four major goals, including “providing users with sufficient information to evaluate risk due to vulnerabilities,” and lists eight different reasons why publishing security advisories is a standardized requirement, including “informing public policy decisions” and “transparency and accountability.” Further, 29147 says that public disclosure makes us all more secure in the long term. “The theory supporting vulnerability disclosure holds that the short-term risk caused by public disclosure is outweighed by longer-term benefits from fixed vulnerabilities, better informed defenders, and systemic defensive improvements.”

Source: Bug bounty platforms buy researcher silence, violate labor laws, critics say | CSO Online

Smart fridges are cool, but after a few short years you could be stuck with a big frosty brick in the kitchen

A report from consumer advocates Which? highlights the shockingly short lifespan of “smart” appliances, with some losing software support after just a few years, despite costing vastly more than “dumb” alternatives.

That lifespan varies between manufacturers: Most vendors were vague, with Beko offering “up to 10 years” and LG saying patches would be issued as required. Samsung said it would offer software support for a maximum of two years, according to the report.

Only one manufacturer, Miele, promised to issue software updates for a full decade after the release of a device, but then Miele tends to make premium-priced products.

[…]

For consumers, that ambiguous (if not outright short) lifespan raises the possibility they could be forced to replace their expensive white goods before they otherwise would. According to the consumer watchdog, fridge-freezers typically last 11 years.

If a manufacturer decides to withdraw software support, or switch off central servers, users could find themselves with a big, frosty brick in their kitchen. In the wider IoT world, there’s precedent for this.

In 2016, owners of the Revolv smart home hub were infuriated after the Google-owned Nest deactivated the servers required for it to work. More recently, Belkin flicked the kill switch on its WeMo NetCam IP cameras, offering refunds only to those users whose devices were still in warranty and had the foresight to keep their receipts.

There’s another cause for concern. Given that smart appliances are essentially computers with a persistent connection to the internet, there’s a risk hackers could co-opt unpatched fridges and dishwashers, turning them into drones in vast botnets.

Again, there’s precedent. The Mirai botnet, for example, was effectively composed of hacked routers and IP cameras.

Source: Smart fridges are cool, but after a few short years you could be stuck with a big frosty brick in the kitchen • The Register

Secure the software development lifecycle with machine learning

At Microsoft, 47,000 developers generate nearly 30,000 bugs a month. These items get stored across over 100 AzureDevOps and GitHub repositories. To better label and prioritize bugs at that scale, we couldn’t just apply more people to the problem. However, large volumes of semi-curated data are perfect for machine learning. Since 2001 Microsoft has collected 13 million work items and bugs. We used that data to develop a process and machine learning model that correctly distinguishes between security and non-security bugs 99 percent of the time and accurately identifies the critical, high-priority security bugs 97 percent of the time. This is an overview of how we did it.
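
The excerpt doesn’t include the model itself, but the approach it describes, training a classifier on historical bug reports to tell security issues from everything else, can be sketched in a few lines of Python. Everything below (the sample titles, the TF-IDF features, the logistic regression classifier) is an illustrative assumption, not Microsoft’s actual pipeline.

# Minimal sketch of classifying bug reports as security vs. non-security.
# Illustrative only: the training data, features, and model here are
# assumptions, not Microsoft's production system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled work items: (title, 1 = security bug, 0 = other).
training_items = [
    ("SQL injection in login form", 1),
    ("Buffer overflow parsing image headers", 1),
    ("Hardcoded credentials in build script", 1),
    ("Button misaligned on settings page", 0),
    ("Crash when saving an empty document", 0),
    ("Typo in error message", 0),
]
titles = [title for title, _ in training_items]
labels = [label for _, label in training_items]

# Bag-of-words TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(titles, labels)

# Score an incoming bug so triage can prioritize likely security issues.
new_bug = "Stack overflow in URL parser"
print(new_bug, "->", model.predict_proba([new_bug])[0][1])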

Source: Secure the software development lifecycle with machine learning – Microsoft Security

Belgian launches scheduled private jet flights to Ibiza

Fly to Ibiza in a private jet for 495 euros, with 25 kilograms of luggage, luxury snacks and a glass of champagne. That is what Limburg aviation entrepreneur Philippe Bodson wants to bring to market from 4 July under the name Flying Executive, on a weekly basis from Brussels.

A scheduled private jet service is not a first in Europe, but the timing is striking. With this concept Bodson, the head of ASL Group, is by his own account rowing against the tide. “It runs counter to every trend in the aviation sector, which is driven by low cost. But it fits perfectly with the new needs of post-corona travel.”

Bodson, who earned his pilot’s license at 34 and then turned his hobby into his profession by founding his own aviation company, is using two Embraer aircraft for the new formula. These are planes with a limited number of seats (30 and 42 respectively) and more legroom (an extra 12 centimeters) than on a regular scheduled flight.

The cabin of those aircraft, with one seat on the left and two on the right, also offers a much better flight experience, he says. “The advantage is that travellers can always sit alone or next to someone they know,” he says. “In times of Covid-19 that gives a more comfortable feeling.”

Source: Belg opent lijnvlucht met private jets naar Ibiza | De Tijd

Guides for Visualizing Reality – and checking on the charts

We like to complain about how data is messy, not in the right format, and how parts don’t make sense. Reality is complicated though. Data comes from those realities. Here are several guides to help with visualizing these realities, which seem especially important these days.

Visualizing Incomplete and Missing Data

We love complete and nicely formatted data. That’s not what we get a lot of the time.

Visualizing Outliers

Step 1: Figure out why the outlier exists in the first place. Step 2: Choose from these visualization options to show the outlier.
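
As a tiny, made-up example of step 2, one common option is to draw the outlier in a contrasting color so it stands out from the rest of the series; the values and colors below are arbitrary.

# Tiny illustration of highlighting an outlier with a contrasting color.
# The values are made up; anything above the cutoff is treated as the outlier.
import matplotlib.pyplot as plt

values = [12, 14, 13, 15, 14, 13, 47, 14, 15, 13]
colors = ["#1f77b4" if v < 30 else "#d62728" for v in values]

plt.bar(range(len(values)), values, color=colors)
plt.title("One way to show an outlier: color it differently")
plt.show()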

Visualizing Differences

Focus on finding or displaying contrasting points, and some visual methods are more helpful than others.

Visualizing Patterns on Repeat

Things have a way of repeating themselves, and it can be useful to highlight these patterns in data.

Source: Guides for Visualizing Reality | FlowingData

Astronomers have found a planet like Earth orbiting a star like the sun

Three thousand light-years from Earth sits Kepler-160, a sun-like star that’s already thought to have three planets in its system. Now researchers think they’ve found a fourth. Planet KOI-456.04, as it’s called, appears similar to Earth in size and orbit, raising new hopes we’ve found perhaps the best candidate yet for a habitable exoplanet that resembles our home world. The new findings bolster the case for devoting more time to looking for planets orbiting stars like Kepler-160 and our sun, where there’s a better chance a planet can receive the kind of illumination that’s amenable to life.

Most exoplanet discoveries so far have been made around red dwarf stars. This isn’t totally unexpected; red dwarfs are the most common type of star out there. And our main method for finding exoplanets involves looking for stellar transits—periodic dips in a star’s brightness as an orbiting object passes in front of it. This is much easier to do for dimmer stars like red dwarfs, which are smaller than our sun and emit more of their energy as infrared radiation.
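
To make the transit idea concrete, here is a toy sketch that builds a synthetic light curve and flags the periodic dips an orbiting planet would cause. All the numbers (noise level, period, transit depth) are invented for illustration and have nothing to do with Kepler-160.

# Toy illustration of the transit method: look for periodic dips in brightness.
# Synthetic data only; real transit searches are far more sophisticated.
import numpy as np

rng = np.random.default_rng(0)
time = np.arange(0.0, 400.0, 0.5)                  # observation times in days
flux = 1.0 + rng.normal(0.0, 0.0002, time.size)    # normalized star brightness

# Invented transit parameters, not values for any real planet.
period, duration, depth = 100.0, 1.0, 0.002
in_transit = (time % period) < duration
flux[in_transit] -= depth                          # planet blocks a little light

# Flag samples that dip well below the noise level.
dip_times = time[flux < 1.0 - 5 * 0.0002]
print("candidate transit times (days):", dip_times)

A set of dips that repeats at a fixed interval is what points to an orbiting body rather than a one-off fluctuation in the star’s brightness.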

[…]

Data on the new exoplanet orbiting Kepler-160, published in Astronomy & Astrophysics on Thursday, points to a different situation entirely. From what researchers can tell, KOI-456.04 looks to be less than twice the size of Earth and is apparently orbiting Kepler-160 at about the same distance as Earth orbits the sun (one complete orbit takes 378 days). Perhaps most important, it receives about 93% as much light as Earth gets from the sun.

This is critical, because one of the biggest obstacles to habitability around red dwarf stars is they can emit a lot of high-energy flares and radiation that could fry a planet and any life on it. By contrast, stars like the sun—and Kepler-160, in theory—are more stable and suitable for the evolution of life.

[…]

Right now the researchers say it’s 85% probable KOI-456.04 is an actual planet. But it could still be an artifact of Kepler’s instruments or the new analysis—an object needs to pass a threshold of 99% to be a certified exoplanet. Getting that level of certainty will require direct observations. The instruments on NASA’s upcoming James Webb Space Telescope are expected to be up to the task, as are those on ESA’s PLATO space telescope, due to launch in 2026.

Source: Astronomers have found a planet like Earth orbiting a star like the sun | MIT Technology Review

Brave Browser Mistake Adds Its Referrer Code For Cryptocurrency Sites – quite a big oops also for privacy

The following report appeared on Yahoo! Finance: Privacy-focused browser Brave was found to autocomplete several websites and keywords in its address bar with an affiliate code. Shortly after a user published his findings, Brave CEO and co-founder Brendan Eich addressed the incident and called it “a mistake we’re correcting.” Eich said that while Brave is a Binance affiliate [a cryptocurrency exchange], the browser’s autocompleting feature should not have added any new affiliate codes.

“The autocomplete default was inspired by search query clientid attribution that all browsers do, but unlike keyword queries, a typed-in URL should go to the domain named, without any additions,” Eich wrote in the thread. “Sorry for this mistake — we are clearly not perfect, but we correct course quickly,” he added.

Android Police reports the mistake occurred more than 10 weeks ago — and that referrer codes were also included for other cryptocurrency-related sites: The browser’s GitHub repository reveals the functionality was first added on March 25th, and the current list of sites includes Binance, Coinbase, Ledger, and Trezor. Brave Software receives a kickback for purchases/accounts made with those services — for example, Coinbase says that when you refer a new customer to the service, you can earn 50% of their fees for the first three months.

The nature of these affiliate programs also allows the referrer — in this case, Brave Software — to view some amount of data about the customers who sign up with the code. Coinbase’s program provides “direct access to your campaign’s performance data,” while Trezor offers a “detailed overview of purchases.”
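
To make concrete what “adding an affiliate code” to a typed-in URL means, here is a minimal sketch of the kind of autocomplete mapping at issue. The referral parameters and codes are hypothetical placeholders; only the domain names come from the report.

# Sketch of an address-bar autocomplete table that appends an affiliate code.
# The domains are examples named in the report; the "ref" parameter and value
# are hypothetical placeholders, not Brave's actual codes.
AFFILIATE_SUGGESTIONS = {
    "binance.us": "https://www.binance.us/?ref=EXAMPLE123",
    "coinbase.com": "https://www.coinbase.com/join/EXAMPLE123",
}

def autocomplete(typed: str) -> str:
    """Return the address-bar suggestion for what the user typed."""
    # The behavior critics objected to: the suggestion is not just the domain
    # the user typed, but a URL carrying a referral identifier.
    return AFFILIATE_SUGGESTIONS.get(typed, "https://" + typed + "/")

print(autocomplete("binance.us"))   # lands on an affiliate-tagged URL
print(autocomplete("example.org"))  # other domains pass through unchanged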

Brave CEO and co-founder Brendan Eich (who also created the JavaScript programming language) tweeted, “For what it’s worth there’s a setting to disable the autocomplete defaults that add affiliate codes, in brave://settings first page. Current plan is to flip default to off as shown here. You can disable ahead of our release schedule if you want to.

“Good to hear from supporters who’ll enable it.”

Source: Brave Browser Mistake Adds Its Referrer Code For Cryptocurrency Sites – Slashdot