The Linkielist

Linking ideas with the world

8 of world’s top tech companies pwned for years by China

Eight of the world’s biggest technology service providers were hacked by Chinese cyber spies in an elaborate and years-long invasion, Reuters found. The invasion exploited weaknesses in those companies, their customers, and the Western system of technological defense.

[…]

The hacking campaign, known as “Cloud Hopper,” was the subject of a U.S. indictment in December that accused two Chinese nationals of identity theft and fraud. Prosecutors described an elaborate operation that victimized multiple Western companies but stopped short of naming them. A Reuters report at the time identified two: Hewlett Packard Enterprise and IBM.

Yet the campaign ensnared at least six more major technology firms, touching five of the world’s 10 biggest tech service providers.

Also compromised by Cloud Hopper, Reuters has found: Fujitsu, Tata Consultancy Services, NTT Data, Dimension Data, Computer Sciences Corporation and DXC Technology. HPE spun off its services arm in a merger with Computer Sciences Corporation in 2017 to create DXC.

Waves of hacking victims emanate from those six plus HPE and IBM: their clients. Ericsson, which competes with Chinese firms in the strategically critical mobile telecoms business, is one. Others include travel reservation system Sabre, the American leader in managing plane bookings, and the largest shipbuilder for the U.S. Navy, Huntington Ingalls Industries, which builds America’s nuclear submarines at a Virginia shipyard.

“This was the theft of industrial or commercial secrets for the purpose of advancing an economy,” said former Australian National Cyber Security Adviser Alastair MacGibbon. “The lifeblood of a company.”

[…]

The corporate and government response to the attacks was undermined as service providers withheld information from hacked clients, out of concern over legal liability and bad publicity, records and interviews show. That failure, intelligence officials say, calls into question Western institutions’ ability to share information in the way needed to defend against elaborate cyber invasions. Even now, many victims may not be aware they were hit.

The campaign also highlights the security vulnerabilities inherent in cloud computing, an increasingly popular practice in which companies contract with outside vendors for remote computer services and data storage.

[…]

For years, the company’s predecessor, technology giant Hewlett Packard, didn’t even know it had been hacked. It first found malicious code stored on a company server in 2012. The company called in outside experts, who found infections dating to at least January 2010.

Hewlett Packard security staff fought back, tracking the intruders, shoring up defenses and executing a carefully planned expulsion to simultaneously knock out all of the hackers’ known footholds. But the attackers returned, beginning a cycle that continued for at least five years.

The intruders stayed a step ahead. They would grab reams of data before planned eviction efforts by HP engineers. Repeatedly, they took whole directories of credentials, a brazen act netting them the ability to impersonate hundreds of employees.

The hackers knew exactly where to retrieve the most sensitive data and littered their code with expletives and taunts. One hacking tool contained the message “FUCK ANY AV” – referencing their victims’ reliance on anti-virus software. The name of a malicious domain used in the wider campaign appeared to mock U.S. intelligence: “nsa.mefound.com”

Then things got worse, documents show.

After a 2015 tip-off from the U.S. Federal Bureau of Investigation about infected computers communicating with an external server, HPE combined three probes it had underway into one effort called Tripleplay. Up to 122 HPE-managed systems and 102 systems designated to be spun out into the new DXC operation had been compromised, a late 2016 presentation to executives showed.

[…]

According to Western officials, the attackers were multiple Chinese government-backed hacking groups. The most feared was known as APT10 and directed by the Ministry of State Security, U.S. prosecutors say. National security experts say the Chinese intelligence service is comparable to the U.S. Central Intelligence Agency, capable of pursuing both electronic and human spying operations.

[…]

It’s impossible to say how many companies were breached through the service provider that originated as part of Hewlett Packard, then became Hewlett Packard Enterprise and is now known as DXC.

[…]

HP management only grudgingly allowed its own defenders the investigation access they needed and cautioned against telling Sabre everything, the former employees said. “Limiting knowledge to the customer was key,” one said. “It was incredibly frustrating. We had all these skills and capabilities to bring to bear, and we were just not allowed to do that.”

[…]

The threat also reached into the U.S. defense industry.

In early 2017, HPE analysts saw evidence that Huntington Ingalls Industries, a significant client and the largest U.S. military shipbuilder, had been penetrated by the Chinese hackers, two sources said. Computer systems owned by a subsidiary of Huntington Ingalls were connecting to a foreign server controlled by APT10.

During a private briefing with HPE staff, Huntington Ingalls executives voiced concern the hackers could have accessed data from its biggest operation, the Newport News, Va., shipyard where it builds nuclear-powered submarines, said a person familiar with the discussions. It’s not clear whether any data was stolen.

[…]

Like many Cloud Hopper victims, Ericsson could not always tell what data was being targeted. Sometimes, the attackers appeared to seek out project management information, such as schedules and timeframes. Another time they went after product manuals, some of which were already publicly available.

[…]

much of Cloud Hopper’s activity has been deliberately kept from public view, often at the urging of corporate victims.

In an effort to keep information under wraps, security staff at the affected managed service providers were often barred from speaking even to other employees not specifically added to the inquiries.

In 2016, HPE’s office of general counsel for global functions issued a memo about an investigation codenamed White Wolf. “Preserving confidentiality of this project and associated activity is critical,” the memo warned, stating without elaboration that the effort “is a sensitive matter.” Outside the project, it said, “do not share any information about White Wolf, its effect on HPE, or the activities HPE is taking.”

The secrecy was not unique to HPE. Even when the government alerted technology service providers, the companies would not always pass on warnings to clients, Jeanette Manfra, a senior cybersecurity official with the U.S. Department of Homeland Security, told Reuters.

Source: Stealing Clouds

Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites (note: there are lots of them influencing your unconscious to buy!)

Dark patterns are user interface design choices that benefit an online service by coercing, steering, or deceiving users into making unintended and potentially harmful decisions. We present automated techniques that enable experts to identify dark patterns on a large set of websites. Using these techniques, we study shopping websites, which often use dark patterns to influence users into making more purchases or disclosing more information than they would otherwise. Analyzing ∼53K product pages from ∼11K shopping websites, we discover 1,841 dark pattern instances, together representing 15 types and 7 categories. We examine the underlying influence of these dark patterns, documenting their potential harm on user decision-making. We also examine these dark patterns for deceptive practices, and find 183 websites that engage in such practices. Finally, we uncover 22 third-party entities that offer dark patterns as a turnkey solution. Based on our findings, we make recommendations for stakeholders including researchers and regulators to study, mitigate, and minimize the use of these patterns.

Dark patterns [31,47] are user interface design choices that benefit an online service by coercing, steering, or deceiving users into making decisions that, if fully informed and capable of selecting alternatives, they might not make. Such interface design is an increasingly common occurrence on digital platforms including social media [45], shopping websites [31], mobile apps [5,30], and video games [83]. At best, dark patterns annoy and frustrate users. At worst, dark patterns can mislead and deceive users, e.g., by causing financial loss [1,2], tricking users into giving up vast amounts of personal data [45], or inducing compulsive and addictive behavior in adults [71] and children [20].

While prior work [30,31,37,47] has provided a starting point for describing the types of dark patterns, there is no large-scale evidence documenting the prevalence of dark patterns, or a systematic and descriptive investigation of how the various different types of dark patterns harm users. If we are to develop countermeasures against dark patterns, we first need to examine where, how often, and the technical means by which dark patterns appear, and second, we need to be able to compare and contrast how various dark patterns influence user decision-making. By doing so, we can both inform users about and protect them from such patterns, and, given that many of these patterns are unlawful, aid regulatory agencies in addressing and mitigating their use.

In this paper, we present an automated approach that enables experts to identify dark patterns at scale on the web. Our approach relies on (1) a web crawler, built on top of OpenWPM [24,39]—a web privacy measurement platform—to simulate a user browsing experience and identify user interface elements; (2) text clustering to extract recurring user interface designs from the resulting data; and (3) inspecting the resulting clusters for instances of dark patterns. We also develop a novel taxonomy of dark pattern characteristics so that researchers and regulators can use descriptive and comparative terminology to understand how dark patterns influence user decision-making.

While our automated approach generalizes, we focus this study on shopping websites. Dark patterns are especially common on shopping websites, used by an overwhelming majority of the American public [75], where they trick users into signing up for recurring subscriptions and making unwanted purchases, resulting in concrete financial loss. We use our web crawler to visit the ∼11K most popular shopping websites worldwide, and from the resulting analysis create a large data set of dark patterns and document their prevalence. In doing so, we discover several new instances and variations of previously documented dark patterns [31,47]. We also classify the dark patterns we encounter using our taxonomy of dark pattern characteristics.
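Step (2) of that pipeline is easy to prototype. Below is a minimal sketch of clustering recurring interface text with TF-IDF character n-grams and DBSCAN from scikit-learn; this is not the authors’ actual implementation (they crawl with OpenWPM and use their own clustering setup), and the example strings are invented:

```python
# A minimal sketch of step (2): cluster recurring UI strings scraped from
# product pages so an expert can review each cluster for dark patterns.
# Not the paper's pipeline; the strings below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN

ui_strings = [
    "Only 3 left in stock!",
    "Only 2 left in stock!",
    "Hurry, only 5 left in stock!",
    "12 people are viewing this right now",
    "14 people are viewing this right now",
    "Free shipping on orders over $50",
]

# Character n-grams group near-duplicate templates that differ only in numbers
features = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)).fit_transform(ui_strings)
labels = DBSCAN(eps=0.8, min_samples=2, metric="cosine").fit_predict(features)

for label, text in sorted(zip(labels, ui_strings)):
    print(label, text)  # -1 marks noise; a shared label marks a candidate recurring pattern
```

An expert then eyeballs each cluster once, instead of reading ∼53K pages, which is what makes the measurement tractable at scale.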

We have five main findings:

• We discovered 1,841 instances of dark patterns on shopping websites, which together represent 15 types of dark patterns and 7 broad categories.

• These 1,841 dark patterns were present on 1,267 of the ∼11K shopping websites (∼11.2%) in our data set. Shopping websites that were more popular, according to Alexa rankings [9], were more likely to feature dark patterns. This represents a lower bound on the number of dark patterns on these websites, since our automated approach only examined text-based user interfaces on a sample of product pages per website.

• Using our taxonomy of dark pattern characteristics, we classified the dark patterns we discover on the basis of whether they lead to an asymmetry of choice, are covert in their effect, are deceptive in nature, hide information from users, and restrict choice. We also map the dark patterns in our data set to the cognitive biases they exploit. These biases collectively describe the consumer psychology underpinnings of the dark patterns we identified.

• In total, we uncovered 234 instances of deceptive dark patterns across 183 websites. We highlight the types of dark patterns we discovered that rely on consumer deception.

• We identified 22 third-party entities that provide shopping websites with the ability to create dark patterns on their sites. Two of these entities openly advertised practices that enable deceptive messages.

[…]

We developed a taxonomy of dark pattern characteristics that allows researchers, policy-makers and journalists to have a descriptive, comprehensive, and comparative terminology for understanding the potential harm and impact of dark patterns on user decision-making. Our taxonomy is based upon the literature on online manipulation [33,74,81] and dark patterns highlighted in previous work [31,47], and it consists of the following five dimensions, each of which poses a possible barrier to user decision-making:

• Asymmetric: Does the user interface design impose unequal weights or burdens on the available choices presented to the user in the interface? (We narrow the scope of asymmetry to refer only to explicit choices in the interface.) For instance, a website may present a prominent button to accept cookies but hide the opt-out button on another page.

• Covert: Is the effect of the user interface design choice hidden from users? A website may develop interface design to steer users into making specific purchases without their knowledge. Often, websites achieve this by exploiting users’ cognitive biases, which are deviations from rational behavior justified by some “biased” line of reasoning [50]. In a concrete example, a website may leverage the Decoy Effect [51] cognitive bias, in which an additional choice—the decoy—is introduced to make certain other choices seem more appealing. Users may fail to recognize that the decoy’s presence is merely to influence their decision-making, making its effect hidden from users.

• Deceptive: Does the user interface design induce false beliefs either through affirmative misstatements, misleading statements, or omissions? For example, a website may offer a discount to users that appears to be limited-time, but actually repeats when they visit the site again. Users may be aware that the website is trying to offer them a deal or sale; however, they may not realize that the influence is grounded in a false belief—in this case, because the discount is recurring. This false belief affects users’ decision-making, i.e., they may act differently if they knew that this sale is repeated.

• Hides Information: Does the user interface obscure or delay the presentation of necessary information to the user? For example, a website may not disclose, hide, or delay the presentation of information about charges related to a product from users.

• Restrictive: Does the user interface restrict the set of choices available to users? For instance, a website may only allow users to sign up for an account with existing social media accounts such as Facebook or Google, so it can gather more information about them.

In Section 5, we also draw an explicit connection between each dark pattern we discover and the cognitive biases they exploit. The biases we refer to in our findings are:
(1) Anchoring Effect [77]: The tendency for individuals to rely too heavily on an initial piece of information—the “anchor”—in future decisions.
(2) Bandwagon Effect [72]: The tendency for individuals to value something more because others seem to value it.
(3) Default Effect [53]: The tendency of individuals to stick with options that are assigned to them by default, due to inertia in the effort required to change the option.
(4) Framing Effect [78]: A phenomenon in which individuals may reach different decisions from the same information depending on how it is presented or “framed”.
(5) Scarcity Bias [62]: The tendency of individuals to place a higher value on things that are scarce.
(6) Sunk Cost Fallacy [28]: The tendency for individuals to continue an action if they have invested resources (e.g., time and money) into it, even if that action would make them worse off.
[…]
We discovered a total of 22 third-party entities, embedded in 1,066 of the 11K shopping websites in our data set, and in 7,769 of the Alexa top million websites. We note that the prevalence figures from the Princeton Web Census Project data should be taken as a lower bound since their crawls are limited to home pages of websites. […] we discovered that many shopping websites only embedded them in their product—and not home—pages, presumably for functionality and performance reasons.

[…]
Many of the third parties advertised practices that appeared to be—and sometimes unambiguously were—manipulative: “[p]lay upon [customers’] fear of missing out by showing shoppers which products are creating a buzz on your website” (Fresh Relevance), “[c]reate a sense of urgency to boost conversions and speed up sales cycles with Price Alert Web Push” (Insider), “[t]ake advantage of impulse purchases or encourage visitors over shipping thresholds” (Qubit). Further, Qubit also advertised Social Proof Activity Notifications that could be tailored to users’ preferences and backgrounds.
In some instances, we found that third parties openly advertised the deceptive capabilities of their products. For example, Boost dedicated a web page—titled “Fake it till you make it”—to describing how it could help create fake orders [12]. Woocommerce Notification—a Woocommerce platform plugin—also advertised that it could create fake social proof messages: “[t]he plugin will create fake orders of the selected products” [23]. Interestingly, certain third parties (Fomo, Proof, and Boost) used Social Proof Activity Messages on their own websites to promote their products.
[…]
These practices are unambiguously unlawful in the United States (under Section 5 of the Federal Trade Commission Act and similar state laws [43]), the European Union (under the Unfair Commercial Practices Directive and similar member state laws [40]), and numerous other jurisdictions. We also find practices that are unlawful in a smaller set of jurisdictions. In the European Union, businesses are bound by an array of affirmative disclosure and independent consent requirements in the Consumer Rights Directive [41]. Websites that use the Sneaking dark patterns (Sneak into Basket, Hidden Subscription, and Hidden Costs) on European Union consumers are likely in violation of the Directive. Furthermore, user consent obtained through the Trick Questions and Visual Interference dark patterns does not constitute freely given, informed and active consent as required by the General Data Protection Regulation (GDPR) [42]. In fact, the Norwegian Consumer Council filed a GDPR complaint against Google in 2018, arguing that Google used dark patterns to manipulate users into turning on the “Location History” feature on Android, thus enabling constant location tracking [46].

Source: Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites Draft: June 25, 2019 – dark-patterns.pdf

Mozilla Has a New Tool for Tricking Advertisers Into Believing You’re Filthy Rich

If you notice the ads being served to you are eerily similar to stuff you were just browsing online, it’s not all in your head, and it’s the insidious truth of existing online without installing a bunch of browser extensions. But there’s now a tool that, while comically absurd in execution, can stick it to the man (advertisers) by effectively disguising your true interests. Hope you like tabs.

The tool, called Track THIS, was developed by the Mozilla Firefox folks and lets you pick one of four profiles—Hypebeast, Filthy Rich, Doomsday, or Influencer. You’ll then allow the tool to open 100 tabs based on the associated profile type. Data brokers and advertisers build a profile on you based on how you navigate the internet, which includes the webpages you visit. So whichever one of these personalities you choose will, theoretically, be how advertisers view you, which in turn will influence the type of ads you see.
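Mechanically there’s nothing magic about it: the tool simply opens a pile of profile-themed pages so that the trackers embedded on those sites record the visits. A minimal sketch of the same effect in Python (the URLs are invented placeholders, not the tool’s actual list):

```python
# What Track THIS effectively does: open many themed pages so the trackers
# embedded on them record the visits. URLs here are invented placeholders.
import webbrowser

filthy_rich_urls = [
    "https://example.com/yachts",
    "https://example.com/luxury-watches",
    "https://example.com/private-jets",
    # ...the real tool opens ~100 of these per profile
]

for url in filthy_rich_urls:
    webbrowser.open_new_tab(url)  # each visit feeds the advertisers' profile of you
```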

I tried out both the Filthy Rich and Doomsday Prepper profiles. It took a few minutes for all 100 tabs to open up for each on Chrome. (If you’re on a computer that doesn’t have much RAM, just know that you might have to restart after everything freezes.) For the former, there were a lot of yacht sites, luxury designers, stock market sites, expensive watches, some equestrian real estate brokers, a page to sign up for a Mastercard Gold Card, and a page to book a room at the MGM Grand. For the latter, links to survival supplies and checklists, tents, mylar blankets, doomsday movies, and a lot (a lot) of conspiracy theories. I’m about to get served some ads for some luxury-ass Hazmat suits.

Screenshot: Melanie Ehrenkranz

As Mozilla noted in a blog post announcing the tool, it’ll likely only work as intended for a few days and then will revert back to showing you ads more in tune with your actual preferences. “This will show you ads for products you might not be interested in at all, so it’s really just throwing off brands who want to advertise to a very specific type of person,” the company wrote. “You’ll still be seeing ads. And eventually, if you just use the internet as you typically would day to day, you’ll start seeing ads again that align more closely to your normal browsing habits.”

Of course, you’re probably not going to fire up 100 tabs routinely to trick advertisers—the tool is more of a brilliantly ridiculous nod to the lengths we have to go to only temporarily be just a little less intimately targeted.

Source: Mozilla Has a New Tool for Tricking Advertisers Into Believing You’re Filthy Rich

SpaceX launches successfully but still can’t land – explody centre stage and only half a fairing caught

Launch occurred at 0630 UTC on 25 June and the side boosters of the heavy lifter were shut down and separated from the centre core approximately 2 minutes 30 seconds later. The boosters, previously used for the last Falcon Heavy launch, headed back to briefly light up Landing Zones 1 and 2 with a synchronised touchdown.

The remaining Falcon 9 first stage continued its burn for another minute before it too was shut down and separated from the second stage of the Falcon Heavy.

Unlike the side boosters, the centre core was faced with what the SpaceX PAO breathlessly described as “the most difficult landing we’ve had to date”, with the spent booster coming in fast towards the drone ship Of Course I Still Love You, which was stationed twice as far out into the North Atlantic Ocean (from Port Canaveral) as usual.

Not that anything involving landing the first stage of an orbital booster on its end atop a platform at sea should ever be described as something so mundane as “usual”.

SpaceX has yet to successfully recover a Falcon Heavy centre stage. The maiden launch of the rocket saw the stage undergo a rapid disassembly after its engines failed to reignite to slow the thing down. The second did land, but subsequently toppled over.

Third time was, alas, not the charm. While the engines (the centre and two extra) ignited as planned, cameras on the drone ship captured the returning first stage appearing to miss the barge before creating its own night-into-day moment with a spectacular explosion.

[…]

And the fairing? Much whooping could be heard as SpaceX finally managed to catch one half in the net strung atop Ms Tree (pic here), the ship formerly known as Mr Steven. This was the first time the company has accomplished the feat. The other half will be recovered from the water.

Source: We’ve Falcon caught it! SpaceX finally nets a fairing half after a successful Heavy launch • The Register

Telcos around the world were so severely pwned, they didn’t notice the hackers setting up VPN points

Hackers infiltrated the networks of at least ten cellular telcos around the world, and remained hidden for years, as part of a long-running tightly targeted surveillance operation, The Register has learned. This espionage campaign is still ongoing, it is claimed.

Cyber-spy hunters at US security firm Cybereason told El Reg on Monday the miscreants responsible for the intrusions were, judging from their malware and skills, either part of the infamous Beijing-backed hacking crew dubbed APT10 – or someone operating just like them, perhaps deliberately so.

Whoever it was, the snoops apparently spent the past two or more years inside ten-plus cellphone networks dotted around the planet. In some cases, we’re told, the hackers were able to deploy their own VPN services on the telcos’ infrastructure to gain quick, persistent, and direct access to the carriers rather than hop through compromised internal servers and workstations. These VPN services were not detected by the telcos’ IT staff.

[…]

The undetected VPN deployments underscore just how deeply the hacker crew was able to drill into the unnamed telcos and compromise pretty much everything needed to get the job done. The gang sought access to hundreds of gigabytes of phone records, text messages, device and customer metadata, and location data on hundreds of millions of subscribers.

This was all done, we’re told, to spy on and gather the whereabouts of some 20 to 30 high-value targets – think politicians, diplomats, and foreign agents. The hackers and their masters would thus be able to figure out who their targets have talked to, where they work and stay, and so on.

[…]

To cover their tracks, the hackers would have long periods of inactivity.

“They come in, they do something, and they disappear for one to three months,” said Serper. “Then they come in again, disappear, and so forth.”

Source: What the cell…? Telcos around the world were so severely pwned, they didn’t notice the hackers setting up VPN points • The Register

BGP super-blunder: How Verizon today sparked a ‘cascading catastrophic failure’ that knackered Cloudflare, Amazon, etc

Verizon sent a big chunk of the internet down a black hole this morning – and caused outages at Cloudflare, Facebook, Amazon, and others – after it wrongly accepted a network misconfiguration from a small ISP in Pennsylvania, USA.

For nearly three hours, web traffic that was supposed to go to some of the biggest names online was instead accidentally rerouted through a steel giant based in Pittsburgh.

It all started when new internet routes for more than 20,000 IP address prefixes – roughly two per cent of the internet – were wrongly announced by regional US ISP DQE Communications: this announcement informed the sprawling internet’s backbone equipment to thread netizens’ traffic through DQE and one of its clients, steel giant Allegheny Technologies, a redirection that was then, mindbogglingly, accepted and passed on to the world by Verizon, a trusted major authority on the internet’s highways and byways. This happened because Allegheny is also a customer of Verizon: it too announced the route changes to Verizon, which disseminated them further.

And so, systems around the planet were automatically updated, and connections destined for Facebook, Cloudflare, and others, ended up going through DQE and Allegheny, which buckled under the strain, causing traffic to disappear into a black hole.

[Diagram: how network routes were erroneously announced to Verizon via DQE and Allegheny. Source: Cloudflare]

Internet engineers blamed a piece of automated networking software – a BGP optimizer built by Noction – that was used by DQE to improve its connectivity. And even though these kinds of misconfigurations happen every day, there is significant frustration and even disbelief that a US telco as large as Verizon would pass on this amount of incorrect routing information.
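The reason a leak like this is so destructive is baked into how BGP selects routes: routers prefer the most specific matching prefix, so when an optimizer slices a big block into smaller announcements and those announcements escape, they win over the legitimate route everywhere they propagate. A toy longest-prefix-match lookup makes the point (the prefixes and labels are illustrative, not the routes actually leaked that day):

```python
import ipaddress

# Toy routing table: a legitimate aggregate and a leaked, more-specific slice.
routes = {
    ipaddress.ip_network("203.0.113.0/24"): "legitimate origin (content network)",
    ipaddress.ip_network("203.0.113.0/25"): "leaked more-specific (via DQE/Allegheny)",
}

def best_route(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    # BGP's longest-prefix match: the most specific announcement always wins,
    # which is why the leaked routes pulled traffic away from the real ones.
    return routes[max(matches, key=lambda net: net.prefixlen)]

print(best_route("203.0.113.10"))  # -> the leaked more-specific route
```

This is also why prefix-limit and route filters on customer sessions matter: a transit provider that drops unexpected more-specifics from a customer stops the leak at the first hop instead of exporting it to the world.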

Source: BGP super-blunder: How Verizon today sparked a ‘cascading catastrophic failure’ that knackered Cloudflare, Amazon, etc • The Register

When Myspace Was King, Employees Abused a Tool Called ‘Overlord’ to Spy on Users

During the social network’s heyday, multiple Myspace employees abused an internal company tool to spy on users, in some cases including ex-partners, Motherboard has learned.

Named ‘Overlord,’ the tool allowed employees to see users’ passwords and their messages, according to multiple former employees. While the tool was originally designed to help moderate the platform and allow Myspace to comply with law enforcement requests, multiple sources said the tool was used for illegitimate purposes by employees who accessed Myspace user data without authorization to do so.

“It was basically an entire backdoor to the Myspace platform,” one of the former employees said of Overlord. (Motherboard granted five former Myspace employees anonymity to discuss internal Myspace incidents.)

[…]

The existence and abuse of Overlord, which was not previously reported, shows that since the earliest days of social media, sensitive user data and communications have been vulnerable to employees of huge platforms. In some cases, user data has been maliciously accessed, a problem that companies like Facebook and Snapchat have also faced.

[…]

“Every company has it,” Hemanshu Nigam, who was Myspace’s Chief Security Officer from 2006 to 2010, said in a phone interview referring to such administration tools. “Whether it’s for dealing with abuse, or responding to law enforcement or civil requests, or for managing a user’s account because they’re raising some type of issue with it.”

[…]

Even though social media platforms may need a tool like this for legitimate law enforcement purposes, four former Myspace workers said the company fired employees for abusing Overlord.

“The tool was used to gain access to a boyfriend/girlfriend’s login credentials,” one of the sources added. A second source wasn’t sure if the abuse did target ex-partners, but said they assumed so.

“Myspace, the higher ups, were able to cross reference the specific policy enforcement agent with their friends on their Myspace page to see if they were looking up any of their contacts or ex-boyfriends/girlfriends,” that former employee said, explaining how Myspace could identify employees abusing their Overlord access.

[…]

“Misuse of user data will result in termination of employment,” the spokesperson wrote.

The Myspace spokesperson added that, today, access is limited to a “very small number of employees,” and that all access is logged and reviewed.

Several of the former employees emphasised the protections in place to mitigate against insider abuse.

“The account access would be searched to see which agents accessed the account. Managers would then take action. Unless the account was previously associated with a support case, that employee was terminated immediately. This was a zero tolerance policy,” one former employee, who worked in a management role, said.

Another former employee said Myspace “absolutely” warned employees about abusing Overlord.

“There were strict access controls; there was training before you were allowed to use the tools; there was also managerial monitoring of how tools were being used; and there was a strict no-second-chance policy, that if you did violate any of the capabilities given to you, you were removed from not only your position, but from the company completely,” Nigam, the former CSO, said.

Nonetheless, the former employees said the tool was still abused.

“Any tool that is written for a specific, very highly privileged purpose can be misused,” Wendy Nather, head of advisory chief information security officers at cybersecurity firm Duo, said in a phone call. “It’s the responsibility of the designer and the developer to put in controls when it’s being built to assume that it could be abused, and to put checks on that.”
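The kind of check Nather describes is cheap to build in from day one. Here is a minimal sketch with entirely hypothetical names (this is not Overlord’s actual design): every privileged lookup requires an open case reference and writes an audit record, which is roughly what let Myspace cross-reference agents’ lookups against their contacts, as the former employee describes above.

```python
import functools
import logging

audit_log = logging.getLogger("admin-tool.audit")
logging.basicConfig(level=logging.INFO)

def audited(action):
    """Require a case ID for every privileged call and log who did what."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(operator, case_id, *args, **kwargs):
            if not case_id:
                raise PermissionError("privileged access requires an open case ID")
            audit_log.info("%s performed %s on %r under case %s",
                           operator, action, args, case_id)
            return func(operator, case_id, *args, **kwargs)
        return wrapper
    return decorator

@audited("view_messages")
def view_messages(operator, case_id, user_id):
    return f"(messages for {user_id})"  # stand-in for the real privileged lookup

print(view_messages("agent42", "CASE-1001", "user-777"))
```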

[…]

Several tech giants and social media platforms have faced their own malicious employee issues. Motherboard previously reported Facebook has fired multiple employees for abusing their data access, including one as recently as last year. Last month, Motherboard revealed Snapchat employees abused their own access to spy on users, and described an internal tool called SnapLion. That tool was also designed to respond to legitimate law enforcement requests before being abused.

Source: When Myspace Was King, Employees Abused a Tool Called ‘Overlord’ to Spy on Users – VICE

U.S. and Iran’s Hackers Are Trading Blows

Chris Krebs, the director of the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency, issued a statement on June 22 following similar warnings from private American cybersecurity firms.

Krebs, whose recently renamed agency is tasked with protecting American critical infrastructure, said CISA is “aware of a recent rise in malicious cyber activity” against American companies and government agencies by Iranian actors.

CISA specifically warned about “wiper” attacks, which not only steal data but destroy it as well. It’s not clear who exactly was targeted.

American operators are targeting Iranians as well, Yahoo News reported on Friday; the news was confirmed by the Washington Post and the New York Times. Iranian officials said the attacks were unsuccessful, while the Americans deemed them “very” effective.

The Americans say they hacked Iranian spies who were allegedly involved in several attacks against oil tankers in the Persian Gulf over recent weeks. The cyberattacks followed a U.S. spy drone being shot down over Iran last week.

Even though President Donald Trump called off a kinetic attack with just minutes to spare last week, there’s little reason to think the overall conflict is over. The U.S. is preparing more hacking plans to target Iran while American businesses are expecting that if tension continues, it’ll be them in the crosshairs.

Cyberwar has fundamentally changed some of the calculus of war. Two decades ago, when the U.S. invaded a pair of countries on the other side of the world, the conflict was largely confined to those countries. Hacking levels the playing field and allows a country like Iran — which would generally not be able to compete with the American military’s traditional superiority — to inflict damage inside the U.S. itself.

Source: U.S. and Iran’s Hackers Are Trading Blows

And this is how monopolies take advantage of Open Source: Google’s plan to fork curl for no reason other than to have its own version

Google is planning to reimplement parts of libcurl, a widely used open-source file transfer library, as a wrapper for Chromium’s networking API – but curl’s lead developer does not welcome the “competition”.

Issue 973603 in the Chromium bug tracker describes libcrurl, “a wrapper library for the libcurl easy interface implemented via Cronet API”.

Cronet is the Chromium network stack, used not only by Google’s browser but also available to Android applications.

The rationale is that:

Implementing libcurl using Cronet would allow developers to take advantage of the utility of the Chrome Network Stack, without having to learn a new interface and its corresponding workflow. This would ideally increase ease of accessibility of Cronet, and overall improve adoption of Cronet by first-party or third-party applications.

The Google engineer also believes that “it may also be desirable to develop a ‘crurl’ tool, which would potentially function as a substitute for the curl command in terminal or similar processes. This would be useful to troubleshoot connection issues or test the functionality of the Chrome Network Stack in a easily [sic] reproducible manner.”

Daniel Stenberg, lead developer of curl, has his doubts:

Getting basic functionality for a small set of use cases should be simple and straight forward. But even if they limit the subset to number of functions and libcurl options, making them work exactly as we have them documented will be hard and time consuming.

I don’t think applications will be able to arbitrarily use either library for a very long time, if ever. libcurl has 80 public functions and curl_easy_setopt alone takes 268 different options!
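For a sense of what the “libcurl easy interface” involves: the entire API revolves around creating a handle, setting options on it, and performing the transfer. Here is a minimal fetch through pycurl, the Python binding over that same easy interface; any wrapper like libcrurl would have to reproduce this handle/setopt/perform contract option by option:

```python
# Minimal use of the libcurl "easy" interface via pycurl: one handle,
# options set with setopt(), then perform().
from io import BytesIO
import pycurl

buf = BytesIO()
handle = pycurl.Curl()                              # curl_easy_init()
handle.setopt(pycurl.URL, "https://example.com")    # curl_easy_setopt(CURLOPT_URL, ...)
handle.setopt(pycurl.FOLLOWLOCATION, True)          # one of the ~268 setopt options
handle.setopt(pycurl.WRITEDATA, buf)                # where the response body goes
handle.perform()                                    # curl_easy_perform()
print(handle.getinfo(pycurl.RESPONSE_CODE))         # curl_easy_getinfo()
handle.close()                                      # curl_easy_cleanup()
```

Each of those setopt constants would need to behave exactly as libcurl documents it, which is Stenberg’s point about the size of the compatibility surface.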

The real issue, though, is not so much Google’s ability to do this – after all, as Stenberg noted: “If they just put two paid engineers on their project they already have more dedicated man power than the original libcurl project does.”

Rather, it is why Google is reimplementing libcurl as a wrapper for its own APIs rather than simply using libcurl and potentially improving it for everyone.

“I think introducing half-baked implementations of the API will cause users grief since it will be hard for users to understand what API it is and how they differ,” Stenberg wrote. He also feels that naming the Chromium versions “libcrurl” and “crurl” will cause confusion as they “look like typos of the original names”.

Stenberg is clear that the Google team is morally and legally allowed to do this, since curl is free and open source under the MIT licence. But he added:

We are determined to keep libcurl the transfer library for the internet. We support the full API and we offer full backwards compatibility while working the same way on a vast amount of different platforms and architectures. Why use a copy when the original is free, proven and battle-tested since years?

Over to you, Google.®

Source: Kids can be so crurl: Lead dev unchuffed with Google’s plan to remake curl in its own image • The Register

Meds prescriptions for 78,000 patients left in a database with no password

A MongoDB database was left open on the internet without a password, and by doing so, exposed the personal details and prescription information for more than 78,000 US patients.

The leaky database was discovered by the security team at vpnMentor, led by Noam Rotem and Ran Locar, who shared their findings exclusively with ZDNet earlier this week.

The database contained information on 391,649 prescriptions for a drug named Vascepa, used for lowering triglycerides (fats) in adults who are on a low-fat and low-cholesterol diet.

Additionally, the database also contained the collective information of over 78,000 patients who were prescribed Vascepa in the past.

Leaked information included patient data such as full names, addresses, cell phone numbers, and email addresses, but also prescription info such as prescribing doctor, pharmacy information, NPI number (National Provider Identifier), NABP E-Profile Number (National Association of Boards of Pharmacy), and more.

[Screenshot of the leaked prescription records. Image: vpnMentor]

According to the vpnMentor team, all the prescription records were tagged as originating from PSKW, the legal name for a company that provides patient and provider messaging, co-pay, and assistance programs for healthcare organizations via a service named ConnectiveRx.
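The underlying failure here is mundane: a mongod instance reachable from the internet with authorization disabled will hand its contents to anyone who connects. A minimal sketch of the kind of check researchers like vpnMentor’s team run (the host is a placeholder; the fix is to bind MongoDB to a private interface and enable security.authorization in mongod.conf):

```python
# A minimal sketch of testing whether a MongoDB server accepts
# unauthenticated access. The host below is a placeholder.
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

client = MongoClient("mongodb://198.51.100.7:27017/", serverSelectionTimeoutMS=3000)
try:
    # On a properly secured server this call fails with an authorization error.
    print("open instance, databases:", client.list_database_names())
except OperationFailure:
    print("server requires authentication (good)")
except ServerSelectionTimeoutError:
    print("server unreachable")
```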

Source: Meds prescriptions for 78,000 patients left in a database with no password | ZDNet

Buyer Beware: Used Nest Cams Can Let People Spy on You

A member of the Facebook Wink Users Group discovered that after selling his Nest cam, he was still able to access images from his old camera—except it wasn’t a feed of his property. Instead, he was tapping into the feed of the new owner, via his Wink account. As the original owner, he had connected the Nest Cam to his Wink smart-home hub, and somehow, even after he reset it, the connection continued.

We decided to test this ourselves and found that, as it happened for the person on Facebook, images from our decommissioned Nest Cam Indoor were still viewable via a previously linked Wink hub account—although instead of a video stream, it was a series of still images snapped every several seconds.

Here’s the process we used to confirm it:

Our Nest cam had recently been signed up to Nest Aware, but the subscription was canceled in the past week. That Nest account was also linked to a Wink Hub 2. Per Nest’s instructions, we confirmed that our Aware subscription was not active, after which we removed our Nest cam from our Nest account—this is Nest’s guidance for a “factory reset” of this particular camera.

[Screenshot: Nest’s instructions for doing a factory reset on the Nest Cam indicate that there is no factory reset button, a common feature on smart-home devices.]

After that, we were unable to access the live stream with either the mobile Nest app or the desktop Nest app, as expected. We also couldn’t access the camera using the Wink app, because the camera was not online. We then created a new Nest account on a new (Android) device that had a new data connection. We followed the steps for adding the Nest Cam Indoor to that new Nest account, and we were able to view a live stream successfully through the Nest mobile app. However, going back to our Wink app, we were also able to view a stream of still images from the Nest cam, despite its being associated with a new Nest account.

In simpler terms: If you buy and set up a used Nest indoor camera that has been paired with a Wink hub, the previous owner may have unfettered access to images from that camera. And we currently don’t know of any cure for this problem.

Source: Buyer Beware: Used Nest Cams Can Let People Spy on You: Reviews by Wirecutter | A New York Times Company

Updated: patch your Nest to fix it!

Hack of U.S. Border Surveillance Contractor Is Way Bigger Than the Government Lets On

Even as Homeland Security officials have attempted to downplay the impact of a security intrusion that reached deep into the network of a federal surveillance contractor, secret documents, handbooks, and slides concerning surveillance technology deployed along U.S. borders are being widely and openly shared online.

A terabyte of torrents seeded by Distributed Denial of Secrets (DDOS)—journalists dispersing records that governments and corporations would rather nobody read—are as of writing being downloaded daily. As of this week, that includes more than 400 GB of data stolen by an unknown actor from Perceptics, a discreet contractor based in Knoxville, Tennessee, that works for Customs and Border Protection (CBP) and is, regardless of whatever U.S. officials say, right now the epicenter of a major U.S. government data breach.

The files include powerpoint presentations, manuals, marketing materials, budgets, equipment lists, schematics, passwords, and other documents detailing Perceptics’ work for CBP and other government agencies for nearly a decade. Tens of thousands of surveillance photographs taken of travelers and their vehicles at the U.S. border are among the first tranches of data to be released. Reporters are digging through the dump and already expanding our understanding of the enormous surveillance apparatus that is being erected on our border.

In a statement last week, CBP insisted that none of the image data had been identified online, even as one headline declared, “Here Are Images of Drivers Hacked From a U.S. Border Protection Contractor.”

“The breach covers a huge amount of data which has, until now, been protected by dozens of Non-Disclosure Agreements and the (b)(4) trade-secrets exemption which Perceptics has demanded DHS apply to all Perceptics information,” DDOS team member Emma Best, who often reports for the Freedom of Information site MuckRock, told Gizmodo.

(Best has also contributed reporting on WikiLeaks for Gizmodo.)

Despite the government’s attempt to downplay the breach, the Perceptics files, she said, “include schematics, plans, and reports for DHS, the DEA, and the Pentagon as well as foreign clients.”

While the files can be viewed online, according to Best, DDOS has experienced nearly a 50 percent spike in traffic from users who’ve opted to download the entire dataset.

“We’re making these files available for public review because they provide an unprecedented and intimate look at the mass surveillance of legal travel, as well as more local surveillance of turnpike and secure facilities,” Best said. “Most importantly they provide a glimpse of how the government and these companies protect our information—or, in some cases, how they fail to.”

Neither CBP nor Perceptics immediately responded to a request for comment.

Source: Hack of U.S. Border Surveillance Contractor Is Way Bigger Than the Government Lets On

Millions of Dell PCs Vulnerable to Flaw in SupportAssist software

Millions of PCs made by Dell and other OEMs are vulnerable to a flaw stemming from a component in pre-installed SupportAssist software. The flaw could enable a remote attacker to completely take over affected devices.

The high-severity vulnerability (CVE-2019-12280) stems from a component in SupportAssist, a proactive monitoring software pre-installed on PCs with automatic failure detection and notifications for Dell devices. That component is made by a company called PC-Doctor, which develops hardware-diagnostic software for various PC and laptop original equipment manufacturers (OEMs).

“According to Dell’s website, SupportAssist is preinstalled on most of Dell devices running Windows, which means that as long as the software is not patched, this vulnerability probably affects many Dell users,” Peleg Hadar, security researcher with SafeBreach Labs – who discovered the flaw – said in a Friday analysis.

Source: Millions of Dell PCs Vulnerable to Flaw in Third-Party Component | Threatpost

Chrome is the biggest snoop of all on your computer or cell phone – so switch browser before there is no alternative any more

You open your browser to look at the Web. Do you know who is looking back at you?

Over a recent week of Web surfing, I peered under the hood of Google Chrome and found it brought along a few thousand friends. Shopping, news and even government sites quietly tagged my browser to let ad and data companies ride shotgun while I clicked around the Web.

This was made possible by the Web’s biggest snoop of all: Google. Seen from the inside, its Chrome browser looks a lot like surveillance software.

Lately I’ve been investigating the secret life of my data, running experiments to see what technology really gets up to under the cover of privacy policies that nobody reads. It turns out, having the world’s biggest advertising company make the most popular Web browser was about as smart as letting kids run a candy shop.

It made me decide to ditch Chrome for a new version of nonprofit Mozilla’s Firefox, which has default privacy protections. Switching involved less inconvenience than you might imagine.

My tests of Chrome vs. Firefox unearthed a personal data caper of absurd proportions. In a week of Web surfing on my desktop, I discovered 11,189 requests for tracker “cookies” that Chrome would have ushered right onto my computer but were automatically blocked by Firefox. These little files are the hooks that data firms, including Google itself, use to follow what websites you visit so they can build profiles of your interests, income and personality.

Chrome welcomed trackers even at websites you would think would be private. I watched Aetna and the Federal Student Aid website set cookies for Facebook and Google. They surreptitiously told the data giants every time I pulled up the insurance and loan service’s log-in pages.

And that’s not the half of it.

Look in the upper right corner of your Chrome browser. See a picture or a name in the circle? If so, you’re logged in to the browser, and Google might be tapping into your Web activity to target ads. Don’t recall signing in? I didn’t, either. Chrome recently started doing that automatically when you use Gmail.

Chrome is even sneakier on your phone. If you use Android, Chrome sends Google your location every time you conduct a search. (If you turn off location sharing it still sends your coordinates out, just with less accuracy.)

Firefox isn’t perfect — it still defaults searches to Google and permits some other tracking. But it doesn’t share browsing data with Mozilla, which isn’t in the data-collection business.

At a minimum, Web snooping can be annoying. Cookies are how a pair of pants you look at in one site end up following you around in ads elsewhere. More fundamentally, your Web history — like the color of your underpants — ain’t nobody’s business but your own. Letting anyone collect that data leaves it ripe for abuse by bullies, spies and hackers.

[…]

Choosing a browser is no longer just about speed and convenience — it’s also about data defaults.

It’s true that Google usually obtains consent before gathering data, and offers a lot of knobs you can adjust to opt out of tracking and targeted advertising. But its controls often feel like a shell game that results in us sharing more personal data.

I felt hoodwinked when Google quietly began signing Gmail users into Chrome last fall. Google says the Chrome shift didn’t cause anybody’s browsing history to be “synced” unless they specifically opted in — but I found mine was being sent to Google and don’t recall ever asking for extra surveillance. (You can turn off the Gmail auto-login by searching “Gmail” in Chrome settings and switching off “Allow Chrome sign-in.”)

After the sign-in shift, Johns Hopkins associate professor Matthew Green made waves in the computer science world when he blogged he was done with Chrome. “I lost faith,” he told me. “It only takes a few tiny changes to make it very privacy unfriendly.”

When you use Chrome, signing into Gmail automatically logs in the browser to your Google account. When “sync” is also on, Google receives your browsing history.

There are ways to defang Chrome, which is much more complicated than just using “Incognito Mode.” But it’s much easier to switch to a browser not owned by an advertising company.

Like Green, I’ve chosen Firefox, which works across phones, tablets, PCs and Macs. Apple’s Safari is also a good option on Macs, iPhones and iPads, and the niche Brave browser goes even further in trying to jam the ad-tech industry.

What does switching to Firefox cost you? It’s free, and downloading a different browser is much simpler than changing phones.

[…]

And as a nonprofit, it earns money when people make searches in the browser and click on ads — which means its biggest source of income is Google. Mozilla’s chief executive says the company is exploring new paid privacy services to diversify its income.

Its biggest risk is that Firefox might someday run out of steam in its battle with the Chrome behemoth. Even though it’s the No. 2 desktop browser, with about 10 percent of the market, major sites could decide to drop support, leaving Firefox scrambling.

If you care about privacy, let’s hope for another David and Goliath outcome.

Source: Google is the biggest snoop of all on your computer or cell phone

Upgrade your memory with a surgically implanted chip!

In a grainy black-and-white video shot at the Mayo Clinic in Minnesota, a patient sits in a hospital bed, his head wrapped in a bandage. He’s trying to recall 12 words for a memory test but can only conjure three: whale, pit, zoo. After a pause, he gives up, sinking his head into his hands.

In a second video, he recites all 12 words without hesitation. “No kidding, you got all of them!” a researcher says. This time the patient had help, a prosthetic memory aid inserted into his brain.

Over the past five years, the U.S. Defense Advanced Research Projects Agency (Darpa) has invested US$77 million to develop devices intended to restore the memory-generation capacity of people with traumatic brain injuries. Last year two groups conducting tests on humans published compelling results.

The Mayo Clinic device was created by Michael Kahana, a professor of psychology at the University of Pennsylvania, and the medical technology company Medtronic Plc. Connected to the left temporal cortex, it monitors the brain’s electrical activity and forecasts whether a lasting memory will be created. “Just like meteorologists predict the weather by putting sensors in the environment that measure humidity and wind speed and temperature, we put sensors in the brain and measure electrical signals,” Kahana says. If brain activity is suboptimal, the device provides a small zap, undetectable to the patient, to strengthen the signal and increase the chance of memory formation. In two separate studies, researchers found the prototype consistently boosted memory 15 per cent to 18 per cent.
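Stripped of the neuroscience, the device Kahana describes is a closed-loop controller: read features from the sensors, predict whether memory encoding will succeed, and stimulate only when the prediction is poor. A schematic sketch of that loop, with entirely hypothetical model and numbers (nothing here is the team’s actual algorithm):

```python
import numpy as np

# Schematic of the closed-loop idea described above; all names and
# numbers are hypothetical, not Kahana's actual per-patient models.

def predict_encoding_success(features: np.ndarray) -> float:
    """Stand-in classifier: probability that the current brain state will
    produce a lasting memory (the real system is trained per patient on
    recorded electrode data)."""
    weights = np.array([0.8, -0.5, 0.3])  # illustrative, not fitted
    return 1.0 / (1.0 + np.exp(-features @ weights))

def closed_loop_step(features: np.ndarray, threshold: float = 0.5) -> bool:
    """Return True (deliver a small stimulation pulse) when the predicted
    chance of successful encoding falls below the threshold."""
    return predict_encoding_success(features) < threshold

# One simulated control step over made-up feature values
print(closed_loop_step(np.array([0.2, 0.9, 0.1])))  # True -> stimulate
```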

The second group performing human testing, a team from Wake Forest Baptist Medical Center in Winston-Salem, N.C., aided by colleagues at the University of Southern California, has a more finely tuned method. In a study published last year, their patients showed memory retention improvement of as much as 37 per cent. “We’re looking at questions like, ‘Where are my keys? Where did I park the car? Have I taken my pills?’ ” says Robert Hampson, lead author of the 2018 study.

To form memories, several neurons fire in a highly specific way, transmitting a kind of code. “The code is different for unique memories, and unique individuals,” Hampson says. By surveying a few dozen neurons in the hippocampus, the brain area responsible for memory formation, his team learned to identify patterns indicating correct and incorrect memory formation for each patient and to supply accurate codes when the brain faltered.

In presenting patients with hundreds of pictures, the group could even recognize certain neural firing patterns as particular memories. “We’re able to say, for example, ‘That’s the code for the yellow house with the car in front of it,’ ” says Theodore Berger, a professor of bioengineering at the University of Southern California who helped develop mathematical models for Hampson’s team.

Both groups have tested their devices only on epileptic patients with electrodes already implanted in their brains to monitor seizures; each implant requires clunky external hardware that won’t fit in somebody’s skull. The next steps will be building smaller implants and getting approval from the U.S. Food and Drug Administration to bring the devices to market. A startup called Nia Therapeutics Inc. is already working to commercialize Kahana’s technology.

Justin Sanchez, who just stepped down as director of Darpa’s biological technologies office, says veterans will be the first to use the prosthetics. “We have hundreds of thousands of military personnel with traumatic brain injuries,” he says. The next group will likely be stroke and Alzheimer’s patients. Eventually, perhaps, the general public will have access—though there’s a serious obstacle to mass adoption. “I don’t think any of us are going to be signing up for voluntary brain surgery anytime soon,” Sanchez says. “Only when these technologies become less invasive, or noninvasive, will they become widespread.”

Source: Upgrade your memory with a surgically implanted chip! – BNN Bloomberg

FYI: Your Venmo transfers with those edgy emojis aren’t private by default. And someone’s put 7m of them into a public DB

Graduate student Dan Salmon has released online seven million Venmo transfers, scraped from the social payment biz in recent months, to call attention to the privacy risks of public transaction data.

Venmo, for the uninitiated, is an app that allows friends to pay each other money for stuff. El Reg‘s Bay Area vultures primarily use it for settling restaurant and bar bills that we have no hope of expensing; one person pays on their personal credit card, and their pals transfer their share via Venmo. It makes picking up the check a lot easier.

Because it’s the 2010s, by default, Venmo makes those transactions public along with attached messages and emojis, sorta like Twitter but for payments, allowing people to pry into strangers’ spending and interactions. Who went out with whom for drinks, who owed someone a sizable debt, who went on vacation, and so on.

“I am releasing this dataset in order to bring attention to Venmo users that all of this data is publicly available for anyone to grab without even an API key,” said Salmon in a post to GitHub. “There is some very valuable data here for any attacker conducting [open-source intelligence] research.”
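Salmon’s point about “without even an API key” is what made the scrape trivial. The sketch below shows the shape of such a scrape against the public feed endpoint as it was reported at the time (https://venmo.com/api/v5/public); the endpoint and response fields are taken from those reports and may no longer work, so treat this purely as a historical illustration:

```python
# Historical illustration of scraping Venmo's then-public, keyless feed.
# Endpoint and response shape are as reported at the time; Venmo has since
# restricted this feed, so do not expect this to run today.
import requests

resp = requests.get("https://venmo.com/api/v5/public", timeout=10)
resp.raise_for_status()
for txn in resp.json().get("data", []):
    # Each public transaction carried real names plus the message/emoji
    print(txn.get("actor", {}).get("name"), "->", txn.get("message"))
```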

[…]

Despite past criticism from privacy advocates and a settlement with the US Federal Trade Commission, Venmo has kept person-to-person purchases public by default.

[…]

Last July, Berlin-based researcher Hang Do Thi Duc explored some 200m Venmo transactions from 2017 and set up a website, PublicByDefault.fyi, to peruse the e-commerce data. Her stated goal was to change people’s attitudes about sharing data unnecessarily.

When The Register asked about transaction privacy last year, after a developer created a bot that tweeted Venmo purchases mentioning drugs, a company spokesperson said, “Like on other social networks, Venmo users can choose what they want to share on the Venmo public feed. There are a number of different settings that users can customize when it comes to sharing payments on Venmo.”

The current message from the company is not much different: “Venmo was designed for sharing experiences with your friends in today’s social world, and the newsfeed has always been a big part of this,” a Venmo spokesperson told The Register in an email. “Our users trust us with their money and personal information, and we take this responsibility very seriously.”

“I think Venmo is resisting calls to make their data private because it would go against the entire pitch of the app,” said Salmon. “Venmo is designed to be a ‘social’ app and the more open and social you make things, the more you open yourself to problems.”

Venmo’s privacy policy details all the ways in which customer data is not private.

Source: FYI: Your Venmo transfers with those edgy emojis aren’t private by default. And someone’s put 7m of them into a public DB • The Register

Siemens Gamesa Unveils World’s First Electrothermal Energy Storage System, stores electricity in volcanic rock

Spanish renewable energy giant and offshore wind energy leader Siemens Gamesa Renewable Energy last week inaugurated operations of its electrothermal energy storage system which can store up to 130 megawatt-hours of electricity for a week in volcanic rock.

[…]

The heat storage facility consists of around 1,000 tonnes of volcanic rock, which serves as the storage medium. To charge it, a resistance heater converts electrical energy into hot air, which a blower drives through the rock, heating it to 750°C (1,382°F). When demand requires the stored energy, ETES uses a steam turbine to re-electrify the heat and feed it back into the grid.

The new ETES facility in Hamburg-Altenwerder can store up to 130 MWh of thermal energy for a week, and storage capacity remains constant throughout the charging cycles.
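A quick sanity check of those figures is possible with the sensible-heat formula E = m·c·ΔT. The specific heat below is a typical handbook value for basaltic rock, not a number from Siemens Gamesa, so this is a rough plausibility check rather than the plant’s actual spec.

# Back-of-envelope check: sensible heat stored in rock, E = m * c * dT.
mass_kg = 1_000 * 1_000        # ~1,000 tonnes of volcanic rock
c_rock_kj = 0.84               # kJ/(kg*K), assumed typical for basalt
delta_t = 750 - 20             # K: charged at 750 C against ~20 C ambient

energy_mwh = mass_kg * c_rock_kj * delta_t / 3.6e6   # 1 MWh = 3.6e6 kJ
print(f"~{energy_mwh:.0f} MWh thermal")              # ~170 MWh

That lands comfortably above the quoted 130 MWh, which is consistent once you allow that the discharge side cannot usefully extract heat all the way down to ambient temperature.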

Source: Siemens Gamesa Unveils World First Electrothermal Energy Storage System | CleanTechnica

Google Calendar was down for hours after major outage

Google Calendar was down for users around the world for nearly three hours earlier today. Calendar users trying to access the service were met with a 404 error message through a browser from around 10AM ET until around 12:40PM ET. Google’s Calendar service dashboard now reveals that issues should be resolved for everyone within the next hour.

“We expect to resolve the problem affecting a majority of users of Google Calendar at 6/18/19, 1:40 PM,” the message reads. “Please note that this time frame is an estimate and may change.” Google Calendar appears to have returned for most users, though. Other Google services such as Gmail and Google Maps appeared to be unaffected during the calendar outage, although Hangouts Meet was reportedly experiencing some difficulties.

Google Calendar’s issues come in the same month as another massive Google outage which saw YouTube, Gmail, and Snapchat taken offline because of problems with the company’s overall Cloud service. At the time, Google blamed “high levels of network congestion in the eastern USA” for the issues.

The outage also came just over an hour after Google’s G Suite Twitter account sent out a tweet promoting Google Calendar’s ability to make scheduling simpler.

Source: Google Calendar was down for hours after major outage

Software below the poverty line – Open Source Developers being exploited

However, I recently met other open source developers that make a living from donations, and they helped widen my perspective. At Amsterdam.js, I heard Henry Zhu speak about sustainability in the Babel project and beyond, and it was a pretty dire picture. Later, over breakfast, Henry and I had a deeper conversation on this topic. In Amsterdam I also met up with Titus, who maintains the Unified project full-time. Meeting with these people I confirmed my belief in the donation model for sustainability. It works. But, what really stood out to me was the question: is it fair?

I decided to collect data from OpenCollective and GitHub, and take a more scientific sample of the situation. The results I found were shocking: there were two clearly sustainable open source projects, but the majority (more than 80%) of projects that we usually consider sustainable are actually receiving income below industry standards or even below the poverty threshold.

What the data says

I picked popular open source projects from OpenCollective, and selected the yearly income data from each. Then I looked up their GitHub repositories, to measure the count of stars, and how many “full-time” contributors they have had in the past 12 months. Sometimes I also looked up the Patreon pages for those few maintainers that had one, and added that data to the yearly income for the project. For instance, it is obvious that Evan You gets money on Patreon to work on Vue.js. These data points allowed me to measure: project popularity (a proportional indicator of the number of users), yearly revenue for the whole team, and team size.
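A minimal sketch of that measurement pipeline is below. The star count comes from GitHub’s public REST API (the stargazers_count field on the repo object); the income figures would have to be read off each project’s OpenCollective or Patreon page by hand, so the dict here holds made-up placeholder numbers, not Staltz’s data.

import requests

def github_stars(repo: str) -> int:
    # GitHub's REST API exposes the star count on the repo object.
    resp = requests.get(f"https://api.github.com/repos/{repo}", timeout=10)
    resp.raise_for_status()
    return resp.json()["stargazers_count"]

# Hypothetical entries: repo -> yearly revenue in USD (placeholders only).
yearly_income = {
    "babel/babel": 100_000,
    "unifiedjs/unified": 30_000,
}

for repo, income in yearly_income.items():
    stars = github_stars(repo)
    print(f"{repo}: {stars} stars, ${income / stars:.2f}/star")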

[…]

Those that work full-time sometimes complement their income with savings or by living in a country with lower costs of living, or both (Sindre Sorhus).

Then, based on the latest StackOverflow developer survey, we know that the low end of developer salaries is around $40k, while the high end is above $100k. That range depicts the industry standard for developers, given their status as knowledge workers, many of whom live in OECD countries. This allowed me to classify the results into four categories (sketched in code after the list):

  • BLUE: 6-figure salary
  • GREEN: 5-figure salary within industry standards
  • ORANGE: 5-figure salary below our industry standards
  • RED: salary below the official US poverty threshold
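That bucketing, as a sketch. One assumption is flagged: the article doesn’t quote the exact poverty figure used, so the cutoff below is the 2019 US HHS guideline for a single person, roughly $12.5k.

POVERTY = 12_500          # assumed: 2019 US HHS guideline, single person
LOW_INDUSTRY = 40_000     # low end of developer salaries (survey)
HIGH_INDUSTRY = 100_000   # high end of developer salaries (survey)

def classify(income_per_maintainer: float) -> str:
    if income_per_maintainer >= HIGH_INDUSTRY:
        return "BLUE"     # 6-figure salary
    if income_per_maintainer >= LOW_INDUSTRY:
        return "GREEN"    # within industry standards
    if income_per_maintainer >= POVERTY:
        return "ORANGE"   # below industry standards
    return "RED"          # below the poverty threshold

print(classify(9_000))    # RED -- the median project in the dataset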

The first chart, below, shows team size and “price” for each GitHub star.

Open source projects, income-per-star versus team size

More than 50% of projects are red: they cannot sustain their maintainers above the poverty line. 31% of the projects are orange, consisting of developers willing to work for a salary that would be considered unacceptable in our industry. 12% are green, and only 3% are blue: Webpack and Vue.js. Income per GitHub star is important: sustainable projects generally have above $2/star. The median value, however, is $1.22/star. Team size is also important for sustainability: the smaller the team, the more likely it can sustain its maintainers.

The median donation per year is $217, which is substantial when understood on an individual level, but in reality it includes sponsorship from companies that also do this for their own marketing purposes.

The next chart shows how revenue scales with popularity.

Open source projects, yearly revenue versus GitHub stars

You can browse the data yourself by accessing this Dat archive with a LibreOffice Calc spreadsheet:

dat://bf7b912fff1e64a52b803444d871433c5946c990ae51f2044056bf6f9655ecbf
[…]

The total amount of money being put into open source is not enough for all the maintainers. If we add up all of the yearly revenue from the projects in this data set, it’s $2.5 million. The median salary is approximately $9k, which is below the poverty line. If that money were split evenly, each maintainer would get roughly $22k, which is still below industry standards.

The core problem is not that open source projects are not sharing the money received. The problem is that, in total numbers, open source is not getting enough money. $2.5 million is not enough. To put this number into perspective, startups typically raise much more than that.

Tidelift has received $40 million in funding to “help open source creators and maintainers get fairly compensated for their work” (quote). They have a team of 27 people, some of them ex-employees of large companies (such as Google and GitHub). They probably don’t receive the lower tier of salaries. Yet many of the open source projects they showcase on their website are below the poverty line in donation income.

[…]

GitHub was bought by Microsoft for $7.5 billion. To make that quantity easier to grok, the amount of money Microsoft paid to acquire GitHub – the company – is more than 3000x what the open source community is getting yearly. In other words, if the open source community saved up every penny of the money they ever received, after a couple thousand years they could perhaps have enough money to buy GitHub jointly.

[…]

If Microsoft GitHub is serious about helping fund open source, they should put their money where their mouth is: donate at least $1 billion to open source projects. Even a mere $1.5 million per year would be enough to make all the projects in this study green. The Matching Fund in GitHub Sponsors is not enough: it gives a maintainer at most $5k in a year, which is not sufficient to raise a maintainer from the poverty threshold up to industry standard.

Source: André Staltz – Software below the poverty line

Unfortunately I’ve been talking about this for years now.

It’s time to make open source open but less free for the big users.

Anyone else find it weird that the bloke tasked with probing tech giants for antitrust abuses used to, um, work for the same tech giants?

The man heading up any potentially US government antitrust probes into tech giants like Apple and Google used to work for… Apple and Google.

In the revolving-door world that is Washington DC, that conflict may not seem like much, but one person isn’t having it: Senator Elizabeth Warren (D-MA) this week sent Makan Delrahim a snotagram in which she took issue with him overseeing tech antitrust efforts.

“I am writing to urge you to recuse yourself from the Department of Justice’s (DOJ) reported antitrust investigations into Google and Apple,” she wrote. “Although you are the chief antitrust attorney in the DoJ, your prior work lobbying the federal government on behalf of these and other companies in antitrust matters compromises your ability to manage or advise on this investigation without real or perceived conflicts of interest.”

Warren then outlines precisely what she means by conflict of interests: “In 2007, Google hired you to lobby federal antitrust officials on behalf of the company’s proposed acquisition of online advertising company DoubleClick, a $3.1 billion merger that the federal government eventually signed off on… You reported an estimated $100,000 in income from Google in 2007.”

It’s not just Google either. “In addition to the investigation into Google, the DoJ will also have jurisdiction over Apple. In both 2006 and 2007, Apple hired you to lobby the federal government on its behalf on patent reform issues,” Warren continues.

She notes: “Federal ethics law requires that individuals recuse themselves from any ‘particular matter involving specific parties’ if ‘the circumstances would cause a reasonable person with knowledge of the relevant facts to question his impartiality in the matter.’ Given your extensive and lucrative previous work lobbying the federal government on behalf of Google and Apple… any reasonable person would surely question your impartiality in antitrust matters…”

This is fine

Delrahim has also done work for a range of other companies including Anthem, Pfizer, Qualcomm, and Caesars, but it’s the fact that he has specific knowledge of, and connections with, the very highest levels of the tech giants – while being in charge of one of the most anticipated antitrust investigations of the past 30 years – that has got people concerned.

This is ridiculous, of course, because Delrahim is a professional and works for whoever hires him. It’s not as if he would do something completely inappropriate like give a speech outside the United States in which he walks through exactly how he would carry out an antitrust investigation into tech giants and the holes that would exist in such an investigation, thereby giving them a clear blueprint to work against.

Because that would be nuts.

He definitely did not do that. What he actually did was talk about how it was possible to investigate tech giants, despite some claiming it wasn’t – which is, you’ll understand, quite the opposite.

“The Antitrust Division does not take a myopic view of competition,” Delrahim said during a speech in Israel this week. “Many recent calls for antitrust reform, or more radical change, are premised on the incorrect notion that antitrust policy is only concerned with keeping prices low. It is well-settled, however, that competition has price and non-price dimensions.”

Instead, he noted: “Diminished quality is also a type of harm to competition… As an example, privacy can be an important dimension of quality. By protecting competition, we can have an impact on privacy and data protection.”

So that’s diminished quality and privacy as lines of attack. Anything else, Makan?

“Generally speaking, an exclusivity agreement is an agreement in which a firm requires its customers to buy exclusively from it, or its suppliers to sell exclusively to it. There are variations of this restraint, such as requirements contracts or volume discounts,” he mused at the Antitrust New Frontiers Conference in Tel Aviv.

Source: Anyone else find it weird that the bloke tasked with probing tech giants for antitrust abuses used to, um, work for the same tech giants? • The Register

So it looks as though he is ignoring most of what makes this behaviour predatory: he’s mainly looking at price, with a passing nod to quality and privacy. Except he’s not really looking at quality and privacy. Or leverage. Or the waterbed effect. Or undercutting. Or product copying. Or vertical integration. Or aggression.

For more on why monopolies are bad, check out


Facing Antitrust Pressure, Google Starts Spinning Its Own Too Big to Fail Argument

In an interview this week with CNN, Google CEO Sundar Pichai attempted to turn antitrust questions around by pointing to what they say is the silver lining of size: Big beats China. In the face of an intensifying push for antitrust action, the argument has been called tech’s version of “too big to fail.”

“Scale does offer many benefits, it’s important to understand that,” Google CEO Sundar Pichai said. “As a company, we sometimes invest five, ten years ahead without necessarily worrying about short term profits. If you think about how technology leadership contributes to leadership on a global economic scale. Big companies are what are investing in AI the most. There are many benefits to taking a long term view which big companies are able to do.”

Pichai, who did allow that scrutiny and competition were ultimately good things, made points that echoed arguments from Facebook CEO Mark Zuckerberg, who put it a lot more frankly.

“I think you have this question from a policy perspective, which is, ‘Do we want American companies to be exporting across the world?’” Zuckerberg said last year. “I think that the alternative, frankly, is going to be the Chinese companies.”

Pichai never outright said the word “China” but he didn’t have to. China’s rising tech industry and increasingly tense relationship with the United States […]

“There are many countries around the world which aspire to be the next Silicon Valley. And they are supporting their companies, too,” Pichai said to CNN. “So we have to balance both. This doesn’t mean you don’t scrutinize large companies. But you have to balance it with the fact that you want big, successful companies as well.”

This has been one of Silicon Valley’s safest fallback arguments since antitrust sentiment began gaining steam in the United States. But the history of American industry offers a serious counterweight.

Columbia Law School professor Tim Wu spent much of 2018 outlining the case for antitrust action. He wrote a book on the subject, The Curse of Bigness: Antitrust in the New Gilded Age, and appeared all over media to make his argument. In an op-ed for the New York Times, Wu called back to the heated Japanese-American tech competition of the 1980s.

IBM faced an unprecedented international challenge in the mainframe market from Japan’s NEC while Sony, Panasonic, and Toshiba made giant leaps forward. The companies had the strong support of the Japanese government.

Wu laid out what happened next:

Had the United States followed the Zuckerberg logic, we would have protected and promoted IBM, AT&T and other American tech giants — the national champions of the 1970s. Instead, the federal government accused the main American tech firms of throttling competition. IBM was subjected to a devastating, 13-year-long antitrust investigation and trial, and the Justice Department broke AT&T into eight pieces in 1984. And indeed, the effect was to weaken some of America’s most powerful tech firms at a key moment of competition with a foreign power.

But something else happened as well. With IBM and AT&T under constant scrutiny, a whole series of industries and companies were born without fear of being squashed by a monopoly. The American software industry, freed from IBM, came to life, yielding companies like Microsoft, Sun and Lotus. Personal computers from Apple and other companies became popular, and after the breakup of AT&T, companies like CompuServe and America Online rushed into online networking, eventually yielding what we now call the “internet economy.”

Silicon Valley’s argument, however, does resonate. The 1980s are not the 2010s, and the relationship between China and the U.S. today is significantly colder and even more complex than that between Japan and the U.S. three decades ago.

American politicians have echoed some of big tech’s concerns about Chinese leadership.

Congress just opened what promises to be a lengthy antitrust investigation into big tech – one that has barely talked about China.

Source: Facing Antitrust Pressure, Google Starts Spinning Its Own Too Big to Fail Argument

I’d agree with Wu – the China argument is a fear trap. Antitrust history – in the tech, oil and telephony industries, among others – has shown that when titans fall, many smaller, agile and much more innovative companies spring up to take their place, fueling employment gains, exports and better lifestyles for all of us.

Phantom Brigade – turn based mech game where you can see into the future

Phantom Brigade is a hybrid turn-based & real-time tactical RPG, focusing on in-depth customization and player driven stories. As the last surviving squad of mech pilots, you must capture enemy equipment and facilities to level the playing field. Outnumbered and out-gunned, lead The Brigade through a desperate campaign to retake their war-torn homeland.


Source: Phantom Brigade | Brace Yourself Games

We Have Detected Signs of Our Milky Way Colliding With Another Galaxy

According to new research, Antlia 2’s current position is consistent with a collision with the Milky Way hundreds of millions of years ago that could have produced the perturbations we see today. The paper has been submitted for publication and is undergoing peer review.

Antlia 2 was a bit of a surprise when it showed up in the second Gaia mission data release last year. It’s really close to the Milky Way – one of our satellite galaxies – and absolutely enormous, about the size of the Large Magellanic Cloud.

But it’s incredibly diffuse and faint, and hidden from view by the galactic disc, so it managed to evade detection.

That data release also showed in greater detail ripples in the Milky Way’s disc. But astronomers had known about perturbations in that region of the disc for several years by that point, even if the data wasn’t as clear as that provided by Gaia.

It was based on this earlier information that, in 2009, astrophysicist Sukanya Chakrabarti of the Rochester Institute of Technology and colleagues predicted the existence of a dwarf galaxy dominated by dark matter in pretty much the exact location Antlia 2 was found nearly a decade later.

Using the new Gaia data, the team calculated Antlia 2’s past trajectory and ran a series of simulations. These reproduced not just the dwarf galaxy’s current position, but also the ripples in the Milky Way’s disc, by way of a collision less than a billion years ago.

Simulation of the collision: the gas distribution is on the left, stars on the right. (RIT)

Previously, a different team of researchers had attributed these perturbations to an interaction with the Sagittarius Dwarf Spheroidal Galaxy, another of the Milky Way’s satellites.

Chakrabarti and her team also ran simulations of this scenario, and found that the Sagittarius galaxy’s gravity probably isn’t strong enough to produce the effects observed by Gaia.

“Thus,” the researchers wrote in their paper, “we argue that Antlia 2 is the likely driver of the observed large perturbations in the outer gas disk of the Galaxy.”

Source: We Have Detected Signs of Our Milky Way Colliding With Another Galaxy

Storm in a teacup: Linux command-line editors do what they’re supposed to do, are called vulnerable to a high-severity bug by ‘researcher’

A bug impacting the editors Vim and Neovim could allow trojan code to escape sandbox mitigations.

A high-severity bug impacting two popular command-line text editing applications, Vim and Neovim, allows remote attackers to execute arbitrary OS commands. Security researcher Armin Razmjou warned that exploiting the bug is as easy as tricking a target into clicking on a specially crafted text file in either editor.

Razmjou’s PoC is able to bypass modeline mitigations, under which value expressions are executed in a sandbox. That sandbox exists to prevent somebody from hiding a trojan horse in a text file’s modelines, the researcher said.

“However, the :source! command (with the bang [!] modifier) can be used to bypass the sandbox. It reads and executes commands from a given file as if typed manually, running them after the sandbox has been left,” according to the PoC report.

Vim and Neovim have both released patches for the bug (CVE-2019-12735) that the National Institute of Standards and Technology warns, “allows remote attackers to execute arbitrary OS commands via the :source! command in a modeline.”

“Beyond patching, it’s recommended to disable modelines in the vimrc (set nomodeline), to use the securemodelines plugin, or to disable modelineexpr (since patch 8.1.1366, Vim-only) to disallow expressions in modelines,” the researcher said.
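Applied to a vimrc, those recommendations look something like the config snippet below – a sketch of the researcher’s suggestions, not a verbatim config; pick one approach rather than stacking all of them.

" In ~/.vimrc: ignore modelines entirely...
set nomodeline

" ...or, on Vim 8.1.1366 or later, keep modelines but forbid
" expression evaluation inside them:
set nomodelineexpr

(The third option, the securemodelines plugin, replaces Vim’s built-in modeline parsing with a whitelist of harmless options.)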

Source: Linux Command-Line Editors Vulnerable to High-Severity Bug | Threatpost

First off, you can’t click in vi, but OK. Second, the whole idea is that you can run commands from vi. So basically he is calling a feature a flaw.

Readability of privacy policies for big tech companies visualised

For The New York Times, Kevin Litman-Navarro plotted the length and readability of privacy policies for large companies:

To see exactly how inscrutable they have become, I analyzed the length and readability of privacy policies from nearly 150 popular websites and apps. Facebook’s privacy policy, for example, takes around 18 minutes to read in its entirety – slightly above average for the policies I tested.
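For a sense of how such an analysis works: the Times reportedly scored the policies with the Lexile framework, which isn’t freely available, but the open-source textstat package offers a comparable readability proxy. A minimal sketch, with a hypothetical saved copy of a policy as input:

import textstat  # pip install textstat

def profile_policy(text: str, wpm: int = 250) -> dict:
    words = textstat.lexicon_count(text)            # word count
    return {
        "minutes_to_read": round(words / wpm, 1),   # at ~250 words/min
        "fk_grade": textstat.flesch_kincaid_grade(text),
    }

# Hypothetical input file: a locally saved copy of a privacy policy.
with open("privacy_policy.txt") as f:
    print(profile_policy(f.read()))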

The comparison covers many websites, with a focus on Facebook and Google, but the main takeaway, I think, is that almost all privacy policies are complex, because they’re not written for the users.

Source: Readability of privacy policies for big tech companies | FlowingData