Stock trading service Robinhood has admitted today to storing some customers’ passwords in cleartext, according to emails the company has been sending to impacted customers, which have been seen by ZDNet.
“On Monday night, we discovered that some user credentials were stored in a readable format within our internal system,” the company said.
“We resolved the issue, and after thorough review, found no evidence that this information was accessed by anyone outside our response team.”
Robinhood is now resetting passwords out of an abundance of caution, despite not finding any evidence of abuse.
[…]
Storing passwords in cleartext is a huge security blunder; however, Robinhood is in “good company.” This year alone, Facebook, Instagram, and Google have all admitted to storing users’ passwords in cleartext.
Facebook admitted in March to storing passwords in cleartext for hundreds of millions of Facebook Lite users and tens of millions of Facebook users.
Facebook then admitted again in April to storing passwords in cleartext for millions of Instagram users.
Google admitted in May to also storing an unspecified number of passwords in cleartext for G Suite users for nearly 14 years.
And, a year before, in 2018, both Twitter and GitHub admitted to accidentally storing user plaintext passwords in internal logs.
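For context, the standard alternative to cleartext storage is to run every password through a salted, deliberately slow hash before it is written anywhere. The sketch below uses Python’s built-in hashlib.scrypt purely as an illustration of that approach; the parameters are illustrative and it says nothing about what any of these companies actually does internally.

```python
# Minimal sketch of salted password hashing, the usual alternative to
# storing cleartext. Parameter choices are illustrative only.
import os
import hmac
import hashlib

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)                      # unique random salt per password
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest                        # store these, never the password

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

The point is simply that a database (or an internal log) built this way never contains anything an attacker, or an employee, can read back as the original password.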
Robinhood is a web and mobile service with a huge following, allowing zero-commission trading in classic stocks as well as cryptocurrencies.
Source: Robinhood admits to storing some passwords in cleartext | ZDNet
The scientific consensus that humans are causing global warming is likely to have passed 99%, according to the lead author of the most authoritative study on the subject, and could rise further after separate research that clears up some of the remaining doubts.
Three studies published in Nature and Nature Geoscience use extensive historical data to show there has never been a period in the last 2,000 years when temperature changes have been as fast and extensive as in recent decades.
It had previously been thought that similarly dramatic peaks and troughs might have occurred in the past, including in periods dubbed the Little Ice Age and the Medieval Climate Anomaly. But the three studies use reconstructions based on 700 proxy records of temperature change, such as trees, ice and sediment, from all continents that indicate none of these shifts took place in more than half the globe at any one time.
The Little Ice Age, for example, reached its extreme point in the 15th century in the Pacific Ocean, the 17th century in Europe and the 19th century elsewhere, says one of the studies. This localisation is markedly different from the trend since the late 20th century when records are being broken year after year over almost the entire globe, including this summer’s European heatwave.
[…]
“There is no doubt left – as has been shown extensively in many other studies addressing many different aspects of the climate system using different methods and data sets,” said Stefan Brönnimann, from the University of Bern and the Pages 2K consortium of climate scientists.
Commenting on the study, other scientists said it was an important breakthrough in the “fingerprinting” task of proving how human responsibility has changed the climate in ways not seen in the past.
“This paper should finally stop climate change deniers claiming that the recent observed coherent global warming is part of a natural climate cycle. This paper shows the truly stark difference between regional and localised changes in climate of the past and the truly global effect of anthropogenic greenhouse emissions,” said Mark Maslin, professor of climatology at University College London.
Previous studies have shown near unanimity among climate scientists that human factors – car exhausts, factory chimneys, forest clearance and other sources of greenhouse gases – are responsible for the exceptional level of global warming.
A 2013 study in Environmental Research Letters found 97% of climate scientists agreed with this link in 12,000 academic papers that contained the words “global warming” or “global climate change” from 1991 to 2011. Last week, that paper hit 1m downloads, making it the most accessed paper ever among the 80+ journals published by the Institute of Physics, according to the authors.
Source: ‘No doubt left’ about scientific consensus on global warming, say experts | Science | The Guardian
Some models of Airbus A350 airliners still need to be hard rebooted after exactly 149 hours, despite warnings from the EU Aviation Safety Agency (EASA) first issued two years ago.
In a mandatory airworthiness directive (AD) reissued earlier this week, EASA urged operators to turn their A350s off and on again to prevent “partial or total loss of some avionics systems or functions”.
The revised AD, effective from tomorrow (26 July), exempts only those new A350-941s which have had modified software pre-loaded on the production line. For all other A350-941s, operators need to completely power the airliner down before it reaches 149 hours of continuous power-on time.
[…]
Airbus’ rival Boeing very publicly suffered from a similar time-related problem with its 787 Dreamliner: back in 2015 a memory overflow bug was discovered that caused the 787’s generators to shut themselves down after 248 days of continual power-on operation. A software counter in the generators’ firmware, it was found, would overflow after that precise length of time. The Register is aware that this is not the only software-related problem to have plagued the 787 during its earlier years.
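For a sense of where a figure like 248 days can come from: a fixed-width uptime counter simply runs out of room after a predictable interval. The numbers below, a signed 32-bit counter ticking every 10 ms, are an assumption used for illustration and reproduce roughly the reported figure; they are not a statement of what the 787’s generator firmware actually did.

```python
# Back-of-the-envelope illustration of a fixed-width uptime counter overflowing.
# Assumes a signed 32-bit counter incremented every 10 ms (hypothetical values).
TICK_SECONDS = 0.01           # assumed 10 ms tick
MAX_SIGNED_32 = 2**31 - 1     # largest value a signed 32-bit integer can hold

seconds_to_overflow = MAX_SIGNED_32 * TICK_SECONDS
print(f"Overflow after ~{seconds_to_overflow / 86_400:.1f} days")  # ~248.6 days
```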
It is common for airliners to be left powered on while parked at airport gates so maintainers can carry out routine systems checks between flights, especially if the aircraft is plugged into ground power.
The remedy for the A350-941 problem is straightforward according to the AD: install Airbus software updates for a permanent cure, or switch the aeroplane off and on again.
Source: Airbus A350 software bug forces airlines to turn planes off and on every 149 hours • The Register
France will develop satellites armed with laser weapons, and will use the weapons against enemy satellites that threaten the country’s space forces. The announcement is just part of a gradual shift in the acceptance of space-based weaponry, as countries reliant on space for military operations in the air, on land, and at sea, as well as for economic purposes, bow to reality and accept space as a future battleground.
In remarks earlier today, French Defense Minister Florence Parly said, “If our satellites are threatened, we intend to blind those of our adversaries. We reserve the right and the means to be able to respond: that could imply the use of powerful lasers deployed from our satellites or from patrolling nano-satellites.”
“We will develop power lasers, a field in which France has fallen behind,” Parly added.
Last year France accused Russia of space espionage, stating that Moscow’s Luch satellite came too close to a Franco-Italian Athena-Fidus military communications satellite. The satellite, which has a transfer rate of 3 gigabits per second, passes video, imagery, and secure communications among French and Italian forces. “It got close. A bit too close,” Parly told an audience in 2018. “So close that one really could believe that it was trying to capture our communications.”
France also plans to develop nano-satellite patrollers by 2023: small satellites that act as bodyguards for larger French space assets. Per Parly’s remarks, these nano-sats could be armed with lasers. According to DW, France is also adding cameras to new Syracuse military communications satellites.
Additionally, France plans to set up its own space force, the “Air and Space Army,” as part of the French Air Force. The new organization will be based in Toulouse, but it’s not clear whether the Air and Space Army will remain part of the French Air Force or become its own service branch.
Source: France Is Making Space-Based Anti-Satellite Laser Weapons
The weaponisation of space has properly begun
The two worked together to bring a training method called Population Based Training (PBT for short) to bear on Waymo’s challenge of building better virtual drivers, and the results were impressive — DeepMind says in a blog post that using PBT cut false positives by 24% in a network that identifies and places boxes around pedestrians, bicyclists and motorcyclists spotted by a Waymo vehicle’s many sensors. Not only that, but it also resulted in savings in terms of both training time and resources, using about 50% of both compared to the standard methods Waymo was using previously.
[…]
To step back a little, let’s look at what PBT even is. Basically, it’s a training method that takes its cues from how Darwinian evolution works. Neural nets essentially work by trying something and then measuring those results against some kind of standard to see if their attempt is more “right” or more “wrong” based on the desired outcome.
[…]
But all that comparative training requires a huge amount of resources, and sorting the good from the bad in terms of which are working out relies on either the gut feeling of individual engineers, or massive-scale search with a manual component involved where engineers “weed out” the worst performing neural nets to free up processing capabilities for better ones.
What DeepMind and Waymo did with this experiment was essentially automate that weeding, automatically killing off the “bad” training runs and replacing them with better-performing spin-offs of the best-in-class networks running the task. That’s where evolution comes in, since it’s essentially a process of artificial natural selection.
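As a rough illustration of that exploit-and-explore loop, here is a toy PBT sketch in Python. The member objects, their training and scoring methods, and the perturbation range are all placeholders for this sketch; it is not DeepMind’s or Waymo’s actual implementation.

```python
import copy
import random

def population_based_training(population, steps=100, exploit_every=10):
    """Toy PBT loop: train every member, then periodically replace the worst
    performers with perturbed copies of the best (placeholder member API)."""
    for step in range(1, steps + 1):
        for member in population:
            member.train_one_step()              # ordinary gradient training
        if step % exploit_every == 0:
            population.sort(key=lambda m: m.score(), reverse=True)
            quarter = max(1, len(population) // 4)
            winners, losers = population[:quarter], population[-quarter:]
            for loser in losers:
                winner = random.choice(winners)
                loser.weights = copy.deepcopy(winner.weights)      # exploit
                loser.hyperparams = {                              # explore
                    name: value * random.uniform(0.8, 1.2)
                    for name, value in winner.hyperparams.items()
                }
    return max(population, key=lambda m: m.score())
```

The “weeding” the article describes is the exploit step: the bottom quarter of the population is overwritten with copies of top performers, and the copied hyperparameters are nudged randomly so the search keeps exploring.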
Wow, I hate when people actually write at you to read a sentence again (cut out for your mental wellness).
In June 2017, the notorious file-scrambling software nasty NotPetya caused global havoc that affected government agencies, power suppliers, healthcare providers and big biz.
The ransomware sought out vulnerabilities and used a modified version of the NSA’s leaked EternalBlue SMB exploit, generating one of the most financially costly cyber-attacks to date.
Among the victims was US food giant Mondelez – the parent firm of Oreo cookies and Cadbury’s chocolate – which is now suing insurance company Zurich American for denying a £76m claim (PDF) filed in October 2018, a year after the NotPetya attack. According to the firm, the malware rendered 1,700 of its servers and 24,000 of its laptops permanently dysfunctional.
In January, Zurich rejected the claim, simply referring to a single policy exclusion which does not cover “hostile or warlike action in time of peace or war” by “government or sovereign power; the military, naval, or air force; or agent or authority”.
Mondelez, meanwhile, suffered significant loss as the attack infiltrated the company – affecting laptops, the company network and logistics software. Zurich American claims the damage, as the result of an “act of war”, is therefore not covered by Mondelez’s policy, which states coverage applies to “all risks of physical loss or damage to electronic data, programs, or software, including loss or damage caused by the malicious introduction of a machine code or instruction.”
While war exclusions are common in insurance policies, the court papers themselves refer to the grounds as “unprecedented” in relation to “cyber incidents”.
Previous claims have only been based on conventional armed conflicts.
Zurich’s use of this sort of exclusion in a cybersecurity policy could be a game-changer, with the obvious question being: was NotPetya an act of war, or just another incidence of ransomware?
The UK, US and Ukrainian governments, for their part, blamed the attack on Russian, state-sponsored hackers, claiming it was the latest act in an ongoing feud between Russia and Ukraine.
[…]
The minds behind the Tallinn Manual – the international cyberwar rules of engagement – were divided as to whether the damage caused met the threshold of an armed attack. However, they noted there was a possibility that it could in rare circumstances.
Professor Michael Schmitt, director of the Tallinn Manual project, indicated (PDF) that it is reasonable to extend the notion of armed attacks to cyber-attacks. The International Committee of the Red Cross (ICRC) went further, stating that cyber operations that merely disable certain objects still qualify as an attack, even in the absence of physical damage.