
Capital One gets Capital Done: Hacker swipes personal info on 106 million US, Canadian credit card applicants

A hacker raided Capital One’s cloud storage buckets and stole personal information on 106 million credit card applicants in America and Canada.

The swiped data includes 140,000 US social security numbers and 80,000 bank account numbers, we’re told, as well as one million Canadian social insurance numbers, plus names, addresses, phone numbers, dates of birth, and reported incomes.

The pilfered data was submitted to Capital One by credit card hopefuls between 2005 and early 2019. The info was siphoned between March this year and July 17, and Capital One learned of the intrusion on July 19.

Seattle software engineer Paige A. Thompson, aka “erratic,” aka 0xA3A97B6C on Twitter, was suspected of nicking the data, and was collared by the FBI at her home on Monday this week. The 33-year-old has already appeared in court, charged with violating the US Computer Fraud and Abuse Act. She will remain in custody until her next hearing on August 1.

According to the Feds in their court paperwork [PDF], Thompson broke into Capital One’s cloud-hosted storage, believed to be Amazon Web Services’ S3 buckets, and downloaded their contents.

The financial giant said the intruder exploited a “configuration vulnerability,” while the Feds said a “firewall misconfiguration permitted commands to reach and be executed” by Capital One’s cloud-based storage servers. US prosecutors said the thief slipped past a “misconfigured web application firewall.”
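Whatever the exact path in (the filings point only to a misconfigured web application firewall), the defensive lesson is mundane: know which of your storage buckets are reachable and how. A minimal, purely illustrative hygiene check with boto3 might look like the sketch below; it assumes locally configured AWS credentials and does not attempt to reproduce the attack itself.

```python
# Minimal sketch: audit your own S3 buckets for public exposure.
# Assumes boto3 is installed and AWS credentials are configured locally.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        block = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(block.values())
    except ClientError:
        # No public-access-block configuration set (or it could not be read)
        fully_blocked = False
    grants = s3.get_bucket_acl(Bucket=name)["Grants"]
    public_grants = [
        g for g in grants
        if g["Grantee"].get("URI", "").endswith(("AllUsers", "AuthenticatedUsers"))
    ]
    if not fully_blocked or public_grants:
        print(f"Review bucket: {name} (public access fully blocked: {fully_blocked}, "
              f"open ACL grants: {len(public_grants)})")
```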

Source: Capital One gets Capital Done: Hacker swipes personal info on 106 million US, Canadian credit card applicants • The Register

Not so much a hack as poor security by Capital One then

Dutch Ministry of Justice recommends the Dutch government stop using Office 365 and Windows 10

Basically, they don’t like data being shared with third parties that do predictive profiling with it, they don’t like all the telemetry being sent everywhere, and they don’t like Microsoft being able to view and trawl through content such as text, pictures and videos.

Source: Ministerie van justitie: Stop met gebruik Office 365 – Webwereld (Dutch)

Meet the AI robots being used to help solve America’s recycling crisis

The way the robots work is simple. Guided by cameras and computer systems trained to recognize specific objects, the robots’ arms glide over moving conveyor belts until they reach their target. Oversized tongs or fingers with sensors that are attached to the arms snag cans, glass, plastic containers, and other recyclable items out of the rubbish and place them into nearby bins.

The robots — most of which have come online only within the past year — are assisting human workers and can work up to twice as fast. With continued improvements in the bots’ ability to spot and extract specific objects, they could become a formidable new force in the $6.6 billion U.S. industry.

Researchers like Lily Chin, a PhD student at the Distributed Robotics Lab at MIT, are working to develop sensors that improve the robots’ sense of touch, so they can tell plastic, paper and metal apart through their fingers. “Right now, robots are mostly reliant on computer vision, but they can get confused and make mistakes,” says Chin. “So now we want to integrate these new tactile capabilities.”

Denver-based AMP Robotics is one of the companies on the leading edge of innovation in the field. It has developed software — the AMP Neuron platform, which uses computer vision and machine learning — so robots can recognize different colors, textures, shapes, sizes and patterns, identify material characteristics and sort waste accordingly.
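To make the pipeline concrete, here is a minimal, purely illustrative sketch of the camera-to-picker loop described above. `classify_frame` and `RobotArm` are hypothetical stand-ins, not AMP Robotics’ actual software.

```python
# Conceptual sketch of the camera -> classifier -> picker loop described above.
# classify_frame() and RobotArm are hypothetical stand-ins, not AMP Robotics' API.
import random

MATERIAL_BINS = {"aluminum_can": 0, "glass": 1, "pet_plastic": 2, "cardboard": 3}
CONFIDENCE_THRESHOLD = 0.80

def classify_frame(frame):
    """Stand-in for a vision model: returns (label, confidence)."""
    label = random.choice(list(MATERIAL_BINS) + ["residue"])
    return label, random.uniform(0.5, 1.0)

class RobotArm:
    def pick(self, frame, bin_index):
        print(f"Picking item into bin {bin_index}")

def run_sorting_loop(conveyor_frames, arm):
    for frame in conveyor_frames:
        label, confidence = classify_frame(frame)
        # Only act on confident detections of recyclable materials;
        # everything else stays on the belt for human sorters.
        if label in MATERIAL_BINS and confidence >= CONFIDENCE_THRESHOLD:
            arm.pick(frame, MATERIAL_BINS[label])

run_sorting_loop(conveyor_frames=range(10), arm=RobotArm())
```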

The robots are being installed at the Single Stream Recyclers plant in Sarasota, Florida, and will be able to pick 70 to 80 items a minute, twice as fast as humanly possible and with greater accuracy.

CNBC: trash-separating robot
Bulk Handling Systems’ Max-AI AQC-C robot (image: Bulk Handling Systems)

“Using this technology you can increase the quality of the material and in some cases double or triple its resale value,” says AMP Robotics CEO Matanya Horowitz. “Quality standards are getting stricter; that’s why companies and researchers are working on high-tech solutions.”

Source: Meet the robots being used to help solve America’s recycling crisis

Facebook’s answer to the encryption debate: install spyware with content filters! (updated: maybe not)

The encryption debate is typically framed around the concept of an impenetrable link connecting two services whose communications the government wishes to monitor. The reality, of course, is that the security of that encryption link is entirely separate from the security of the devices it connects. The ability of encryption to shield a user’s communications rests upon the assumption that the sender and recipient’s devices are themselves secure, with the encrypted channel the only weak point.

After all, if either user’s device is compromised, unbreakable encryption is of little relevance.

This is why surveillance operations typically focus on compromising end devices, bypassing the encryption debate entirely. If a user’s cleartext keystrokes and screen captures can be streamed off their device in real-time, it matters little that they are eventually encrypted for transmission elsewhere.

[…]

Facebook announced earlier this year preliminary results from its efforts to move a global mass surveillance infrastructure directly onto users’ devices where it can bypass the protections of end-to-end encryption.

In Facebook’s vision, the actual end-to-end encryption client itself such as WhatsApp will include embedded content moderation and blacklist filtering algorithms. These algorithms will be continually updated from a central cloud service, but will run locally on the user’s device, scanning each cleartext message before it is sent and each encrypted message after it is decrypted.
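To make that concrete, here is a conceptual sketch of what “scan each cleartext message before it is sent” could look like: hash-matching against a centrally refreshed blocklist. Every name below is illustrative; nothing here reflects Facebook’s or WhatsApp’s actual design.

```python
# Conceptual sketch of client-side scanning before encryption.
# The blocklist, scan logic and encrypt() stub are illustrative only;
# nothing here reflects Facebook's or WhatsApp's actual design.
import hashlib

# A blocklist of content hashes, periodically refreshed from a central server.
BLOCKLIST = {hashlib.sha256(b"example banned phrase").hexdigest()}

def scan(plaintext: str) -> bool:
    """Return True if the message matches the blocklist."""
    digest = hashlib.sha256(plaintext.encode()).hexdigest()
    return digest in BLOCKLIST

def encrypt(plaintext: str) -> bytes:
    """Stand-in for the real end-to-end encryption step."""
    return plaintext.encode()[::-1]  # placeholder, not real crypto

def send_message(plaintext: str):
    if scan(plaintext):
        # In the scheme described above, a flagged message could be
        # reported upstream before (or instead of) being sent.
        print("Message flagged by local filter")
        return
    ciphertext = encrypt(plaintext)
    print("Sending", ciphertext)

send_message("hello world")
```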

The company even noted that when it detects violations it will need to quietly stream a copy of the formerly encrypted content back to its central servers to analyze further, even if the user objects, acting as a true wiretapping service.

Facebook’s model entirely bypasses the encryption debate by globalizing the current practice of compromising devices: building encryption bypasses directly into the communications clients themselves and deploying what amounts to machine-based wiretaps to billions of users at once.

Asked the current status of this work and when it might be deployed in the production version of WhatsApp, a company spokesperson declined to comment.

Of course, Facebook’s efforts apply only to its own encryption clients, leaving criminals and terrorists to turn to other clients like Signal, or to bespoke clients whose source code they control.

The problem is that if Facebook’s model succeeds, it will only be a matter of time before device manufacturers and mobile operating system developers embed similar tools directly into devices themselves, making them impossible to escape. Embedding content scanning tools directly into phones would make it possible to scan all apps, including ones like Signal, effectively ending the era of encrypted communications.

Governments would soon use lawful court orders to require companies to build in custom filters of content they are concerned about and automatically notify them of violations, including sending a copy of the offending content.

Rather than grappling with how to defeat encryption, governments will simply be able to harness social media companies to perform their mass surveillance for them, sending them real-time alerts and copies of the decrypted content.

Source: The Encryption Debate Is Over – Dead At The Hands Of Facebook

Update 4/8/19: Bruce Schneier is convinced that this story has been concocted from a single source and that Facebook is not in fact currently planning to do this. I’m inclined to agree.

Source: More on Backdooring (or Not) WhatsApp

Deep TabNine AI-powered autocompletion software is Gmail’s Smart Compose for coders

Deep TabNine is what’s known as a coding autocompleter. Programmers can install it as an add-on in their editor of choice, and when they start writing, it’ll suggest how to continue each line, offering small chunks at a time. Think of it as Gmail’s Smart Compose feature but for code.

Jacob Jackson, a computer science undergrad at the University of Waterloo who created Deep TabNine, says this sort of software isn’t new, but machine learning has hugely improved what it can offer. “It’s solved a problem for me,” he tells The Verge.

Jackson started work on the original version of the software, TabNine, in February last year before launching it that November. But earlier this month, he released an updated version that uses a deep learning text-generation algorithm called GPT-2, which was designed by the research lab OpenAI, to improve its abilities. The update has seriously impressed coders, who have called it “amazing,” “insane,” and “absolutely mind-blowing” on Twitter.

[…]

Deep TabNine is trained on 2 million files from coding repository GitHub. It finds patterns in this data and uses them to suggest what’s likely to appear next in any given line of code, whether that’s a variable name or a function.
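Deep TabNine itself isn’t open source, but the basic idea (a language model predicting the next tokens of a source file) can be sketched with the publicly available GPT-2 model and the Hugging Face transformers library. The prompt and generation settings below are purely illustrative.

```python
# Illustrative only: Deep TabNine is not open source, but the same idea
# (a language model predicting the next tokens of a source file) can be
# sketched with the public GPT-2 model and the Hugging Face transformers library.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "def fibonacci(n):\n    if n < 2:\n        return n\n    return "
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a short continuation of the code; GPT-2 is trained on web text,
# so a model fine-tuned on GitHub files (as Deep TabNine was) would do far better.
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0]))
```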

Using deep learning to create autocompletion software offers several advantages, says Jackson. It makes it easy to add support for new languages, for a start. You only need to drop more training data into Deep TabNine’s hopper, and it’ll dig out patterns, he says. This means that Deep TabNine supports some 22 different coding languages while most alternatives just work with one.

(The full list of languages Deep TabNine supports is as follows: Python, JavaScript, Java, C++, C, PHP, Go, C#, Ruby, Objective-C, Rust, Swift, TypeScript, Haskell, OCaml, Scala, Kotlin, Perl, SQL, HTML, CSS, and Bash.)

Most importantly, thanks to the analytical abilities of deep learning, the suggestions Deep TabNine makes are of a high overall quality. And because the software doesn’t look at users’ own code to make suggestions, it can start helping with projects right from the word go, rather than waiting to get some cues from the code the user writes.

The software isn’t perfect, of course. It makes mistakes in its suggestions and isn’t useful for all types of coding. Users on various programming hang-outs like Hacker News and the r/programming subreddit have debated its merits and offered some mixed reviews (though they mostly skew positive). As you’d expect from a coding tool built for coders, people have a lot to say about how exactly it works with their existing editors and workflow.

One complaint that Jackson agrees is legitimate is that Deep TabNine is more suited to certain types of coding. It works best when autocompleting relatively rote code, the sort of programming that’s been done thousands of times with small variations. It’s less able to write exploratory code, where the user is solving a novel problem. That makes sense considering that the software’s smarts come from patterns found in archival data.

Deep TabNine being used to write some C++.

So how useful is it really for your average coder? That’ll depend on a whole lot of factors, like what programming language they use and what they’re trying to achieve. But Jackson says it’s more like a faster input method than a human coding partner (a common practice known as pair programming).

Source: This AI-powered autocompletion software is Gmail’s Smart Compose for coders – The Verge

Intellectual Debt (in AI): With Great Power Comes Great Ignorance

For example, aspirin was discovered in 1897, and an explanation of how it works followed in 1995. That, in turn, has spurred some research leads on making better pain relievers through something other than trial and error.

This kind of discovery — answers first, explanations later — I call “intellectual debt.” We gain insight into what works without knowing why it works. We can put that insight to use immediately, and then tell ourselves we’ll figure out the details later. Sometimes we pay off the debt quickly; sometimes, as with aspirin, it takes a century; and sometimes we never pay it off at all.

Be they of money or ideas, loans can offer great leverage. We can get the benefits of money — including use as investment to produce more wealth — before we’ve actually earned it, and we can deploy new ideas before having to plumb them to bedrock truth.

Indebtedness also carries risks. For intellectual debt, these risks can be quite profound, both because we are borrowing as a society, rather than individually, and because new technologies of artificial intelligence — specifically, machine learning — are bringing the old model of drug discovery to a seemingly unlimited number of new areas of inquiry. Humanity’s intellectual credit line is undergoing an extraordinary, unasked-for bump up in its limit.

[…]

Technical debt arises when systems are tweaked hastily, catering to an immediate need to save money or implement a new feature, while increasing long-term complexity. Anyone who has added a device every so often to a home entertainment system can attest to the way in which a series of seemingly sensible short-term improvements can produce an impenetrable rat’s nest of cables. When something stops working, this technical debt often needs to be paid down as an aggravating lump sum — likely by tearing the components out and rewiring them in a more coherent manner.

[…]

Machine learning has made remarkable strides thanks to theoretical breakthroughs, zippy new hardware, and unprecedented data availability. The distinct promise of machine learning lies in suggesting answers to fuzzy, open-ended questions by identifying patterns and making predictions.

[…]

Researchers have pointed out thorny problems of technical debt afflicting AI systems that make it seem comparatively easy to find a retiree to decipher a bank system’s COBOL. They describe how machine learning models become embedded in larger ones and are then forgotten, even as their original training data goes stale and their accuracy declines.

But machine learning doesn’t merely implicate technical debt. There are some promising approaches to building machine learning systems that in fact can offer some explanations — sometimes at the cost of accuracy — but they are the rare exceptions. Otherwise, machine learning is fundamentally patterned like drug discovery, and it thus incurs intellectual debt. It stands to produce answers that work, without offering any underlying theory. While machine learning systems can surpass humans at pattern recognition and predictions, they generally cannot explain their answers in human-comprehensible terms. They are statistical correlation engines — they traffic in byzantine patterns with predictive utility, not neat articulations of relationships between cause and effect. Marrying power and inscrutability, they embody Arthur C. Clarke’s observation that any sufficiently advanced technology is indistinguishable from magic.

But here there is no David Copperfield or Ricky Jay who knows the secret behind the trick. No one does. Machine learning at its best gives us answers as succinct and impenetrable as those of a Magic 8-Ball — except they appear to be consistently right. When we accept those answers without independently trying to ascertain the theories that might animate them, we accrue intellectual debt.

Source: Intellectual Debt: With Great Power Comes Great Ignorance

Hot weather cuts French, German nuclear power output by ~ 8%

Electricity output was curtailed at six reactors by 0840 GMT on Thursday, while two other reactors were offline, data showed. High water temperatures and sluggish flows limit the ability to use river water to cool reactors.

In Germany, PreussenElektra, the nuclear unit of utility E.ON, said it would take its Grohnde reactor offline on Friday due to high temperatures in the Weser river.

The second heatwave in successive months to hit western Europe is expected to peak on Thursday with record temperatures seen in several towns in France.

Utility EDF, which operates France’s 58 nuclear reactors, said that generation at its Bugey, St-Alban and Tricastin nuclear power plants may be curbed until after July 26 because of the low flow rate and high temperatures of the Rhone.

Its two reactors at the 2,600 megawatt (MW) Golfech nuclear power plant in the south of France were offline due to high temperatures on the Garonne river.

EDF’s use of water from rivers as a coolant is regulated by law to protect plant and animal life and it is obliged to cut output in hot weather when water temperatures rise, or when river levels and flow rates are low.

Atomic power from France’s 58 reactors accounts for over 75 percent of its electricity needs. Available nuclear power supply was down 1.4 percentage points at 65.3% of total capacity compared with Wednesday.

A spokeswoman for grid operator RTE said that although electricity demand was expected to rise due to increased consumption for cooling, France had enough generation capacity to cover demand. Peak power demand could exceed the 59.7 GW reached the previous day.

Source: Hot weather cuts French, German nuclear power output – Reuters

Apple Contractors Reportedly Overhear Sensitive Information and Sexy Times Thanks to Siri

First Amazon, then Google, and now Apple have all confirmed that their devices are not only listening to you, but complete strangers may be reviewing the recordings. Thanks to Siri, Apple contractors routinely catch intimate snippets of users’ private lives like drug deals, doctor’s visits, and sexual escapades as part of their quality control duties, the Guardian reported Friday.

As part of its effort to improve the voice assistant, “[a] small portion of Siri requests are analysed to improve Siri and dictation,” Apple told the Guardian. That involves sending these recordings sans Apple IDs to its international team of contractors to rate these interactions based on Siri’s response, among other factors. The company further explained that these graded recordings make up less than 1 percent of daily Siri activations and that most only last a few seconds.

That isn’t the case, according to an anonymous Apple contractor the Guardian spoke with. The contractor explained that because these quality control procedures don’t weed out cases where a user has unintentionally triggered Siri, contractors end up overhearing conversations users may not ever have wanted to be recorded in the first place. Not only that, details that could potentially identify a user purportedly accompany the recording so contractors can check whether a request was handled successfully.

“There have been countless instances of recordings featuring private discussions between doctors and patients, business deals, seemingly criminal dealings, sexual encounters and so on. These recordings are accompanied by user data showing location, contact details, and app data,” the whistleblower told the Guardian.

And it’s frighteningly easy to activate Siri by accident. Most anything that sounds remotely like “Hey Siri” is likely to do the trick, as the UK’s then-Defence Secretary Gavin Williamson found out last year when the assistant piped up as he spoke to Parliament about Syria. The sound of a zipper may even be enough to activate it, according to the contractor. They said that of Apple’s devices, the Apple Watch and HomePod smart speaker most frequently pick up accidental Siri triggers, and recordings can last as long as 30 seconds.

While Apple told the Guardian the information collected from Siri isn’t connected to other data Apple may have on a user, the contractor told a different story:

“There’s not much vetting of who works there, and the amount of data that we’re free to look through seems quite broad. It wouldn’t be difficult to identify the person that you’re listening to, especially with accidental triggers—addresses, names and so on.”

Staff were told to report these accidental activations as technical problems, the worker told the paper, but there wasn’t guidance on what to do if these recordings captured confidential information.

All this makes Siri’s cutesy responses to users’ questions seem far less innocent, particularly its answer when you ask if it’s always listening: “I only listen when you’re talking to me.”

Fellow tech giants Amazon and Google have faced similar privacy scandals recently over recordings from their devices. But while these companies also have employees who monitor their respective voice assistants, users can revoke permissions for some uses of these recordings. Apple provides no such option in its products.

[The Guardian]

Source: Apple Contractors Reportedly Overhear Sensitive Information and Sexy Times Thanks to Siri

Most YouTube climate change videos ‘oppose the consensus view’

The majority of YouTube videos about the climate crisis oppose the scientific consensus and “hijack” technical terms to make them appear credible, a new study has found. Researchers have warned that users searching the video site to learn about climate science may be exposed to content that goes against mainstream scientific belief.

Dr Joachim Allgaier of RWTH Aachen University in Germany analysed 200 YouTube videos to see if they adhered to or challenged the scientific consensus. To do so, he chose 10 search terms:

  • Chemtrails
  • Climate
  • Climate change
  • Climate engineering
  • Climate hacking
  • Climate manipulation
  • Climate modification
  • Climate science
  • Geoengineering
  • Global warming

The videos were then assessed to judge how closely they adhered to the scientific consensus, as represented by the findings of reports by the UN Intergovernmental Panel on Climate Change (IPCC) from 2013 onwards.

These concluded that humans have been the “dominant cause” of global warming since the 1950s. However, Allgaier found that the message of 120 of the top 200 search results went against this view.

To avoid personalised results, Allgaier used the anonymisation tool Tor, which hides a computer’s IP address and means YouTube treats each search as coming from a different user.
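As a rough illustration of the collection step (not Allgaier’s actual code), the top results for each search term could be pulled with the YouTube Data API and set aside for manual coding. The API key below is an assumption, and the Tor-based anonymisation described above is omitted here.

```python
# Hedged sketch of the collection step only (not the study's actual code):
# fetch the top search results for each term with the YouTube Data API,
# for later manual coding against the IPCC consensus. Assumes a valid
# API_KEY; the study's Tor-based anonymisation is omitted here.
from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"  # assumption: you have a YouTube Data API v3 key
SEARCH_TERMS = [
    "chemtrails", "climate", "climate change", "climate engineering",
    "climate hacking", "climate manipulation", "climate modification",
    "climate science", "geoengineering", "global warming",
]

youtube = build("youtube", "v3", developerKey=API_KEY)

results = {}
for term in SEARCH_TERMS:
    response = youtube.search().list(
        q=term, part="snippet", type="video", maxResults=20
    ).execute()
    results[term] = [
        (item["id"]["videoId"], item["snippet"]["title"])
        for item in response["items"]
    ]
    # Each (videoId, title) pair would then be watched and coded by hand
    # as supporting or opposing the scientific consensus.
```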

The results for the search terms climate, climate change, climate science and global warming mostly reflected the scientific consensus view. Allgaier said this was because many contained excerpts from TV news programmes or documentaries.

The same could not be said for the results of searches related to chemtrails, climate engineering, climate hacking, climate manipulation, climate modification and geoengineering. Very few of these videos explained the scientific rationale behind their ideas, Allgaier said.

Source: Most YouTube climate change videos ‘oppose the consensus view’ | Technology | The Guardian

In a Lab Accident, Scientists Create the First-Ever Permanently Magnetic Liquid

Using a technique to 3D-print liquids, the scientists created millimeter-size droplets from water, oil and iron-oxides. The liquid droplets keep their shape because some of the iron-oxide particles bind with surfactants — substances that reduce the surface tension of a liquid. The surfactants create a film around the liquid water, with some iron-oxide particles creating part of the filmy barrier, and the rest of the particles enclosed inside, Russell said.

The team then placed the millimeter-size droplets near a magnetic coil to magnetize them. But when they took the magnetic coil away, the droplets demonstrated a behavior previously unseen in liquids — they remained magnetized. (Magnetic liquids called ferrofluids do exist, but these liquids are only magnetized when in the presence of a magnetic field.)

When those droplets approached a magnetic field, the tiny iron-oxide particles all aligned in the same direction. And once they removed the magnetic field, the iron-oxide particles bound to the surfactant in the film were so jam-packed that they couldn’t move and so remained aligned. But those free-floating inside the droplet also remained aligned.

The scientists don’t fully understand how these particles hold onto the field, Russell said. Once they figure that out, there are many potential applications. For example, Russell imagines printing a cylinder with a non-magnetic middle and two magnetic caps. “The two ends would come together like a horseshoe magnet,” and be used as a mini “grabber,” he said.

In an even more bizarre application, imagine a mini liquid person — a smaller-scale version of the liquid T-1000 from the second “Terminator” movie — Russell said. Now imagine that parts of this mini liquid man are magnetized and parts aren’t. An external magnetic field could then force the little person to move its limbs like a marionette.

“For me, it sort of represents a sort of new state of magnetic materials,” Russell said. The findings were published on July 19 in the journal Science.

Source: In a Lab Accident, Scientists Create the First-Ever Permanently Magnetic Liquid

Robinhood fintech app admits to storing some passwords in cleartext

Stock trading service Robinhood has admitted today to storing some customers’ passwords in cleartext, according to emails the company has been sending to impacted customers, and seen by ZDNet.

“On Monday night, we discovered that some user credentials were stored in a readable format within our internal system,” the company said.

“We resolved the issue, and after thorough review, found no evidence that this information was accessed by anyone outside our response team.”

Robinhood is now resetting passwords out of an abundance of caution, despite not finding any evidence of abuse.

[…]

Storing passwords in cleartext is a huge security blunder; however, Robinhood is in “good company.” This year alone, Facebook, Instagram, and Google have all admitted to storing users’ passwords in cleartext.
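The standard alternative is well known: store only a salted, slow hash of the password. A minimal sketch using Python’s standard library follows; a real service should use a vetted library such as bcrypt or argon2 with properly tuned parameters.

```python
# Minimal sketch of the standard alternative to cleartext storage:
# store only a salted, slow hash (PBKDF2 here, from Python's stdlib).
# Real deployments should use a vetted library (bcrypt/argon2) and tuned parameters.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest  # store both; the password itself is never stored

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```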

Facebook admitted in March to storing passwords in cleartext for hundreds of millions of Facebook Lite users and tens of millions of Facebook users.

Facebook then admitted again in April to storing passwords in cleartext for millions of Instagram users.

Google admitted in May to also storing an unspecified number of passwords in cleartext for G Suite users for nearly 14 years.

And, a year before, in 2018, both Twitter and GitHub admitted to accidentally storing users’ plaintext passwords in internal logs.

Robinhood is a web and mobile service with a huge following, allowing zero-commission trading in classic stocks, but also cryptocurrencies.

Source: Robinhood admits to storing some passwords in cleartext | ZDNet

‘No doubt left’ about scientific consensus on global warming, say experts

The scientific consensus that humans are causing global warming is likely to have passed 99%, according to the lead author of the most authoritative study on the subject, and could rise further after separate research that clears up some of the remaining doubts.

Three studies published in Nature and Nature Geoscience use extensive historical data to show there has never been a period in the last 2,000 years when temperature changes have been as fast and extensive as in recent decades.

It had previously been thought that similarly dramatic peaks and troughs might have occurred in the past, including in periods dubbed the Little Ice Age and the Medieval Climate Anomaly. But the three studies use reconstructions based on 700 proxy records of temperature change, such as trees, ice and sediment, from all continents that indicate none of these shifts took place in more than half the globe at any one time.

The Little Ice Age, for example, reached its extreme point in the 15th century in the Pacific Ocean, the 17th century in Europe and the 19th century elsewhere, says one of the studies. This localisation is markedly different from the trend since the late 20th century when records are being broken year after year over almost the entire globe, including this summer’s European heatwave.

[…]

“There is no doubt left – as has been shown extensively in many other studies addressing many different aspects of the climate system using different methods and data sets,” said Stefan Brönnimann, from the University of Bern and the Pages 2K consortium of climate scientists.

Commenting on the study, other scientists said it was an important breakthrough in the “fingerprinting” task of proving how human responsibility has changed the climate in ways not seen in the past.

“This paper should finally stop climate change deniers claiming that the recent observed coherent global warming is part of a natural climate cycle. This paper shows the truly stark difference between regional and localised changes in climate of the past and the truly global effect of anthropogenic greenhouse emissions,” said Mark Maslin, professor of climatology at University College London.

Previous studies have shown near unanimity among climate scientists that human factors – car exhausts, factory chimneys, forest clearance and other sources of greenhouse gases – are responsible for the exceptional level of global warming.

A 2013 study in Environmental Research Letters found 97% of climate scientists agreed with this link in 12,000 academic papers that contained the words “global warming” or “global climate change” from 1991 to 2011. Last week, that paper hit 1m downloads, making it the most accessed paper ever among the 80+ journals published by the Institute of Physics, according to the authors.

Source: ‘No doubt left’ about scientific consensus on global warming, say experts | Science | The Guardian

Airbus A350 software bug forces airlines to turn planes off and on every 149 hours – must have borrowed some old Boeing 787 code

Some models of Airbus A350 airliners still need to be hard rebooted after exactly 149 hours, despite warnings from the EU Aviation Safety Agency (EASA) first issued two years ago.

In a mandatory airworthiness directive (AD) reissued earlier this week, EASA urged operators to turn their A350s off and on again to prevent “partial or total loss of some avionics systems or functions”.

The revised AD, effective from tomorrow (26 July), exempts only those new A350-941s which have had modified software pre-loaded on the production line. For all other A350-941s, operators need to completely power the airliner down before it reaches 149 hours of continuous power-on time.

[…]

Airbus’ rival Boeing very publicly suffered from a similar time-related problem with its 787 Dreamliner: back in 2015 a memory overflow bug was discovered that caused the 787’s generators to shut themselves down after 248 days of continual power-on operation. A software counter in the generators’ firmware, it was found, would overflow after that precise length of time. The Register is aware that this is not the only software-related problem to have plagued the 787 during its earlier years.
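The 787 figure checks out as a classic signed 32-bit counter overflow, assuming (as was widely reported) a counter ticking in hundredths of a second; the quick arithmetic below is illustrative only.

```python
# Back-of-the-envelope check of the 787 figure: a signed 32-bit counter
# ticking in hundredths of a second (the commonly reported explanation)
# overflows after roughly 248 days.
MAX_INT32 = 2**31 - 1          # largest value a signed 32-bit counter can hold
TICK_SECONDS = 0.01            # one tick per hundredth of a second (assumed)

seconds_to_overflow = MAX_INT32 * TICK_SECONDS
days_to_overflow = seconds_to_overflow / 86_400
print(f"{days_to_overflow:.1f} days")   # ~248.6 days

# The A350's 149-hour limit points to a different counter or tick rate;
# the airworthiness directive does not publish the underlying details.
```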

It is common for airliners to be left powered on while parked at airport gates so maintainers can carry out routine systems checks between flights, especially if the aircraft is plugged into ground power.

The remedy for the A350-941 problem is straightforward according to the AD: install Airbus software updates for a permanent cure, or switch the aeroplane off and on again.

Source: Airbus A350 software bug forces airlines to turn planes off and on every 149 hours • The Register

France Is Making Space-Based Anti-Satellite Laser Weapons

France will develop satellites armed with laser weapons, and will use the weapons against enemy satellites that threaten the country’s space forces. The announcement is just part of a gradual shift in acceptance of space-based weaponry, as countries reliant on space for military operations in the air, on land, and at sea, as well as for economic purposes, bow to reality and accept space as a future battleground.

In remarks earlier today, French Defense Minister Florence Parly said, “If our satellites are threatened, we intend to blind those of our adversaries. We reserve the right and the means to be able to respond: that could imply the use of powerful lasers deployed from our satellites or from patrolling nano-satellites.”

“We will develop power lasers, a field in which France has fallen behind,” Parly added.

Last year France accused Russia of space espionage, stating that Moscow’s Luch satellite came too close to a Franco-Italian Athena-Fidus military communications satellite. The satellite, which has a transfer rate of 3 gigabits per second, passes video, imagery, and secure communications among French and Italian forces. “It got close. A bit too close,” Parly told an audience in 2018. “So close that one really could believe that it was trying to capture our communications.”

By 2023, France also plans to develop nano-satellite patrollers — small satellites that act as bodyguards for larger French space assets. Per Parly’s remarks, nano-sats could be armed with lasers. According to DW, France is also adding cameras to new Syracuse military communications satellites.

Additionally, France plans to set up its own space force, the “Air and Space Army,” as part of the French Air Force. The new organization will be based in Toulouse, but it’s not clear if the Air and Space Army will remain part of the French Air Force or become its own service branch.

Source: France Is Making Space-Based Anti-Satellite Laser Weapons

The weaponisation of space has properly begun

Waymo and DeepMind mimic evolution to develop a new, better way to train self-driving AI

The two worked together to bring a training method called Population Based Training (PBT for short) to bear on Waymo’s challenge of building better virtual drivers, and the results were impressive — DeepMind says in a blog post that using PBT cut false positives by 24% in a network that identifies and places boxes around pedestrians, bicyclists and motorcyclists spotted by a Waymo vehicle’s many sensors. Not only that, but it also resulted in savings in both training time and resources, using about 50% of each compared with the standard methods Waymo was using previously.

[…]

To step back a little, let’s look at what PBT even is. Basically, it’s a method of training that takes its cues from how Darwinian evolution works. Neural nets essentially work by trying something and then measuring those results against some kind of standard to see if their attempt is more “right” or more “wrong” based on the desired outcome.

[…]

But all that comparative training requires a huge amount of resources, and sorting the good from the bad in terms of which are working out relies on either the gut feeling of individual engineers, or massive-scale search with a manual component involved where engineers “weed out” the worst performing neural nets to free up processing capabilities for better ones.

What DeepMind and Waymo did with this experiment was essentially automate that weeding, automatically killing off the “bad” training runs and replacing them with better-performing spin-offs of the best-in-class networks running the task. That’s where evolution comes in, since it’s in effect a process of artificial natural selection.
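For the curious, here is a toy sketch of that exploit-and-explore loop on a throwaway objective. It shows the general shape of PBT, not the Waymo/DeepMind implementation; in the real thing the surviving networks’ weights are copied along with the hyperparameters.

```python
# Toy sketch of the PBT exploit/explore loop described above, on a throwaway
# objective (tune a "learning rate" to maximise a made-up score). Not the
# Waymo/DeepMind implementation, just the general shape of the algorithm.
import random

def evaluate(lr):
    # Pretend score: best around lr = 0.1, with some noise
    return -abs(lr - 0.1) + random.gauss(0, 0.01)

population = [{"lr": random.uniform(0.001, 1.0)} for _ in range(10)]

for generation in range(20):
    scored = sorted(population, key=lambda m: evaluate(m["lr"]), reverse=True)
    survivors, losers = scored[: len(scored) // 2], scored[len(scored) // 2:]
    # Exploit: the worst members copy a surviving member's hyperparameters
    # (in real PBT the network weights are copied too)...
    # Explore: ...and then perturb them to keep searching.
    population = survivors + [
        {"lr": random.choice(survivors)["lr"] * random.choice([0.8, 1.2])}
        for _ in losers
    ]

best = max(population, key=lambda m: evaluate(m["lr"]))
print("Best learning rate found:", round(best["lr"], 3))
```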

Source: Waymo and DeepMind mimic evolution to develop a new, better way to train self-driving AI | TechCrunch

Wow, I hate when people actually write at you to read a sentence again (cut out for your mental wellness).

Cyberlaw wonks squint at NotPetya insurance smackdown: Should ‘war exclusion’ clauses apply to network hacks?

In June 2017, the notorious file-scrambling software nasty NotPetya caused global havoc that affected government agencies, power suppliers, healthcare providers and big biz.

The ransomware sought out vulnerabilities and used a modified version of the NSA’s leaked EternalBlue SMB exploit, generating one of the most financially costly cyber-attacks to date.

Among the victims was US food giant Mondelez – the parent firm of Oreo cookies and Cadbury’s chocolate – which is now suing insurance company Zurich American for denying a £76m claim (PDF) filed in October 2018, a year after the NotPetya attack. According to the firm, the malware rendered 1,700 of its servers and 24,000 of its laptops permanently dysfunctional.

In January, Zurich rejected the claim, simply referring to a single policy exclusion which does not cover “hostile or warlike action in time of peace or war” by “government or sovereign power; the military, naval, or air force; or agent or authority”.

Mondelez, meanwhile, suffered significant loss as the attack infiltrated the company – affecting laptops, the company network and logistics software. Zurich American claims the damage, as the result of an “act of war”, is therefore not covered by Mondelez’s policy, which states coverage applies to “all risks of physical loss or damage to electronic data, programs, or software, including loss or damage caused by the malicious introduction of a machine code or instruction.”

While war exclusions are common in insurance policies, the court papers themselves refer to the grounds as “unprecedented” in relation to “cyber incidents”.

Previous claims have only been based on conventional armed conflicts.

Zurich’s use of this sort of exclusion in a cybersecurity policy could be a game-changer, with the obvious question being: was NotPetya an act of war, or just another incidence of ransomware?

The UK, US and Ukrainian governments, for their part, blamed the attack on Russian, state-sponsored hackers, claiming it was the latest act in an ongoing feud between Russia and Ukraine.

[…]

The minds behind the Tallinn Manual – the international cyberwar rules of engagement – were divided as to whether the damage caused met the threshold of an armed attack. However, they noted there was a possibility that it could in rare circumstances.

Professor Michael Schmitt, director of the Tallinn Manual project, indicated (PDF) that it is reasonable to extend armed attacks to cyber-attacks. The International Committee of the Red Cross (ICRC) went further to enunciate that cyber operations that only disable certain objects are still qualified as an attack, despite no physical damage.

Source: Cyberlaw wonks squint at NotPetya insurance smackdown: Should ‘war exclusion’ clauses apply to network hacks? • The Register

Google to Pay only $13 Million for sniffing passwords and emails over your wifi using Street View cars between 2007 and 2010

After nearly a decade in court, Google has agreed to pay $13 million in a class-action lawsuit alleging its Street View program collected people’s private data over wifi from 2007 to 2010. In addition to the moolah, the settlement—filed Friday in San Francisco—also calls for Google to destroy all the collected data and teach people how to encrypt their wifi networks.

A quick refresher. Back when Google started deploying its little Street View cars around our neighborhoods, the company also ended up collecting about 600 GB of emails, passwords, and other payload data from unencrypted wifi networks in over 30 countries. In a 2010 blog, Google said the data collection was a “mistake” after a German data protection group asked to audit the data collected by the cars.

[…]

The basis for the class-action lawsuit was that Google was basically infringing on federal wiretapping laws. Google had argued in a separate case on the same issue, Joffe vs Google, that its “mistake” was legal, as unencrypted wifi is a form of radio communication and thereby readily accessible by the general public. The courts did not agree, and in 2013 ruled Google’s defense was bunk. And despite Google claiming the collection was a “mistake,” according to CNN, in this particular class-action lawsuit, investigators found that Google engineers created the software and embedded it into Street View cars intentionally.

[…]

If you thought Google would pay through the nose for this particular brand of evil, you’d be mistaken. The class action netted $13 million, with punitive payments only going to the original 22 plaintiffs—additional class members won’t get anything. The remaining money will then be distributed to eight data privacy and consumer protection organizations. Similarly, another case brought by 38 states over, yet again, the same issue, only netted a $7 million settlement.

Source: Google Set to Pay $13 Million in Street View Class-Action Suit

Big Tech faces broad U.S. Justice Department antitrust probe

The U.S. Justice Department said on Tuesday it was opening a broad investigation of major digital technology firms into whether they engage in anticompetitive practices, the strongest sign the Trump administration is stepping up its scrutiny of Big Tech.

The review will look into “whether and how market-leading online platforms have achieved market power and are engaging in practices that have reduced competition, stifled innovation, or otherwise harmed consumers,” the Justice Department said in a statement.

The Justice Department did not identify specific companies but said the review would consider concerns raised about “search, social media, and some retail services online” — an apparent reference to Alphabet Inc, Amazon.com Inc and Facebook Inc, and potentially Apple Inc.

[…]

Senator Richard Blumenthal, a Democrat, said the Justice Department “must now be bold and fearless in stopping Big Tech’s misuse of its monopolistic power. Too long absent and apathetic, enforcers now must prevent privacy abuse, anticompetitive tactics, innovation roadblocks, and other hallmarks of excessive market power.”

In June, Reuters reported the Trump administration was gearing up to investigate whether Amazon, Apple, Facebook and Alphabet’s Google misuse their massive market power, setting up what could be an unprecedented, wide-ranging probe of some of the world’s largest companies.

[…]

The Justice Department said the review “is to assess the competitive conditions in the online marketplace in an objective and fair-minded manner and to ensure Americans have access to free markets in which companies compete on the merits to provide services that users want.”

[…]

“There is growing consensus among venture capitalists and startups that there is a kill zone around Google, Amazon, Facebook and Apple that prevents new startups from entering the market with innovative products and services to challenge these incumbents,” said Representative David Cicilline, a Democrat who heads the subcommittee.

[…]

Senator Marsha Blackburn, a Republican, praised the investigation and said a Senate tech task force she chairs would be looking at how to “foster free markets and competition.”

Source: Big Tech faces broad U.S. Justice Department antitrust probe – Reuters

It’s good to hear that the arguments are not founded only on product pricing but are much wider-ranging and address what exactly makes a monopoly.

Tinder Bypasses Google Play, Revolt Against App Store “Fee” (30% monopolistic arm wrench)

Tinder joined a growing backlash against app store taxes by bypassing Google Play in a move that could shake up the billion-dollar industry dominated by Google and Apple Inc.

The online dating site launched a new default payment process that skips Google Play and forces users to enter their credit card details straight into Tinder’s app, according to new research by Macquarie analyst Ben Schachter. Once a user has entered their payment information, the app not only remembers it, but also removes the choice to swap back to Google Play for future purchases, he wrote.

“This is a huge difference,” Schachter said in an interview. “It’s an incredibly high-margin business for Google, bringing in billions of dollars,” he said.

The shares of Tinder’s parent company, Match Group Inc., spiked 5% when Schachter’s note was published on Thursday. Shares of Google parent Alphabet Inc. were little changed.

Apple and Google launched their app stores in 2008, and they soon grew into powerful marketplaces that matched the creations of millions of independent developers with billions of smartphone users. In exchange, the companies take as much as 30% of revenue. The app economy is expected to grow to $157 billion in 2022, according to App Annie projections.

As the market expands, a growing revolt has been gaining steam over the past year. Spotify Technology SA filed an antitrust complaint with the European Commission earlier this year, claiming the cut Apple takes amounts to a tax on competitors. Netflix Inc. has recently stopped letting Apple users subscribe via the App Store and Epic Games Inc. said last year it wouldn’t distribute Fortnite, one of the world’s most popular video games, through Google Play.

Source: Tinder (MTCH) Bypasses Google Play, Revolt Against App Store Fee – Bloomberg

Microsoft Bribes U.S. gov with $25 Million to End U.S. Probe Into Bribery Overseas

Microsoft Corp. agreed to pay $25 million to settle U.S. government investigations into alleged bribery by former employees in Hungary.

The software maker’s Hungarian subsidiary entered into a non-prosecution agreement with the U.S. Department of Justice and a cease-and-desist order with the Securities and Exchange Commission, Microsoft said in an email to employees from Chief Legal Officer Brad Smith that was posted Monday on the company’s website. The case concerned violations of the Foreign Corrupt Practices Act, according to an SEC filing.

The Justice Department concluded that between 2013 and June 2015 “a senior executive and some other employees at Microsoft Hungary participated in a scheme to inflate margins in the Microsoft sales channel, which were used to fund improper payments under the FCPA,” Smith wrote in the email.

Microsoft sold software to partners at a discount and the partners then resold the products to the Hungarian government at a higher price. The difference went to fund kickbacks to government officials, the Wall Street Journal reported in 2018. The company fired the employees involved, Smith noted.

[…]

The SEC noted that some Microsoft employees violated the law by engaging in unscrupulous sales practices in Saudi Arabia, Turkey and Thailand.

[…]

The U.S. uses the FCPA to police bribe-paying around the world, in what officials have said is an effort to even the playing field. Since 2005, the government has collected billions of dollars in fines from foreign companies and U.S. firms found to be in violation of the law.

Source: Microsoft Pays $25 Million to End U.S. Probe Into Bribery Overseas – Bloomberg

UK cops want years of data from victims’ phones for no real reason, and it is being misused

A report (PDF), released today by Big Brother Watch and eight other civil rights groups, has argued that complainants are being subjected to “suspicion-less, far-reaching digital interrogations when they report crimes to police”.

It added: “Our research shows that these digital interrogations have been used almost exclusively for complainants of rape and serious sexual offences so far. But since police chiefs formalised this new approach to victims’ data through a national policy in April 2019, they claim they can also be used for victims and witnesses of potentially any crime.”

The policy referred to relates to the Digital Processing Notices instituted by forces earlier this year, which victims of crime are asked to sign, allowing police to download large amounts of data, potentially spanning years, from their phones. You can see what one of the forms looks like here (PDF).

[…]

The form is 9 pages long and states ‘if you refused permission… it may not be possible for the investigation or prosecution to continue’. Someone in a vulnerable position is unlikely to feel that they have any real choice. This does not constitute informed consent either.

Rape cases dropped over cops’ demands for search

The report described how “Kent Police gave the entire contents of a victim’s phone to the alleged perpetrator’s solicitor, which was then handed to the defendant”. It also outlined a situation where a 12-year-old rape survivor’s phone was trawled, despite a confession from the perpetrator. The child’s case was delayed for months while the Crown Prosecution Service “insisted on an extensive digital review of his personal mobile phone data”.

Another case mentioned related to a complainant who reported being attacked by a group of strangers. “Despite being willing to hand over relevant information, police asked for seven years’ worth of phone data, and her case was then dropped after she refused.”

Yet another individual said police had demanded her mobile phone after she was raped by a stranger eight years ago, even after they had identified the attacker using DNA evidence.

Source: UK cops blasted over ‘disproportionate’ slurp of years of data from crime victims’ phones • The Register

Researchers Reveal That Anonymized Data Is Easy To Reverse Engineer

Researchers at Imperial College London published a paper in Nature Communications on Tuesday that explored how inadequate current techniques to anonymize datasets are. Before a company shares a dataset, they will remove identifying information such as names and email addresses, but the researchers were able to game this system.

Using a machine learning model and datasets that included up to 15 identifiable characteristics—such as age, gender, and marital status—the researchers were able to accurately reidentify 99.98 percent of Americans in an anonymized dataset, according to the study. For their analyses, the researchers used 210 different datasets gathered from five sources, including the U.S. government, featuring information on more than 11 million individuals. Specifically, the researchers define their findings as a successful effort to propose and validate “a statistical model to quantify the likelihood for a re-identification attempt to be successful, even if the disclosed dataset is heavily incomplete.”
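The underlying intuition is easy to reproduce on synthetic data: a handful of quasi-identifiers already pins down most records. The sketch below is not the paper’s statistical model, just a uniqueness count with pandas on made-up data.

```python
# Not the paper's statistical model, just the underlying intuition on synthetic
# data: a handful of quasi-identifiers already makes most records unique.
import random
import pandas as pd

random.seed(0)
n = 100_000
people = pd.DataFrame({
    "age": [random.randint(18, 90) for _ in range(n)],
    "gender": [random.choice(["F", "M"]) for _ in range(n)],
    "zip3": [random.randint(100, 999) for _ in range(n)],   # first 3 ZIP digits
    "marital": [random.choice(["single", "married", "divorced", "widowed"])
                for _ in range(n)],
    "children": [random.randint(0, 5) for _ in range(n)],
})

for cols in (["age", "gender"],
             ["age", "gender", "zip3"],
             ["age", "gender", "zip3", "marital", "children"]):
    sizes = people.groupby(cols).size()
    unique_records = sizes[sizes == 1].sum()   # people in singleton groups
    print(f"{cols}: {unique_records / n:.1%} of records are unique")
```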

[…] Even the hypothetical illustrated by the researchers in the study isn’t a distant fiction. In June of this year, a patient at the University of Chicago Medical Center filed a class-action lawsuit against both the private research university and Google for the former sharing his data with the latter without his consent. The medical center allegedly de-identified the dataset, but still gave Google records with the patient’s height, weight, vital signs, information on diseases they have, medical procedures they’ve undergone, medications they are on, and date stamps. The complaint pointed out that, aside from the breach of privacy in sharing intimate data without a patient’s consent, even if the data was in some way anonymized, the tools available to a powerful tech corporation make it pretty easy to reverse engineer that information and identify a patient.

“Companies and governments have downplayed the risk of re-identification by arguing that the datasets they sell are always incomplete,” de Montjoye said in a statement. “Our findings contradict this and demonstrate that an attacker could easily and accurately estimate the likelihood that the record they found belongs to the person they are looking for.”

Source: Researchers Reveal That Anonymized Data Is Easy To Reverse Engineer

IBM gives cancer-killing drug AI projects to the open source community

Researchers from IBM’s Computational Systems Biology group in Zurich are working on AI and machine learning (ML) approaches to “help to accelerate our understanding of the leading drivers and molecular mechanisms of these complex diseases,” as well as methods to improve our knowledge of tumor composition.

“Our goal is to deepen our understanding of cancer to equip industries and academia with the knowledge that could potentially one day help fuel new treatments and therapies,” IBM says.

The first project, dubbed PaccMann — not to be confused with the popular Pac-Man computer game — is described as the “Prediction of anticancer compound sensitivity with Multi-modal attention-based neural networks.”

[…]

The ML algorithm exploits data on gene expression as well as the molecular structures of chemical compounds. IBM says that by identifying potential anti-cancer compounds earlier, this can cut the costs associated with drug development.

[…]

The second project is called “Interaction Network infErence from vectoR representATions of words,” otherwise known as INtERAcT. This tool is a particularly interesting one given its automatic extraction of data from valuable scientific papers related to our understanding of cancer.

With roughly 17,000 papers published every year in the field of cancer research, it can be difficult — if not impossible — for researchers to keep up with every small step we make in our understanding.

[…]

INtERAcT aims to make the academic side of research less of a burden by automatically extracting information from these papers. At the moment, the tool is being tested on extracting data related to protein-protein interactions — an area of study which has been marked as a potential cause of the disruption of biological processes in diseases including cancer.

[…]

The third and final project is “pathway-induced multiple kernel learning,” or PIMKL. This algorithm utilizes datasets describing what we currently know when it comes to molecular interactions in order to predict the progression of cancer and potential relapses in patients.

PIMKL uses what is known as multiple kernel learning to identify molecular pathways crucial for categorizing patients, giving healthcare professionals an opportunity to individualize and tailor treatment plans.
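The multiple kernel learning idea itself can be sketched generically with scikit-learn: build one kernel per feature group (standing in for molecular pathways), combine them, and feed the result to a precomputed-kernel SVM. This is not IBM’s implementation, and the kernel weights here are fixed rather than learned as PIMKL does.

```python
# Generic multiple-kernel-learning sketch with scikit-learn: build one kernel
# per feature group (standing in for PIMKL's molecular pathways), combine them
# with fixed weights, and feed the result to a precomputed-kernel SVM.
# Not IBM's implementation; PIMKL learns the weights rather than fixing them.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
pathway_a, pathway_b = X[:, :10], X[:, 10:]   # two "pathway" feature groups

# Weighted sum of per-pathway kernels (weights fixed here for illustration)
K = 0.6 * rbf_kernel(pathway_a) + 0.4 * rbf_kernel(pathway_b)

clf = SVC(kernel="precomputed").fit(K, y)
print("Training accuracy:", clf.score(K, y))
```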

The code for PaccMann and INtERAcT has been released and is available on the projects’ websites. PIMKL has been deployed on the IBM Cloud and its source code has also been released.

Source: IBM gives cancer-killing drug AI project to the open source community | ZDNet

But now the big question: will they maintain it?