
Assange Charges Finally Reveal Why Chelsea Manning Is Sitting in Jail

Charges announced by the Justice Department on Thursday against WikiLeaks founder Julian Assange provide fresh insight into why federal prosecutors sought to question whistleblower Chelsea Manning last month before a federal grand jury in the Eastern District of Virginia.

Manning, convicted in 2013 of leaking classified U.S. government documents to WikiLeaks, was jailed in early March as a recalcitrant witness after refusing to answer the grand jury’s questions. After her arrest, she was held in solitary confinement in a Virginia jail for nearly a month before being moved into its general population—all in an attempt to coerce her into answering questions about conversations she allegedly had with Assange at the time of her illegal disclosures, according to court filings.

Though Manning confessed to leaking more than 725,000 classified documents to WikiLeaks following her deployment to Iraq in 2009—including battlefield reports and five Guantanamo Bay detainee profiles—she was charged with leaking portions of only a couple hundred documents, including dozens of diplomatic cables that have since been declassified.

British authorities on Thursday removed Assange from the Ecuadorian embassy in London, his home for nearly seven years, following Ecuador’s decision to rescind his asylum. The U.S. government has requested that he be extradited to the United States to face a federal charge of conspiracy to commit computer crimes.

Source: Assange Charges Finally Reveal Why Chelsea Manning Is Sitting in Jail

EU Tells Internet Archive That Much Of Its Site Is ‘Terrorist Content’, showing how it will censor the internet with no recourse

We’ve been trying to explain for the past few months just how absolutely insane the new EU Terrorist Content Regulation will be for the internet. Among many other bad provisions, the big one is that it would require content removal within one hour as long as any “competent authority” within the EU sends a notice of content being designated as “terrorist” content. The law is set for a vote in the EU Parliament just next week.

And as if they were attempting to show just how absolutely insane the law would be for the internet, multiple European agencies (we can debate if they’re “competent”) decided to send over 500 totally bogus takedown demands to the Internet Archive last week, claiming it was hosting terrorist propaganda content.

In the past week, the Internet Archive has received a series of email notices from Europol’s European Union Internet Referral Unit (EU IRU) falsely identifying hundreds of URLs on archive.org as “terrorist propaganda”. At least one of these mistaken URLs was also identified as terrorist content in a separate take down notice from the French government’s L’Office Central de Lutte contre la Criminalité liée aux Technologies de l’Information et de la Communication (OCLCTIC).

And just in case you think that maybe the requests are somehow legit, they are so obviously bogus that anyone with a browser would know they are bogus. Included in the list of takedown demands are a bunch of the Archive’s “collection pages”, including the entire Project Gutenberg page of public domain texts, its collection of over 15 million freely downloadable texts, the famed Prelinger Archive of public domain films and the Archive’s massive Grateful Dead collection. Oh yeah, also a page of CSPAN recordings. So much terrorist content!

And, as the Archive explains, there’s simply no way that (1) the site could have complied with the Terrorist Content Regulation had it been law last week when the notices arrived, or that (2) it should have been required to block all that obviously non-terrorist content.

The Internet Archive has a few staff members that process takedown notices from law enforcement who operate in the Pacific time zone. Most of the falsely identified URLs mentioned here (including the report from the French government) were sent to us in the middle of the night – between midnight and 3am Pacific – and all of the reports were sent outside of the business hours of the Internet Archive.

The one-hour requirement essentially means that we would need to take reported URLs down automatically and do our best to review them after the fact.

It would be bad enough if the mistaken URLs in these examples were for a set of relatively obscure items on our site, but the EU IRU’s lists include some of the most visited pages on archive.org and materials that obviously have high scholarly and research value.

Those are the requests from Europol, which unfortunately likely qualifies as a “competent” authority under the law. The Archive also points out requests from both Europol and the French computer crimes unit flagging a page providing commentary on the Quran as terrorist content. The French agency told the Archive it needed to take down that content within 24 hours or the Archive might be blocked in France.
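
The “one hour, no matter when the notice arrives” mechanic is worth spelling out: a small operator cannot staff that, so compliance in practice means automated removal with review afterwards. Below is a minimal sketch of what such a handler would have to look like (not the Archive’s actual systems; the authority name and URLs are placeholders):

    # Toy sketch of a "comply within one hour" takedown handler: every URL in an
    # incoming notice is blocked immediately and only queued for human review
    # afterwards. Not any real system; names and URLs are placeholders.
    from collections import deque
    from datetime import datetime, timezone

    blocklist = set()          # what visitors can no longer reach
    review_queue = deque()     # what staff will look at, eventually

    def handle_takedown_notice(authority, urls):
        received = datetime.now(timezone.utc)
        for url in urls:
            blocklist.add(url)  # removal happens first, with no human in the loop
            review_queue.append({"url": url, "authority": authority, "received": received})

    # A notice arriving at 3am still gets "handled" in time, by blocking everything.
    handle_takedown_notice("EU IRU", ["https://example.org/collection-a",
                                      "https://example.org/collection-b"])
    print(len(blocklist), "URLs blocked,", len(review_queue), "awaiting review")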

Source: EU Tells Internet Archive That Much Of Its Site Is ‘Terrorist Content’ | Techdirt

Serious flaws found in WPA3’s Wi-Fi handshake

Because WPA2 is more than 14 years old, the Wi-Fi Alliance recently announced the new and more secure WPA3 protocol. One of the main advantages of WPA3 is that, thanks to its underlying Dragonfly handshake, it’s near impossible to crack the password of a network. Unfortunately, we found that even with WPA3, an attacker within range of a victim can still recover the password of the network. This allows the adversary to steal sensitive information such as credit card numbers, passwords, emails, and so on, when the victim uses no extra layer of protection such as HTTPS. Fortunately, we expect that our work and coordination with the Wi-Fi Alliance will allow vendors to mitigate our attacks before WPA3 becomes widespread.

The Dragonfly handshake, which forms the core of WPA3, is also used on certain Wi-Fi networks that require a username and password for access control. That is, Dragonfly is also used in the EAP-pwd protocol. Unfortunately, our attacks against WPA3 also work against EAP-pwd, meaning an adversary can even recover a user’s password when EAP-pwd is used. We also discovered serious bugs in most products that implement EAP-pwd. These allow an adversary to impersonate any user, and thereby access the Wi-Fi network, without knowing the user’s password. Although we believe that EAP-pwd is used fairly infrequently, this still poses serious risks for many users, and illustrates the risks of incorrectly implementing Dragonfly.

The technical details behind our attacks against WPA3 can be found in our detailed research paper titled Dragonblood: A Security Analysis of WPA3’s SAE Handshake. The details of our EAP-pwd attacks are explained on this website.

[…]

The discovered flaws can be abused to recover the password of the Wi-Fi network, launch resource consumption attacks, and force devices into using weaker security groups. All attacks are against home networks (i.e. WPA3-Personal), where one password is shared among all users. Summarized, we found the following vulnerabilities in WPA3:

  • CERT ID #VU871675: Downgrade attack against WPA3-Transition mode leading to dictionary attacks.
  • CERT ID #VU871675: Security group downgrade attack against WPA3’s Dragonfly handshake.
  • CVE-2019-9494: Timing-based side-channel attack against WPA3’s Dragonfly handshake.
  • CVE-2019-9494: Cache-based side-channel attack against WPA3’s Dragonfly handshake.
  • CERT ID #VU871675: Resource consumption attack (i.e. denial of service) against WPA3’s Dragonfly handshake.

[…]

We have made scripts to test for certain vulnerabilities:

  • Dragonslayer: implements attacks against EAP-pwd (to be released shortly).
  • Dragondrain: this tool can be used to test to what extent an Access Point is vulnerable to denial-of-service attacks against WPA3’s SAE handshake.
  • Dragontime: this is an experimental tool to perform timing attacks against the SAE handshake if MODP group 22, 23, or 24 is used. Note that most WPA3 implementations by default do not enable these groups.
  • Dragonforce: this is an experimental tool which takes the information recovered from our timing or cache-based attacks, and performs a password partitioning attack. This is similar to a dictionary attack (see the sketch below).
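
As a rough illustration of what a password partitioning attack means in practice (this is not the Dragonblood tooling, just a toy sketch with an invented “leak” function): each side-channel observation reveals a small property derived from the password, and every dictionary candidate that would not reproduce that property can be discarded, until only a handful of candidates remain.

    # Toy password partitioning sketch: assume a side channel leaks a few bits
    # of a password-derived value per observed handshake. Each observation then
    # rules out all dictionary candidates inconsistent with it. Purely
    # illustrative; not how Dragonfly/SAE actually derives its elements.
    import hashlib

    def leaked_property(password, salt):
        """Stand-in for whatever the timing/cache side channel reveals."""
        digest = hashlib.sha256(salt + password.encode()).digest()
        return digest[0] & 0x07            # pretend only 3 bits leak per handshake

    dictionary = ["letmein", "hunter2", "correcthorse", "dragonfly", "s3cret!"]
    true_password = "dragonfly"            # unknown to the attacker in reality

    candidates = set(dictionary)
    for i in range(10):                    # each observed handshake shrinks the set
        salt = i.to_bytes(4, "big")        # per-handshake public value (toy)
        observed = leaked_property(true_password, salt)
        candidates = {p for p in candidates if leaked_property(p, salt) == observed}
        print("after observation", i + 1, "->", len(candidates), "candidate(s) left")

    print("remaining:", candidates)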

Source: Dragonblood: Analysing WPA3’s Dragonfly Handshake

A Team At Amazon Is Listening To Recordings Captured By Alexa

Seven people, described as having worked in Amazon’s voice review program, told Bloomberg that they sometimes listen to as many as 1,000 recordings per shift, and that the recordings are associated with the customer’s first name, their device’s serial number, and an account number. Among other clips, these employees and contractors said they’ve reviewed recordings of what seemed to be a woman singing in the shower, a child screaming, and a sexual assault. Sometimes, when recordings were difficult to understand — or when they were amusing — team members shared them in an internal chat room, according to Bloomberg.

In an emailed statement to BuzzFeed News, an Amazon spokesperson wrote that “an extremely small sample of Alexa voice recordings” is annotated, and reviewing the audio “helps us train our speech recognition and natural language understanding systems, so Alexa can better understand your requests, and ensure the service works well for everyone.”

[…]

Amazon’s privacy policy says that Alexa’s software provides a variety of data to the company (including your use of Alexa, your Alexa Interactions, and other Alexa-enabled products), but doesn’t explicitly state how employees themselves interact with the data.

Apple and Google, which make two other popular voice-enabled assistants, also employ humans who review audio commands spoken to their devices; both companies say that they anonymize the recordings and don’t associate them with customers’ accounts. Apple’s Siri sends a limited subset of encrypted, anonymous recordings to graders, who label the quality of Siri’s responses. The process is outlined on page 69 of the company’s security white paper. Google also saves and reviews anonymized audio snippets captured by Google Home or Assistant, and distorts the audio.

On an FAQ page, Amazon states that Alexa is not recording all your conversations. Amazon’s Echo smart speakers and the dozens of other Alexa-enabled devices are designed to capture and process audio, but only when a “wake word” — such as “Alexa,” “Amazon,” “Computer,” or “Echo” — is uttered. However, Alexa devices do occasionally capture audio inadvertently and send that audio to Amazon servers or respond to it with triggered actions. In May 2018, an Echo unintentionally sent audio recordings of a woman’s private conversation to one of her husband’s employees.

Source: A Team At Amazon Is Listening To Recordings Captured By Alexa

Does Google meet its users’ expectations around consumer privacy? This news industry research says no

While the ethics around data collection and consumer privacy have been questioned for years, it wasn’t until Facebook’s Cambridge Analytica scandal that people began to realize how frequently their personal data is shared, transferred, and monetized without their permission.

Cambridge Analytica was by no means an isolated case. Last summer, an AP investigation found that Google’s location tracking remains on even if you turn it off in Google Maps, Search, and other apps. Research from Vanderbilt professor Douglas Schmidt found that Google engages in “passive” data collection, often without the user’s knowledge. His research also showed that Google utilizes data collected from other sources to de-anonymize existing user data.

That’s why we at Digital Content Next, the trade association of online publishers I lead, wrote this Washington Post op-ed, “It isn’t just about Facebook, it’s about Google, too” when Facebook first faced Capitol Hill. It’s also why the descriptor surveillance advertising is increasingly being used to describe Google and Facebook’s advertising businesses, which use personal data to tailor and micro-target ads.

[…]

The results of the study are consistent with our Facebook study: People don’t want surveillance advertising. A majority of consumers indicated they don’t expect to be tracked across Google’s services, let alone be tracked across the web in order to make ads more targeted.

Do you expect Google to collect data about a person’s activities on Google platforms (e.g. Android and Chrome) and apps (e.g. Search, YouTube, Maps, Waze)?
Yes: 48%
No: 52%

Do you expect Google to track a person’s browsing across the web in order to make ads more targeted?
Yes: 43%
No: 57%

Nearly two out of three consumers don’t expect Google to track them across non-Google apps, offline activities from data brokers, or via their location history.

Do you expect Google to collect data about a person’s locations when a person is not using a Google platform or app?
Yes: 34%
No: 66%

Do you expect Google to track a person’s usage of non-Google apps in order to make ads more targeted?
Yes: 36%
No: 64%

Do you expect Google to buy personal information from data companies and merge it with a person’s online usage in order to make ads more targeted?
Yes: 33%
No: 67%

There was only one question where a majority of respondents felt that Google was acting according to their expectations: merging data from search queries with other data it collects on its own services. Respondents also said, though only by a small majority, that they don’t expect Google to connect that data back to the user’s personal account. Google began doing both of these in 2016, after previously promising it wouldn’t.

Do you expect Google to collect and merge data about a person’s search activities with activities on its other applications?
Yes: 57%
No: 43%

Do you expect Google to connect a variety of user data from Google apps, non-Google apps, and across the web with that user’s personal Google account?
Yes: 48%
No: 52%

Google’s personal data collection practices affect the more than 2 billion people who use devices running their Android operating software and hundreds of millions more iPhone users who rely on Google for browsing, maps, or search. Most of them expect Google to collect some data about them in exchange for use of services. However, as our research shows, a significant majority of consumers do not expect Google to track their activities across their lives, their locations, on other sites, and on other platforms. And as the AP discovered, Google continues to do some of this even after consumers explicitly turn off tracking.

Source: Does Google meet its users’ expectations around consumer privacy? This news industry research says no » Nieman Journalism Lab

Google and other tech giants are quietly buying up the most important part of the internet

In February, the company announced its intention to move forward with the development of the Curie cable, a new undersea line stretching from California to Chile. It will be the first private intercontinental cable ever built by a major non-telecom company.

And if you step back and just look at intracontinental cables, Google has fully financed a number of those already; it was one of the first companies to build a fully private submarine line.

Google isn’t alone. Historically, cables have been owned by groups of private companies — mostly telecom providers — but 2016 saw the start of a massive submarine cable boom, and this time, the buyers are content providers. Corporations like Facebook, Microsoft, and Amazon all seem to share Google’s aspirations for bottom-of-the-ocean dominance.

I’ve been watching this trend develop, being in the broadband space myself, and the recent movements are certainly concerning. Big tech’s ownership of the internet backbone will have far-reaching, yet familiar, implications. It’s the same old consumer tradeoff: more convenience for less control, and less privacy.

We’re reaching the next stage of internet maturity: one where only large, incumbent players can truly win in media.

[…]

If you want to measure the internet in miles, fiber-optic submarine cables are the place to start. These unassuming cables crisscross the ocean floor worldwide, carrying 95-99 percent of international data over bundles of fiber-optic cable strands the diameter of a garden hose. All told, there are more than 700,000 miles of submarine cables in use today.

[…]

Google will own 10,433 miles of submarine cables internationally when the Curie cable is completed later this year.

The total shoots up to 63,605 miles when you include cables it owns in consortium with Facebook, Microsoft, and Amazon.
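
Putting those figures side by side, and taking the roughly 700,000 miles of cable in use today as the denominator, gives a rough sense of scale (a back-of-the-envelope estimate, not an official figure):

    # Rough share of the global submarine-cable network attributable to Google,
    # using the mileage figures quoted in this article.
    google_solely_owned = 10_433        # miles, once the Curie cable is complete
    google_with_consortiums = 63_605    # miles, including consortium-owned cables
    global_total = 700_000              # miles of submarine cable in use worldwide

    print(f"solely owned: {google_solely_owned / global_total:.1%}")            # ~1.5%
    print(f"incl. consortiums: {google_with_consortiums / global_total:.1%}")   # ~9.1%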

Source: Google and other tech giants are quietly buying up the most important part of the internet | VentureBeat

Toyota to give royalty-free access to hybrid-vehicle patents

The pledge by one of the world’s biggest automakers to share its closely guarded patents, the second time it has opened up a technology, is aimed at driving industry uptake of hybrids and fending off the challenge of all-battery electric vehicles (EVs).

Toyota said it would grant licenses on nearly 24,000 patents on technologies used in its Prius, the world’s first mass-produced “green” car, and offer to supply competitors with components including motors, power converters and batteries used in its lower-emissions vehicles.

“We want to look beyond producing finished vehicles,” Toyota Executive Vice President Shigeki Terashi told reporters.

“We want to contribute to an increase in take up (of electric cars) by offering not just our technology but our existing parts and systems to other vehicle makers.”

The Nikkei Asian Review first reported Toyota’s plans to give royalty-free access to hybrid-vehicle patents.

Terashi said that the access excluded patents on its lithium-ion battery technology.

[…]

Toyota is also betting on hydrogen fuel cell vehicles (FCVs) as the ultimate zero-emissions vehicle, and as a result, has lagged many of its rivals in marketing all-battery EVs.

In 2015, it said it would allow access to its FCV-related patents through 2020.

Source: Toyota to give royalty-free access to hybrid-vehicle patents – Reuters

Unidentified satellites reveal the need for better space tracking

On the afternoon of December 3rd, 2018, a SpaceX Falcon 9 rocket took off from the southern coast of California, lofting the largest haul of individual satellites the vehicle had ever transported. At the time, it seemed like the mission was a slam dunk, with all 64 satellites deploying into space as designed.

But nearly four months later, more than a dozen satellites from the launch have yet to be identified in space. We know that they’re up there, and where they are, but it’s unclear which satellites belong to which satellite operator on the ground.

They are, truly, unidentified flying objects.

The launch, called the SSO-A SmallSat Express, sent those small satellites into orbit for various countries, commercial companies, schools, and research organizations. Currently, all of the satellites are being tracked by the US Air Force’s Space Surveillance Network — an array of telescopes and radars throughout the globe responsible for keeping tabs on as many objects in orbit as possible. Yet 19 of those satellites are still unidentified in the Air Force’s orbital catalog. Many of the satellite operators do not know which of these 19 probes are theirs exactly, and the Air Force can’t figure it out either.

[…]

Not knowing the exact location of a spacecraft is a major problem for operators. If they can’t communicate with their satellite, the company’s orbiting hardware becomes, essentially, space junk. It brings up liability and transparency concerns, too. If an unidentified satellite runs into something else in space, it’s hard to know who is to blame, making space less safe — and less understood — for everyone. That’s why analysts and space trackers say both technical and regulatory changes need to be made to our current tracking system so that we know who owns every satellite that’s speeding around the Earth. “The whole way we do things is just no longer up to the task,” Jonathan McDowell, an astrophysicist at Harvard and spaceflight tracker, tells The Verge.

How to identify a satellite

Until recently, figuring out a satellite’s identity has been relatively straightforward. The Air Force has satellites high above the Earth that detect the heat of rocket engines igniting on the ground, indicating when a vehicle has taken off. It’s a system that was originally put in place to locate the launch of a potential missile, but it’s also worked well for spotting rockets launching to orbit. And for most of spaceflight history, usually just one large satellite or spacecraft has gone up on a launch — simplifying the identification process.

“For more traditional launches, where there are fewer objects, it’s fairly simple to do,” Diana McKissock, the lead for space situational awareness sharing and spaceflight safety at the Air Force’s 18th Space Control Squadron, tells The Verge. As a result, the Air Force has maintained a robust catalog of more than 20,000 space objects in orbit, many of which have been identified.

But as rocket ride-shares have grown in popularity, the Air Force’s surveillance capabilities have sometimes struggled to identify every satellite that is deployed during a launch. One problem is that most of the spacecraft on board all look the same. Nearly 50 satellites on the SSO-A launch were modified CubeSats — a type of standardized satellite that’s roughly the size of a cereal box. That means they are all about the same size and have the same general boxy shape. Plus, these tiny satellites are often deployed relatively close together on ride-share launches, one right after the other. The result is a big swarm of nearly identical spacecraft that are difficult to tell apart from the ground below.

Operators often rely on tracking data from the Air Force to find their satellites, so if the military cannot tell a significant fraction of these CubeSats apart, the operators don’t know where to point their ground communication equipment to get in contact with their spacecraft.
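
A toy sketch of why that matching is hard (invented numbers; not how the 18th Space Control Squadron actually works): an unidentified tracked object is matched against each operator’s predicted position, and when several near-identical CubeSats deployed seconds apart all fall within the tracking uncertainty, the match stays ambiguous.

    # Match one tracked-but-unidentified object to operator-supplied predictions.
    # All positions and the uncertainty figure below are made up for illustration.
    import math

    TRACKING_UNCERTAINTY_KM = 5.0          # assumed position uncertainty

    predictions = {                        # predicted positions (km, Earth-centred x, y, z)
        "CUBESAT-A": (6871.0, 12.0, 3.0),
        "CUBESAT-B": (6871.5, 14.5, 2.0),  # deployed seconds after A, so very close
        "BIGSAT-1":  (6921.0, -250.0, 40.0),
    }
    observation = (6871.2, 13.0, 2.4)      # one object from the tracking catalog

    ranked = sorted((math.dist(observation, pos), name) for name, pos in predictions.items())
    ambiguous = [name for dist, name in ranked if dist <= TRACKING_UNCERTAINTY_KM]

    print("closest candidate:", ranked[0][1], f"({ranked[0][0]:.1f} km away)")
    print("candidates within tracking uncertainty:", ambiguous)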

It’s a bit of a Catch-22, though. The Air Force also relies on satellite operators to help identify their spacecraft. Before a launch, the Air Force collects information from satellite operators about the design of the spacecraft and where it’s going to go. The operators are also responsible for making sure that they have the proper equipment (in space and on the ground) to communicate with the satellite. “It’s really a cooperative, ongoing process that involves the satellite operators as much as it involves us here at the 18th, processing the data,” says McKissock.

The SSO-A launch isn’t the only example of mistaken satellite identity. Five satellites are still unidentified from an Electron launch that took place in December last year, which sent up 13 objects, according to McDowell. And in 2017, a Russian Soyuz rocket deployed a total of 72 satellites, but eight are still unknown, says McDowell. The SSO-A launch is perhaps the most egregious example of this ride-share problem, as nearly a third of the satellites are still missing in the Air Force’s catalog.

The Air Force says the launch posed a unique challenge. One difficulty had to do with the way the satellites were deployed, according to McKissock, who says it was hard to predict before the launch where each satellite was going to be. The SSO-A launch was organized by a company called Spaceflight Industries, which acts as a broker for operators — finding room for their satellites on upcoming rocket launches. Spaceflight bought this entire Falcon 9 rocket for the SSO-A launch, and created the device that deployed all of these satellites into orbit. One satellite tracker, T.S. Kelso, who operates a tracking site called CelesTrak, agreed with the Air Force, saying that Spaceflight’s deployment platform made it hard to predict each satellite’s exact position. “[Spaceflight] had no way to provide the type of data needed,” Kelso writes in an email to The Verge.

[…]

The Air Force’s 18th Space Control Squadron has other priorities to consider, too. While identifying spacecraft is something the team always hopes to accomplish on every flight, the main function of the 18th is to track as many objects as possible and then provide information on the possibility of spacecraft running into each other in orbit. The identification of satellites is secondary to that safety concern. “I wouldn’t say it’s not a priority, but we certainly have other mission requirements to consider,” says McKissock.

Source: Unidentified satellites reveal the need for better space tracking – The Verge

Carbon Engineering receives $68m from energy companies to turn CO2 from air into fuel

A technology that removes carbon dioxide from the air has received significant backing from major fossil fuel companies. British Columbia-based Carbon Engineering has shown that it can extract CO2 in a cost-effective way. It has now been boosted by $68m in new investment from Chevron, Occidental and coal giant BHP.

[…]

The quest for technology for carbon dioxide removal (CDR) from the air received significant scientific endorsement last year with the publication of the IPCC report on keeping the rise in global temperatures to 1.5C this century.

In their “summary for policymakers”, the scientists stated that: “All pathways that limit global warming to 1.5C with limited or no overshoot project the use of CDR …over the 21st century.”

Around the world, a number of companies are racing to develop the technology that can draw down carbon. Swiss company Climeworks is already capturing CO2 and using it to boost vegetable production.

Carbon Engineering says that its direct air capture (DAC) process is now able to capture the gas for under $100 a tonne.

With its new funding, the company plans to build its first commercial facilities. These industrial-scale DAC plants could capture up to one million tonnes of CO2 from the air each year.

So how does this system work?

CO2 is a powerful warming gas but there’s not a lot of it in the atmosphere – for every million molecules of air, there are 410 of CO2.

While the CO2 is helping to drive temperatures up around the world, the comparatively low concentrations make it difficult to design efficient machines to remove the gas.

Carbon Engineering’s process is all about sucking in air and exposing it to a chemical solution that concentrates the CO2. Further refinements mean the gas can be purified into a form that can be stored or utilised as a liquid fuel.

[…]

The captured CO2 is mixed with hydrogen that’s made from water and green electricity. It’s then passed over a catalyst at 900C to form carbon monoxide. Adding in more hydrogen to the carbon monoxide turns it into what’s called synthesis gas.

Finally a Fischer-Tropsch process turns this gas into a synthetic crude oil. Carbon Engineering says the liquid can be used in a variety of engines without modification.
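
For reference, a simplified sketch of the chemistry described above, with the hydrocarbon chain length n left general (the actual catalysts and process conditions are Carbon Engineering’s own):

    2 H2O → 2 H2 + O2                        (electrolysis, powered by renewable electricity)
    CO2 + H2 → CO + H2O                      (reverse water-gas shift over a catalyst at ~900C)
    n CO + (2n+1) H2 → CnH(2n+2) + n H2O     (Fischer-Tropsch synthesis into hydrocarbon chains)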

“The fuel that we make has no sulphur in it, it has these nice linear chains which means it burns cleaner than traditional fuel,” said Dr McCahill.

“It’s nice and clear and ready to be used in a truck, car or jet.”

[…]

CO2 can also be used to flush out the last remaining deposits of oil in wells that are past their prime. The oil industry in the US has been using the gas in this way for decades.

It’s estimated that using CO2 can deliver an extra 30% of crude from oilfields with the added benefit that the gas is then sequestered permanently in the ground.

“Carbon Engineering’s direct air capture technology has the unique capability to capture and provide large volumes of atmospheric CO2,” said Occidental Petroleum’s Senior Vice President, Richard Jackson, in a statement.

“This capability complements Occidental’s enhanced oil recovery business and provides further synergies by enabling large-scale CO2 utilisation and sequestration.”

One of the other investors in Carbon Engineering is BHP, best known for its coal mining interests.

“The reality is that fossil fuels will be around for several decades whether in industrial processes or in transportation,” said Dr Fiona Wild, BHP’s head of sustainability and climate change.

“What we need to do is invest in those low-emission technologies that can significantly reduce the emissions from these processes, and that’s why we are focusing on carbon capture and storage.”

Source: Climate change: ‘Magic bullet’ carbon solution takes big step – BBC News

Android TV update puts home-screen ads on multi-thousand-dollar Sony Smart TVs, you can’t get rid of them for long either

Google is trying out a new “Pilot Program” that puts a row of advertisements on the Android TV home screen. XDA Developers was the first to report on the new phenomenon, saying, “We’re currently seeing reports that it has shown up in Sony smart TVs, the Mi Box 3 from Xiaomi, NVIDIA Shield TV, and others.”

The advertising is a “Sponsored Channel” part of the “Android TV Core Services” app that ships with all Android TV devices. A “Channel” in Android TV parlance means an entire row of thumbnails in the UI will be dedicated to “sponsored” content. Google provided XDA Developers with a statement saying that yes, this is on purpose, but for now it’s a “pilot program.”

Android TV is committed to optimizing and personalizing the entertainment experience at home. As we explore new opportunities to engage the user community, we’re running a pilot program to surface sponsored content on the Android TV home screen.

Sony has a tersely worded support page detailing the “Sponsored channel,” too. There’s no mention here of it being a pilot program. Sony’s page, titled “A sponsored channel has suddenly appeared on my TV Home menu,” says, “This change is included in the latest Android TV Launcher app (Home app) update. The purpose is to help you discover new apps and contents for your TV.”

Sony goes on to say, “This channel is managed by Google” and “the Sponsored channel cannot be customized.” Sony basically could replace the entire page with a “Deal with it” sunglasses gif, and it would send the same message.

Buying a product knowing it has ads in it is one thing, but users on Reddit and elsewhere are understandably angry about ads suddenly being patched into their devices—especially in cases when these devices are multi-thousand-dollar 4K Sony televisions. There is an option to disable the ads if you dig into the settings, but users are reporting the ads aren’t staying disabled. For now, uninstalling updates for the “Android TV Core Services” app is the best way to remove the ads.

Remember, for now this is a “pilot program.” So please share your valuable feedback with Google in the comments.

Source: Android TV update puts home-screen ads on multi-thousand-dollar Sony Smart TVs | Ars Technica

Well the old adage still holds: Never buy Sony!

Marketplace Pulse study on Amazon products shows blistering sales figures, but titles the article: Far from successful.

Juozas Kaziukenas’ article “Amazon-Owned Brands Far From Successful” is based on a report he put together called “Amazon Private Label Brands“. The report is oddly disjointed, weaving statistics in and out, changing its metrics at random and finally arriving at a conclusion which is totally at variance with the content of the article. It’s impossible to see where the sales statistics come from, so they can’t be verified. Reviews – an unrelated metric – are used as a proxy for sales success wherever actual sales figures aren’t mentioned. Yet major news outlets, such as Bloomberg (Most Amazon Brands Are Duds, Not Disrupters, Study Finds), Business Insider (Most Amazon private labels aren’t flying off the shelves yet, but the company is taking huge steps to change that) and many more have apparently taken the conclusion of the article at face value, seemingly without reading the article itself, and are publishing this piece as some sort of evidence that Amazon’s monopoly position is not a problem.

In his analysis, he starts out saying that the top 10 most successful private label brands contribute 81% to total sales, at a value of $7.5 billion in 2018. He then arbitrarily removes 7 of these brands and puts total sales by private label brands at under $1 billion. For any retailer, this is a huge turnover. Oddly enough, the next figure presented is that total retail sales generated online by Amazon are $122.9 billion. A quick off-the-cuff guesstimate ($7.5 billion of $122.9 billion) puts the top 10 Amazon private label brands at around 6% of total online retail. Considering Amazon has 23,142 own-brand products, you would assume the total Amazon slice of the pie would be quite a bit larger than 6%.

Interestingly, Marketplace Pulse has a statistics page where Amazon international marketplace sales are shown to be a staggering $15.55 billion in Q3 2018 alone, with North American sales pegged at $34.35 billion in the same quarter. Focussing on the top 10 brands again seems to wilfully miss a huge amount of online retail revenue on marketplaces owned by Amazon.

Search is then stated to be the primary driver of purchases and some time is spent looking at click-through rates. How he got these figures is up in the air, but could it be that they were provided by Amazon? Is it possible that Amazon is, in fact, funding this analysis? While Mr Kaziukenas does at some point mention the related products feature, and briefly demonstrates its importance in product visibility, search results for specific terms are the metric he goes for here.

The study then quickly and embarrassingly shows that in the lower end of the price spectrum, price is a driving factor. This will return in the study when it is shown that products like batteries are indeed stealing customers from other manufacturers.

Product reviews are used as a rating factor for product success in the study. Reviews are an unrelated metric and the article notes that where batteries and cables are concerned, Amazon owns the market share even with a below average rating. Unfortunately, turnover, or any financial metric, is no longer used to measure product success once the study has passed the opening paragraphs.

A lot of time is spent on a few randomly selected products, which are neither cheaper nor better than the competition. He manages to quite unsurprisingly demonstrate that more expensive, lower quality Amazon products don’t do quite as well as cheaper, better quality non-Amazon alternative products. A 6-foot-long HDMI cable is used as an example to prove that cheaper Amazon products do better than the competition: “AmazonBasics 6 feet HDMI cable sells for $6.99 and is the number one best-seller HDMI cable on Amazon” (again, how he knows what the number one best-seller is, is a mystery to me).

Continuing on, the study shows that Amazon does copy products, and the contradictory statements start flying fast and hard. First, the quote is given: “In July, a similar stand appeared at about half the price. The brand: AmazonBasics. Since then, sales of the Rain Design original have slipped.” followed by the statement: “Today Rain Design’s laptop stand sells for $39.99 and seems to be outselling Amazon’s $19.99 copy.” I assume that the “seems to be outselling” part of this statement is based entirely on the review status and not on any actual sales data. Next the study claims that this product copying is “rare” and goes on to state “There is no basis to assume that copying products is part of the Amazon strategy.” This doesn’t ring very true next to the two examples on display – and surely many more examples can easily be found. Mr Kaziukenas states: “The story of Rain Design’s laptop stand is scary but doesn’t happen often.” Again, I would like to see where the metrics being used here come from, and the definition of “often”. It’s stated as though he has actual data on this, but chooses not to share it. I somehow doubt that Amazon would be happy to provide him with this data.

Now the study continues to say that having data on the competition is not useful, but specifies this as a vague “ability to utilize that data for brand building”, and then states that because Amazon isn’t the first choice in the upper price market, or established brand space, it’s not utilising this data very well. He then goes on to state that where brand is not important (the cheap product space, e.g. batteries), Amazon is the number one seller. Let us not forget that this supposedly failed brand building of products in the space beyond the top three products (as arbitrarily chosen by this study in the beginning) is netting sales of around $6.5 billion!

Now comes a pretty bizarre part, where an argument is put forward that if you search by specifying a brand name before the generic product name, Amazon products are not given an advantage, despite being shown in the related items. Yet if you put in a generic product name alone, Amazon products come forward and fill the screen, unless someone has sponsored the search term, as demonstrated by a page full of cheaper Amazon HDMI cables. This is somehow used as an argument that there is no advantage in Organic Search Results, an arbitrarily and very narrowly chosen term which has no relation to the rest of the article, in which at every turn it is clearly shown that Amazon uses its advantage to push its products. Left entirely by the wayside is the fact that different people are shown different search results, depending on a huge multitude of factors. What Mr Kaziukenas sees as results will not be the same as what other shoppers on the platform see, yet he presents his search results as the one single truth.

The conclusion of the piece states that Amazon’s private brand business (i.e. brands not labelled with the word “Amazon”) doesn’t do very well. The generic goods business (i.e. goods where potential customers have no reason to look for a specific brand name) is cast aside. Somehow the final thought is that Amazon therefore doesn’t want to be in the physical products business. The sheer scale of the sales numbers presented in the article, however, belies this statement. Amazon is making billions of dollars in the physical goods segment and is using its position to push out competitors – to say nothing of the magic arbitration system for goods and fraud on the marketplace, or the conflict of interest in being both a marketplace and a seller in that marketplace: but that’s another story, covered by other articles.

8/4/19 EDIT:

If it feels like your Amazon search results have been overwhelmed with promotions for their private-label brands, like Amazon Basics, Mama Bear or Daily Ritual, that may be changing. As lawmakers pay more attention to the most powerful tech companies, Amazon has begun quietly removing some of the more obvious promotions, including banner ads, for its private-label products, reports CNBC, which spoke to Amazon sellers and consultants.

Amazon’s aggressive marketing of its own private brands, with ads that often appear in search results above listings for competing items from third-party sellers, has raised antitrust concerns.

[…]

Amazon’s private brands quickly became a major threat to third-party sellers on its platform, increasing from about a dozen brands in 2016, when some of its products began taking the lead in key categories like batteries, speakers and baby wipes, to a current roster of more than 135 private label brands and 330 brands exclusive to Amazon, according to TJI Research.

While Amazon benefits from higher margins, cost-savings from a more efficient supply chain and new data, third-party sellers often suffer. For example, they may have to cut prices to stay competitive, and even lower prices may not be enough to attract customers away from Amazon’s promotions for its own items, which show up in many search results.

Other recent measures Amazon has taken to ward off antitrust scrutiny include reportedly getting rid of its price parity requirement for third-party sellers, which meant they were not allowed to sell the same products on other sites for lower prices.

Satellite plane-tracking goes global

The US firm Aireon says its new satellite surveillance network is now fully live and being trialled over the North Atlantic.

The system employs a constellation of 66 spacecraft, which monitor the situational messages pumped out by aircraft transponders.

These report a plane’s position, altitude, direction and speed every eight seconds.

The two big navigation management companies that marshal plane movements across the North Atlantic – UK Nats and Nav Canada – intend to use Aireon to transform their operations.

[…]

Increasing numbers of planes since the early 2000s have been fitted with Automatic Dependent Surveillance Broadcast (ADS-B) transponders. US and European regulators have mandated all aircraft carry this equipment as of next year.

ADS-B pushes out a bundle of information about an aircraft – from its identity to a GPS-determined altitude and ground speed. ADS-B was introduced to enhance surveillance and safety over land, but the messages can also be picked up by satellites.

Aireon has receivers riding piggyback on all 66 spacecraft of the Iridium sat-phone service provider. These sensors make it possible now to track planes even out over the ocean, beyond the visibility of radar – and ocean waters cover 70% of the globe.

[…]

In the North Atlantic, traditional in-line safe separation distances will eventually be reduced from 40 nautical miles (74km) down to as little as 14 nautical miles (25km). As a result, more aircraft will be able to use the most efficient tracks.
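
As a quick illustration of what those minima mean, here is a small sketch that checks the great-circle distance between two aircraft on the same track against the old 40 nm and new 14 nm figures (made-up positions and a simple spherical-Earth haversine calculation, not the procedures controllers actually use):

    # Great-circle separation between two aircraft, compared to the in-trail minima.
    import math

    EARTH_RADIUS_NM = 3440.065   # mean Earth radius in nautical miles

    def haversine_nm(lat1, lon1, lat2, lon2):
        """Distance in nautical miles between two lat/lon points given in degrees."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
        return 2 * EARTH_RADIUS_NM * math.asin(math.sqrt(a))

    # Two aircraft on roughly the same North Atlantic track (positions are invented).
    sep = haversine_nm(55.0, -30.0, 55.0, -29.5)
    print(f"separation: {sep:.1f} nm")
    print("meets old 40 nm minimum:", sep >= 40)
    print("meets new 14 nm minimum:", sep >= 14)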

[…]

“Eight out of 10 flights will now be able to fly without any kind of speed restriction compared with the far less efficient fixed-speed environment we previously had to operate within,” Mr Rolfe said. “These changes, made possible by Aireon, will generate net savings of $300 in fuel and two tonnes of carbon dioxide per flight.”

However, any carbon dividend is likely to be eaten into by the growth in traffic made possible by the introduction of space-based ADS-B. Today, there are over 500,000 aircraft movements across the North Atlantic each year. This is projected to increase to 800,000 by 2030.

Source: Satellite plane-tracking goes global – BBC News

Dutch medical patient files moved to Google Cloud – MPs want to know if US intelligence agencies can view them

Of course the US can look in, under the CLOUD Act, because Google is an American company. The files were moved without consent from the patients by Medical Research Data Management, a commercial company, because (they say) the hospitals have given permission. The hospitals, in turn, did not need to ask for patient permission, because patients are deemed to have given it by accepting the electronic patient filing system.

Another concern is the pseudo-anonymisation of the data. For a company like Google, it won’t be particularly hard to match the data to real people.
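
A minimal sketch of why that is (all names and values below are invented): if records are “pseudonymised” by hashing a quasi-identifier such as date of birth plus postcode, anyone who already holds a list of real people with those attributes can simply re-hash and match.

    # Linkage attack against hash-based pseudonyms. Purely illustrative data.
    import hashlib

    def pseudonym(date_of_birth, postcode):
        return hashlib.sha256(f"{date_of_birth}|{postcode}".encode()).hexdigest()

    # "Anonymised" medical records as they might sit in the cloud.
    records = [
        {"pid": pseudonym("1971-03-14", "1017 AB"), "diagnosis": "hypertension"},
        {"pid": pseudonym("1989-11-02", "3511 CD"), "diagnosis": "asthma"},
    ]

    # Side information an adversary (or a data-rich platform) already has.
    known_people = [
        {"name": "Jan Jansen",  "dob": "1971-03-14", "postcode": "1017 AB"},
        {"name": "Piet Peters", "dob": "1955-06-30", "postcode": "9711 EF"},
    ]

    lookup = {pseudonym(p["dob"], p["postcode"]): p["name"] for p in known_people}
    for rec in records:
        print(lookup.get(rec["pid"], "<not re-identified>"), "->", rec["diagnosis"])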

Source: Kamerleden eisen duidelijkheid over opslag patiëntgegevens bij Google (MPs demand clarity about the storage of patient data with Google) – Emerce

540 Million Facebook User Records Exposed Online, Plus Passwords, Comments, and More

Researchers at the cybersecurity firm UpGuard on Wednesday said they had discovered the existence of two datasets together containing the personal data of hundreds of millions of Facebook users. Both were left publicly accessible.

In a blog post, UpGuard connected one of the leaky databases to a Mexico-based media company called Cultura Colectiva. The data set reportedly contains over 146 GB of data, which amounts to over 540 million Facebook user records, including comments, likes, reactions, account names, Facebook user IDs, and more.

A second leak, UpGuard said, was connected to a Facebook-integrated app called “At the pool” and had exposed roughly 22,000 passwords. “The passwords are presumably for the ‘At the Pool’ app rather than for the user’s Facebook account, but would put users at risk who have reused the same password across accounts,” the firm said. The database also contained data on users’ friends, likes, groups, and locations where they had checked in, said UpGuard.

Both datasets were stored in unsecured Amazon S3 buckets and could be accessed by virtually anyone. Neither was password protected. The buckets have since been secured or taken offline.
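
For bucket owners, the defensive side of this is straightforward: turn on S3’s Block Public Access settings so the bucket cannot be made publicly readable in the first place. A minimal boto3 sketch (the bucket name is a placeholder and the caller needs the appropriate AWS permissions):

    # Enable and verify S3 Block Public Access on a bucket.
    import boto3

    s3 = boto3.client("s3")
    s3.put_public_access_block(
        Bucket="my-app-data",                      # placeholder bucket name
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    print(s3.get_public_access_block(Bucket="my-app-data")["PublicAccessBlockConfiguration"])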

Source: 540 Million Facebook User Records Exposed Online, Plus Passwords, Comments, and More

A patchy Apache a-patchin: HTTP server gets fix for worrying root access hole

Apache HTTP Server has been given a patch to address a potentially serious elevation of privilege vulnerability.

Designated CVE-2019-0211, the flaw allows a “worker” process to change its privileges when the host server resets itself, potentially allowing anyone with a local account to run commands with root clearance, essentially giving them complete control over the targeted machine.

The bug was discovered by researcher Charles Fol of security shop Ambionics, who privately reported the issue to Apache. Admins can get the vulnerability sealed up by making sure their servers are updated to version 2.4.39 or later.
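
A quick way for admins to check whether a box is already on a patched build is to parse the version the local binary reports (a small sketch; adjust the binary name, e.g. httpd instead of apachectl, for your distribution):

    # Check the locally installed Apache version against the patched 2.4.39 release.
    import re
    import subprocess

    def apache_version(binary="apachectl"):
        out = subprocess.run([binary, "-v"], capture_output=True, text=True).stdout
        match = re.search(r"Apache/(\d+)\.(\d+)\.(\d+)", out)
        if not match:
            raise RuntimeError(f"could not parse version from: {out!r}")
        return tuple(int(x) for x in match.groups())

    version = apache_version()
    patched = version >= (2, 4, 39)
    print("Apache", ".".join(map(str, version)),
          "- patched" if patched else "- vulnerable to CVE-2019-0211, update now")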

While elevation of privilege vulnerabilities are not generally considered particularly serious bugs (after all, you need to already be running code on the target machine, which is in and of itself a security compromise), the nature of Apache HTTP Server as a shared hosting platform means that this bug will almost always be exposed to some extent.

Fol told The Register that as HTTP servers are used for web hosting, multiple users will be given guest accounts on each machine. In the wild, this means the attacker could simply sign up for an account to have their site hosted on the target server.

“The web hoster has total access to the server through the ‘root’ account,” Fol explained.

“If one of the users successfully exploits the vulnerability I reported, he/she will get full access to the server, just like the web hoster. This implies read/write/delete any file/database of the other clients.”

Source: A patchy Apache a-patchin: HTTP server gets fix for worrying root access hole • The Register

Linux Mint 19.2 ‘Tina’ is on the way, but the developers seem defeated and depressed

I have been a bit critical of Linux Mint in the past, but the truth is, it is a great distribution that many people enjoy. While Mint is not my favorite desktop distro (that would be Fedora), I recognize its quality. Is it perfect? No, there is no such thing as a flawless Linux-based operating system.

Today should be happy times for the Linux Mint community, as we finally learn some new details about the upcoming version 19.2! It will be based on Ubuntu 18.04 and once again feature three desktop environments — Xfce, Mate, and Cinnamon. We even found out the code name for Linux Mint 19.2 — “Tina.” And yet, it is hard to celebrate. Why? Because the developers seem to be depressed and defeated. They even appear to be a bit disenchanted with Free Software development overall.

Clement Lefebvre, leader of the Linux Mint project, shared a very lengthy blog post today, and it really made me sad.

[…]

I can show them 500 people donated money last month, I can forward emails to the team where people tell me how much they love Linux Mint, I can tell them they’re making a difference but there’s nothing like interacting directly with a happy user, seeing first-hand somebody be delighted with what you worked on. How our community interacts with our developers is key, to their work, to their happiness and to their motivation.

Clem quite literally says he is not enjoying the Linux Mint development nowadays, which really breaks my heart.

[…]

I also have a life outside open source work, too. It’s not mentally sound to put the hours I’ve put into the compositor. I was only able to do what I could because I was unemployed in January. Now I’m working a job full time, and trying to keep up with bug fixes. I’ve been spending every night and weekend, basically every spare moment of my free time trying to fix things.

[…]

To make things even worse, Hicks is apparently embarrassed by the official Linux Mint blog post! Another Reddit member named tuxkrusader responds to Hicks by saying “I’m slightly concerned that you’re not a member of the linuxmint group on github anymore. I hope you’re not on bad terms with the project.” Hicks shockingly responds by saying “Nope, I hid my project affiliation because that blog post makes me look bad.”

Wow. Hiding his affiliation with the Linux Mint project on GitHub?  It seems things may be worse than I originally thought…

Source: Linux Mint 19.2 ‘Tina’ is on the way, but the developers seem defeated and depressed

Facebook Is Just Casually Asking Some New Users for Their Email Passwords [note – never give out your email password!!!!]

Facebook has been prompting some users registering for the first time to hand over the passwords to their email accounts, the Daily Beast reported on Tuesday—a practice that blares right past questionable and into “beyond sketchy” territory, security consultant Jake Williams told the Beast.

A Twitter account using the handle @originalesushi first posted an image of the screen several days ago, in which new users are told they can confirm their third-party email addresses “automatically” by giving Facebook their login credentials. The Beast wrote that the prompt appeared to trigger under circumstances where Facebook might think a sign-up attempt is “suspicious,” and confirmed it on their end by “using a disposable webmail address and connecting through a VPN in Romania.”

It is never, ever advisable for a user to give out their email password to anyone, except possibly to a 100 percent verified account administrator when no other option exists (which there should be). Email accounts tend to be primary gateways into the rest of the web, because a valid one is usually necessary to register accounts on everything from banks and financial institutions to social media accounts and porn sites. They obviously also contain copies of every un-deleted message ever sent to or from that address, as well as additional information like contact lists. It is for this reason that email password requests are one of the most obvious hallmarks of a phishing scam.

“That’s beyond sketchy,” Williams told the Beast. “They should not be taking your password or handling your password in the background. If that’s what’s required to sign up with Facebook, you’re better off not being on Facebook.”

“This is basically indistinguishable to a phishing attack,” Electronic Frontier Foundation security researcher Bennett Cyphers told Business Insider. “This is bad on so many levels. It’s an absurd overreach by Facebook and a sleazy attempt to trick people to upload data about their contacts to Facebook as the price of signing up… No company should ever be asking people for credentials like this, and you shouldn’t trust anyone that does.”

A Facebook spokesperson confirmed in a statement to Gizmodo that this screen appears for some users signing up for the first time, though the company wrote, “These passwords are not stored by Facebook.” It additionally characterized the number of users it asks for email passwords as “very small.” Those presented with the screen were signing up on desktop while using email addresses that did not support OAuth—an open standard for allowing third parties authenticated access to assets (such as for the purpose of verifying identities) without sharing login credentials. OAuth is typically a standard feature of major email providers.
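
For contrast, here is a minimal sketch of the first leg of an OAuth 2.0 authorization-code flow, which is how email verification normally avoids password sharing: the user signs in at their mail provider directly, and the third-party site only ever receives a short-lived code to exchange for a token. The endpoint URL and client ID below are placeholders, not any real provider’s values.

    # Build the authorization URL for an OAuth 2.0 authorization-code flow.
    # The provider URL, client ID and redirect URI are placeholders for illustration.
    from urllib.parse import urlencode

    provider_authorize_url = "https://mail.example.com/oauth2/authorize"
    params = {
        "response_type": "code",
        "client_id": "example-client-id",
        "redirect_uri": "https://thirdparty.example/callback",
        "scope": "email",                       # only prove ownership of the address
        "state": "random-anti-csrf-value",
    }

    # The user is sent to this URL, authenticates with the provider, and the
    # provider redirects back to redirect_uri with ?code=...&state=... attached.
    # The email password never passes through the third-party site.
    print(provider_authorize_url + "?" + urlencode(params))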

Facebook noted in the statement that those users presented with this screen could opt out of sharing passwords and use another verification method such as email or phone. The company also said it would be ending the practice of asking for email passwords.

Source: Facebook Is Just Casually Asking Some New Users for Their Email Passwords

This beggars belief!

DOJ Warns Academy Over Proposed Oscar Rule Changes that exclude Netflix and other streamers

The Justice Department has warned the Academy of Motion Picture Arts and Sciences that its potential rule changes limiting the eligibility of Netflix and other streaming services for the Oscars could raise antitrust concerns and violate competition law.

According to a letter obtained by Variety, the chief of the DOJ’s Antitrust Division, Makan Delrahim, wrote to AMPAS CEO Dawn Hudson on March 21 to express concerns that new rules would be written “in a way that tends to suppress competition.”

“In the event that the Academy — an association that includes multiple competitors in its membership — establishes certain eligibility requirements for the Oscars that eliminate competition without procompetitive justification, such conduct may raise antitrust concerns,” Delrahim wrote.

The letter came in response to reports that Steven Spielberg, an Academy board member, was planning to push for rules changes to Oscars eligibility, restricting movies that debut on Netflix and other streaming services around the same time that they show in theaters. Netflix made a big splash at the Oscars this year, as the movie “Roma” won best director, best foreign language film and best cinematography.

[…]

Spielberg’s concerns over the eligibility of movies on streaming platforms have triggered intense debate in the industry. Netflix responded on Twitter early last month with the statement, “We love cinema. Here are some things we also love. Access for people who can’t always afford, or live in towns without, theaters. Letting everyone, everywhere enjoy releases at the same time. Giving filmmakers more ways to share art. These things are not mutually exclusive.”

Spielberg told ITV News last year that Netflix and other streaming platforms have boosted the quality of television, but “once you commit to a television format, you’re a TV movie. … If it’s a good show—deserve an Emmy, but not an Oscar.”

Source: DOJ Warns Academy Over Proposed Oscar Rule Changes – Variety

India’s Anti-Satellite Test Could Threaten the International Space Station

Last week, Indian Prime Minister Narendra Modi said the country’s space agency had tested a new anti-satellite weapon by destroying a satellite already in orbit. Now, an announcement by NASA Administrator Jim Bridenstine claims that India’s test could endanger other satellites and objects in orbit—including the International Space Station.

India launched a missile at a satellite believed to be the Indian spy satellite Microsat-R, launched a few months ago. The blowup created a field of satellite debris at that altitude. That debris is a problem because some of it has been thrown up to the same altitude as the ISS. In a worst-case scenario, some of that debris could impact the station, creating a Gravity-esque scenario. Some of those pieces are too small for NASA to track, meaning we’ll have no way of predicting an impact beforehand.

“What we are tracking right now, objects big enough to track — we’re talking about 10 cm (4 inches) or bigger —about 60 pieces have been tracked,” Bridenstine said in an announcement on Monday.

India deliberately targeted a satellite that orbited at a lower altitude than the ISS to prevent this sort of situation, but some of the debris appears to have reached higher. Of those 60 debris objects tracked by NASA, Bridenstine says 24 of them are at the same altitude as the ISS or higher.

The nature of low Earth orbit means that even debris pieces residing above the ISS could still pose a threat. Satellites and debris are gradually slowed by the very thin atmosphere that resides there. The ISS, for instance, routinely has to fire its boosters to increase its altitude to counteract atmospheric drag.

Those small debris pieces will lose altitude over time and eventually burn up in the atmosphere, but the high-altitude debris will have to come in range of the ISS before that happens. That means an impact could happen even a few months from now as high-altitude debris continues to fall.

Source: India’s Anti-Satellite Test Could Threaten the International Space Station

The head of the United States’ National Aeronautics and Space Administration (NASA), Jim Bridenstine, on Tuesday branded India’s destruction of one of its satellites a “terrible thing” that had created 400 pieces of orbital debris and led to new dangers for astronauts aboard the International Space Station (ISS).

Mr. Bridenstine was addressing NASA employees five days after India shot down a low-orbiting satellite in a missile test to prove it was among the world’s advanced space powers.

Not all of the pieces were big enough to track, Mr. Bridenstine explained. “What we are tracking right now, objects big enough to track — we’re talking about 10 cm [4 inches] or bigger — about 60 pieces have been tracked.”

The Indian satellite was destroyed at a relatively low altitude of 300 km, well below the ISS and most satellites in orbit.

But 24 of the pieces “are going above the apogee of the ISS,” said Mr. Bridenstine.

“That is a terrible, terrible thing to create an event that sends debris at an apogee that goes above the International Space Station. That kind of activity is not compatible with the future of human spaceflight. It’s unacceptable and NASA needs to be very clear about what its impact to us is,” he said.

But the risk will dissipate over time as much of the debris will burn up as it enters the atmosphere.

The U.S. military tracks objects in space to predict the collision risk of the ISS and satellites.

They are currently tracking 23,000 objects larger than 10 cm.

Chinese test created 3,000 pieces of debris

That includes about 10,000 pieces of space debris, of which nearly 3,000 were created by a single event: a Chinese anti-satellite test in 2007 at 530 miles from the surface.

As a result of the Indian test, the risk of collision with the ISS has increased by 44 percent over 10 days, Mr. Bridenstine said.

https://www.thehindu.com/sci-tech/technology/indias-asat-missile-test-created-400-pieces-of-debris-endangering-iss-nasa/article26708817.ece

Soon after the ASAT test, India said it had been carried out in the lower atmosphere to ensure that there would be no space debris: “Whatever debris that is generated will decay and fall back onto the earth within weeks.”

By conducting the test, the Ministry of External Affairs in New Delhi said, India was not in violation of any international law or treaty to which it is a party, or of any national obligation.

Interestingly, Bridenstine is the first top official from the Trump administration to come out in public against India’s ASAT test.

A day after India successfully carried out its ASAT test, acting US defence secretary Patrick Shanahan warned that the event could create a “mess” in space but said Washington was still studying the impact.

Bridenstine said NASA is “learning more and more every hour” that goes by about the orbital debris field created by the anti-satellite test.

“Where we were last week with an assessment that comes from NASA experts as well as the Joint Space Operations Center (part of US Strategic Command) … is that the risk to the International Space Station has increased by 44 per cent,” Bridenstine said.

“We are charged with commercialising of low earth orbit. We are charged with enabling more activities in space than we’ve ever seen before for the purpose of benefiting the human condition, whether it’s pharmaceuticals or printing human organs in 3D to save lives here on earth or manufacturing capabilities in space that you’re not able to do in a gravity well,” he said.

“All of those are placed at risk when these kinds of events happen,” Bridenstine said, adding that he feared India’s ASAT test could prompt other countries to carry out similar activities.

“When one country does it, other countries feel like they have to do it as well,” he said.

“It’s unacceptable. NASA needs to be very clear about what its impact to us is,” he said.

Risk gone up 44% over 10 days

The risk to the ISS from small debris generated by the ASAT test went up 44 per cent over a period of 10 days. “So, the good thing is it’s low enough in earth orbit that over time this will all dissipate,” he told his NASA colleagues.

The ISS is a habitable artificial satellite, orbiting the Earth at an altitude of between 330 and 435 km. It is a joint project between the space agencies of the US, Russia, Japan, Europe and Canada, and serves as a research laboratory for scientists to conduct space experiments.

As many as 236 astronauts from 18 countries have visited the space station, many of them multiple times, since November 2000.

Bridenstine said a lot of debris from the 2007 direct-ascent anti-satellite test by China is still in space.

“And we’re still dealing with it. We are still, we as a nation are responsible for doing space situational awareness and space traffic management, conjunction analysis for the entire world,” the NASA chief said.

“The International Space Station is still safe. If we need to manoeuvre it, we will. The probability of that I think is low. But at the end of the day we have to be clear also that these activities are not sustainable or compatible with human spaceflight,” he said.

https://www.thehindubusinessline.com/news/science/indias-shooting-down-of-satellite-created-400-pieces-of-debris-put-iss-at-risk-nasa/article26709952.ece

Former NSA spies hacked BBC host, Al Jazeera chairman for UAE

A group of American hackers who once worked for U.S. intelligence agencies helped the United Arab Emirates spy on a BBC host, the chairman of Al Jazeera and other prominent Arab media figures during a tense 2017 confrontation pitting the UAE and its allies against the Gulf state of Qatar.

The American operatives worked for Project Raven, a secret Emirati intelligence program that spied on dissidents, militants and political opponents of the UAE monarchy. A Reuters investigation in January revealed Project Raven’s existence and inner workings, including the fact that it surveilled a British activist and several unnamed U.S. journalists.

The Raven operatives — who included at least nine former employees of the U.S. National Security Agency and the U.S. military — found themselves thrust into the thick of a high-stakes dispute among America’s Gulf allies. The Americans’ role in the UAE-Qatar imbroglio highlights how former U.S. intelligence officials have become key players in the cyber wars of other nations, with little oversight from Washington.

[…]

Dana Shell Smith, the former U.S. ambassador to Qatar, said she found it alarming that American intelligence veterans were able to work for another government in targeting an American ally. She said Washington should better supervise U.S. government-trained hackers after they leave the intelligence community.

“Folks with these skill sets should not be able to knowingly or unknowingly undermine U.S. interests or contradict U.S. values,” Smith told Reuters.

Source: Former NSA spies hacked BBC host, Al Jazeera chairman for UAE

Wait, so once you are trained for something by the US government, basically you have entered into indentured servitude? You may only work for whoever the US decides you may work for, ever after? Or… what, they assassinate you?

D.E.A. Secretly Collected Bulk Records of Money-Counter Purchases

WASHINGTON — The Drug Enforcement Administration secretly collected data in bulk about Americans’ purchases of money-counting machines — and took steps to hide the effort from defendants and courts — before quietly shuttering the program in 2013 amid the uproar over the disclosures by the National Security Agency contractor Edward Snowden, an inspector general report found.

Seeking leads about who might be a drug trafficker, the D.E.A. started in 2008 to issue blanket administrative subpoenas to vendors to learn who was buying money counters. The subpoenas involved no court oversight and were not pegged to any particular investigation. The agency collected tens of thousands of records showing the names and addresses of people who bought the devices.

The public version of the report, which portrayed the program as legally questionable, blacked out the device whose purchase the D.E.A. had tracked. But in a slip-up, the report contained one uncensored reference in a section about how D.E.A. policy called for withholding from official case files the fact that agents first learned the names of suspects from its database of money-counter purchases.

[…]

The report cited field offices’ complaints that the program had wasted time with a high volume of low-quality leads, resulting in agents scrutinizing people “without any connection to illicit activity.” But the D.E.A. eventually refined its analysis to produce fewer but higher-quality leads, and the agency said the program had led to arrests and seizures of drugs, guns, cars and illicit cash.

The idea for the nationwide program originated in a D.E.A. operation in Chicago, when a subpoena for three months of purchase records from a local store led to two arrests and “significant seizures of drugs and related proceeds,” it said.

But Sarah St. Vincent, a Human Rights Watch researcher who flagged the slip-up on Twitter, argued that it was an abuse to suck Americans’ names into a database that would be analyzed to identify criminal suspects, based solely upon their purchase of a lawful product.

[…]

In the spring of 2013, the report said, the D.E.A. submitted its database to a joint operations hub where law enforcement agencies working together on organized crime and drug enforcement could mine it. But F.B.I. agents questioned whether the data had been lawfully acquired, and the bureau banned its officials from gaining access to it.

The F.B.I. agents “explained that running all of these names, which had been collected without foundation, through a massive government database and producing comprehensive intelligence products on any ‘hits,’ which included detailed information on family members and pictures, ‘didn’t sit right,’” the report said.

Source: D.E.A. Secretly Collected Bulk Records of Money-Counter Purchases

Bezos’ Investigator Gavin de Becker Finds the Saudis Obtained the Amazon Chief’s Private Data (for the dick pic extortion thing a few weeks ago)

In January, the National Enquirer published a special edition that revealed an intimate relationship Bezos was having. He asked me to learn who provided his private texts to the Enquirer, and why. My office quickly identified the person whom the Enquirer had paid as a source: a man named Michael Sanchez, the now-estranged brother of Lauren Sanchez, whom Bezos was dating. What was unusual, very unusual, was how hard AMI people worked to publicly reveal their source’s identity. First through strong hints they gave to me, and later through direct statements, AMI practically pinned a “kick me” sign on Michael Sanchez.

“It was not the White House, it was not Saudi Arabia,” a company lawyer said on national television, before telling us more: “It was a person that was known to both Bezos and Ms. Sanchez.” In case even more was needed, he added, “Any investigator that was going to investigate this knew who the source was,” a very helpful hint since the name of who was being investigated had been made public 10 days earlier in a Daily Beast report.

Much was made about a recent front-page story in the Wall Street Journal, fingering Michael Sanchez as the Enquirer’s source—but that information was first published almost seven weeks ago by The Daily Beast, after “multiple sources inside AMI” told The Daily Beast the exact same thing. The actual news in the Journal article was that its reporters were able to confirm a claim Michael Sanchez had been making: It was the Enquirer who first contacted Michael Sanchez about the affair, not the other way around.

AMI has repeatedly insisted they had only one source on their Bezos story, but the Journal reports that when the Enquirer began conversations with Michael Sanchez, they had “already been investigating whether Mr. Bezos and Ms. Sanchez were having an affair.” Michael Sanchez has since confirmed to Page Six that when the Enquirer contacted him back in July, they had already “seen text exchanges” between the couple. If accurate, the WSJ and Page Six stories would mean, clearly and obviously, that the initial information came from other channels—another source or method.

[On Sunday, AMI issued a statement insisting that “it was Michael Sanchez who tipped the National Enquirer off to the affair on Sept. 10, 2018, and over the course of four months provided all of the materials for our investigation.” Read the full statement here. — ed.]

“Bezos directed me to ‘spend whatever is needed’ to learn who may have been complicit in the scheme, and why they did it. That investigation is now complete.”

Reality is complicated, and can’t always be boiled down to a simple narrative like “the brother did it,” even when that brother is a person who certainly supplied some information to a supermarket tabloid, and even when that brother is an associate of Roger Stone and Carter Page. Though interesting, it turns out those truths are also too simple.

Why did AMI’s people work so hard to identify a source, and insist to the New York Times and others that he was their sole source for everything?

My best answer is contained in what happened next: AMI threatened to publish embarrassing photos of Jeff Bezos unless certain conditions were met. (These were photos that, for some reason, they had held back and not published in their first story on the Bezos affair, or any subsequent story.) While a brief summary of those terms has been made public before, others that I’m sharing are new—and they reveal a great deal about what was motivating AMI.

An eight-page contract AMI sent for me and Bezos to sign would have required that I make a public statement, composed by them and then widely disseminated, saying that my investigation had concluded they hadn’t relied upon “any form of electronic eavesdropping or hacking in their news-gathering process.”

Note here that I’d never publicly said anything about electronic eavesdropping or hacking—and they wanted to be sure I couldn’t.

They also wanted me to say our investigation had concluded that their Bezos story was not “instigated, dictated or influenced in any manner by external forces, political or otherwise.” External forces? Such a strange phrase. AMI knew these statements did not reflect my conclusions, because I told AMI’s Chief Content Officer Dylan Howard (in a 90-minute recorded phone call) that what they were asking me to say about external forces and hacking “is not my truth,” and would be “just echoing what you are looking for.”

(Indeed, an earlier set of their proposed terms included AMI making a statement “affirming that it undertook no electronic eavesdropping in connection with its reporting and has no knowledge of such conduct”—but now they wanted me to say that for them.)

The contract further held that if Bezos or I were ever in our lives to “state, suggest or allude to” anything contrary to what AMI wanted said about electronic eavesdropping and hacking, then they could publish the embarrassing photos.

I’m writing this today because it’s exactly what the Enquirer scheme was intended to prevent me from doing. Their contract also contained terms that would have inhibited both me and Bezos from initiating a report to law enforcement.

Things didn’t work out as they hoped.

When the terms for avoiding publication of personal photos were presented to Jeff Bezos, he responded immediately: “No thank you.” Within hours, he wrote an essay describing his reasons for rejecting AMI’s threatening proposal. Then he posted it all on Medium, including AMI’s actual emails and their salacious descriptions of private photos. (After the Medium post, AMI put out a limp statement saying it “believed fervently that it acted lawfully in the reporting of the story of Mr. Bezos.”)

The issues Bezos raised in his Medium post have nothing whatsoever to do with Michael Sanchez, any more than revealing the name of a low-level Watergate burglar sheds light on the architects of the Watergate cover-up. Bezos was not expressing concerns about the Enquirer’s original story; he was focused on what he called “extortion and blackmail.”

Next, Bezos directed me to “spend whatever is needed” to learn who may have been complicit in the scheme, and why they did it.

That investigation is now complete. As has been reported elsewhere, my results have been turned over to federal officials. Since it is now out of my hands, I intend today’s writing to be my last public statement on the matter. Further, to respect officials pursuing this case, I won’t disclose details from our investigation. I am, however, comfortable confirming one key fact:

Our investigators and several experts concluded with high confidence that the Saudis had access to Bezos’ phone, and gained private information. As of today, it is unclear to what degree, if any, AMI was aware of the details.

Source: Bezos’ Investigator Gavin de Becker Finds the Saudis Obtained the Amazon Chief’s Private Data

Reuters is a bit shorter on the matter:

WASHINGTON (Reuters) – The security chief for Amazon chief executive Jeff Bezos said on Saturday that the Saudi government had access to Bezos’ phone and gained private information from it.

Gavin De Becker, a longtime security consultant, said he had concluded his investigation into the publication in January of leaked text messages between Bezos and Lauren Sanchez, a former television anchor who the National Enquirer tabloid newspaper said Bezos was dating.

Last month, Bezos accused the newspaper’s owner of trying to blackmail him with the threat of publishing “intimate photos” he allegedly sent to Sanchez unless he said in public that the tabloid’s reporting on him was not politically motivated.

In an article for The Daily Beast website, De Becker said the parent company of the National Enquirer, American Media Inc., had privately demanded that De Becker deny finding any evidence of “electronic eavesdropping or hacking in their newsgathering process.”

“Our investigators and several experts concluded with high confidence that the Saudis had access to Bezos’ phone, and gained private information,” De Becker wrote. “As of today, it is unclear to what degree, if any, AMI was aware of the details.”

https://www.reuters.com/article/us-people-bezos-saudi/saudis-gained-access-to-amazon-ceo-bezos-phone-bezos-security-chief-idUSKCN1RB0RS

A New Age of Warfare: How Internet Mercenaries Do Battle for Authoritarian Governments

NSO and a competitor, the Emirati firm DarkMatter, exemplify the proliferation of privatized spying. A monthslong examination by The New York Times, based on interviews with current and former hackers for governments and private companies and others as well as a review of documents, uncovered secret skirmishes in this burgeoning world of digital combat.

A former top adviser to the Saudi crown prince, Mohammed bin Salman, spoke of using NSO’s products abroad as part of extensive surveillance efforts. (Photo: Giuseppe Cacace/Agence France-Presse via Getty Images)

The firms have enabled governments not only to hack criminal elements like terrorist groups and drug cartels but also in some cases to act on darker impulses, targeting activists and journalists. Hackers trained by United States spy agencies caught American businesspeople and human rights workers in their net. Cybermercenaries working for DarkMatter turned a prosaic household item, a baby monitor, into a spy device.

The F.B.I. is investigating current and former American employees of DarkMatter for possible cybercrimes, according to four people familiar with the investigation. The inquiry intensified after a former N.S.A. hacker working for the company grew concerned about its activities and contacted the F.B.I., Reuters reported.

NSO and DarkMatter also compete fiercely with each other, paying handsomely to lure top hacking talent from Israel, the United States and other countries, and sometimes pilfering recruits from each other, The Times found.

The Middle East is the epicenter of this new era of privatized spying. Besides DarkMatter and NSO, there is Black Cube, a private company run by former Mossad and Israeli military intelligence operatives that gained notoriety after Harvey Weinstein, the disgraced Hollywood mogul, hired it to dig up dirt on his accusers. Psy-Group, an Israeli company specializing in social media manipulation, worked for Russian oligarchs and in 2016 pitched the Trump campaign on a plan to build an online army of bots and avatars to swing Republican delegate votes.

Last year, a wealthy American businessman, Elliott Broidy, sued the government of Qatar and a New York firm run by a former C.I.A. officer, Global Risk Advisors, for what he said was a sophisticated breach of his company that led to thousands of his emails spilling into public view. Mr. Broidy said that the operation was motivated by hard-nosed geopolitics: At the beginning of the Trump administration, he had pushed the White House to adopt anti-Qatar policies at the same time his firm was poised to receive hundreds of millions of dollars in contracts from the United Arab Emirates, the archrival to Qatar.

A judge dismissed Mr. Broidy’s lawsuit, but suspicions have grown that Qatar had a hand in other operations, including the hacking and leaking of the emails of Yousef al-Otaiba, the influential Emirati ambassador in Washington.

The rapid expansion of this global high-tech battleground, where armies of cybermercenaries clash, has prompted warnings of a dangerous and chaotic future.

Source: A New Age of Warfare: How Internet Mercenaries Do Battle for Authoritarian Governments – The New York Times

Paywalls block scientific progress. Research should be open to everyone – how copyright enriches the big boys and kills the little ones all over again

Academic and scientific research needs to be accessible to all. The world’s most pressing problems like clean water or food security deserve to have as many people as possible solving their complexities. Yet our current academic research system has no interest in harnessing our collective intelligence. Scientific progress is currently thwarted by one thing: paywalls.

Paywalls, which restrict access to content without a paid subscription, are a common practice used by academic publishers to block access to scientific research for those who have not paid. This keeps £19.6bn flowing from higher education and science into for-profit publisher bank accounts. My recent documentary, Paywall: The Business of Scholarship, uncovered that the largest academic publisher, Elsevier, regularly posts a profit margin of 35-40%, greater than Google’s. With financial capacity comes power, lobbyists, and the ability to manipulate markets for strategic advantage – things that underfunded universities and libraries in poorer countries do not have.

Furthermore, university librarians are regularly required to sign non-disclosure agreements on their contract-pricing specifics with the largest for-profit publishers. Each contract is tailored specifically to that university based upon a variety of factors: history, endowment, current enrolment. This thwarts any collective discussion around price structures, and gives publishers all the power.

This is why open access to research matters – and there have been several encouraging steps in the right direction. Plan S, which requires that scientific publications funded by public grants must be published in open access journals or platforms by 2020, is gaining momentum among academics across the globe. It’s been recently backed by Italy’s Compagnia di San Paolo, which receives €150m annually to spend on research, as well as the African Academy of Science and the National Science and Technology Council (NSTC) of Zambia. Plan S has also been endorsed by the Chinese government.

Equally, although the US has lagged behind Europe in taking a stand on encouraging open access to research, this is changing. The University of California system has just announced that it will be ending its longstanding subscription to Elsevier. The state of California also recently passed AB 2192, a law that requires anything funded by the state to be made open access within one year of publication. In January, the US President, Donald Trump, signed into law the Open, Public, Electronic and Necessary (OPEN) Government Data Act, which mandates that US federal agencies publish all non-sensitive government data under an open format. This could cause a ripple effect in other countries and organisations.

But there is a role for individual academics to play in promoting open access, too. All academics need to be familiar with their options and stop signing over copyright unnecessarily. Authors should be aware that they can make a copy of their draft manuscript accessible in some form in addition to the finalised manuscript submitted to publishers. There are helpful resources, such as Authors Alliance, which helps researchers manage their rights, and Sherpa/RoMEO, which helps navigate individual publishers’ permissions and author rights. In many cases, researchers can also make their historical catalogue of articles available to the public.

Without an academic collective voice demanding open access to their research, the movement will never completely take off. It’s a case of either giving broad society access to scientific advances or allowing these breakthroughs to stay locked away for financial gain. For the majority of academics, the choice should be easy.

Source: Paywalls block scientific progress. Research should be open to everyone | Jason Schmitt | Education | The Guardian