The Linkielist

Linking ideas with the world

UK Gov launches antitrust study into online platforms and the digital advertising market, eyeing possible monopolies

3 July 2019: The CMA has launched a market study into online platforms and the digital advertising market in the UK. We are assessing three broad potential sources of harm to consumers in connection with the market for digital advertising:

  • to what extent online platforms have market power in user-facing markets, and what impact this has on consumers
  • whether consumers are able and willing to control how data about them is used and collected by online platforms
  • whether competition in the digital advertising market may be distorted by any market power held by platforms

We are inviting comments by 30 July 2019 on the issues raised in the statement of scope, including from interested parties such as online platforms, advertisers, publishers, intermediaries within the ad tech stack, representative professional bodies, government and consumer groups.

Source: Online platforms and digital advertising market study – GOV.UK

Amazon Confirms It Keeps Alexa Transcripts You Can’t Delete

Next time you use Amazon Alexa to message a friend or order a pizza, know that the record could be stored indefinitely, even if you ask to delete it.

In May, Delaware Senator Chris Coons sent Amazon CEO Jeff Bezos a letter asking why Amazon keeps transcripts of voices captured by Echo devices, citing privacy concerns over the practice. He was prompted by reports that Amazon stores the text.

“Unfortunately, recent reporting suggests that Amazon’s customers may not have as much control over their privacy as Amazon had indicated,” Coons wrote in the letter. “While I am encouraged that Amazon allows users to delete audio recordings linked to their accounts, I am very concerned by reports that suggest that text transcriptions of these audio records are preserved indefinitely on Amazon’s servers, and users are not given the option to delete these text transcripts.”

CNET first reported that Amazon’s vice president of public policy, Brian Huseman, responded to the senator on June 28, informing him that Amazon keeps the transcripts until users manually delete the information. The letter states that Amazon works “to ensure those transcripts do not remain in any of Alexa’s other storage systems.”

However, there are some Alexa-captured conversations that Amazon retains, regardless of customers’ requests to delete the recordings and transcripts, according to the letter.

As an example of records that Amazon may choose to keep despite deletion requests, Huseman mentioned instances when customers use Alexa to subscribe to Amazon’s music or delivery service, request a rideshare, order pizza, buy media, set alarms, schedule calendar events, or message friends. Huseman writes that it keeps these recordings because “customers would not want or expect deletion of the voice recording to delete the underlying data or prevent Alexa from performing the requested task.”

The letter says Amazon generally stores recordings and transcripts so users can understand what Alexa “thought it heard” and to train its machine learning systems to better understand the variations of speech “based on region, dialect, context, environment, and the individual speaker, including their age.” Such transcripts are not anonymized, according to the letter, though Huseman told Coons in his letter, “When a customer deletes a voice recording, we delete the transcripts associated with the customer’s account of both of the customer’s request and Alexa’s response.”

Amazon declined to provide a comment to Gizmodo beyond what was included in Huseman’s letter.

In his public response to the letter, Coons expressed concern that it shed light on the ways Amazon is keeping some recordings.

“Amazon’s response leaves open the possibility that transcripts of user voice interactions with Alexa are not deleted from all of Amazon’s servers, even after a user has deleted a recording of his or her voice,” Coons said. “What’s more, the extent to which this data is shared with third parties, and how those third parties use and control that information, is still unclear.”

Source: Amazon Confirms It Keeps Alexa Transcripts You Can’t Delete

Facebook, Instagram, WhatsApp, Oculus, Google Cloud go down and Cloudflare reroutes large portions of the internet to nothing – twice

Facebook resolves day-long outages across Instagram, WhatsApp, and Messenger

Facebook had problems loading images, videos, and other data across its apps today, leaving some people unable to load photos in the Facebook News Feed, view stories on Instagram, or send messages in WhatsApp. Facebook said earlier today it was aware of the issues and was “working to get things back to normal as quickly as possible.” It blamed the outage on an error that was triggered during a “routine maintenance operation.”

As of 7:49PM ET, Facebook posted a message to its official Twitter account saying the “issue has since been resolved and we should be back at 100 percent for everyone. We’re sorry for any inconvenience.” Instagram similarly said its issues were more or less resolved.

Earlier today, some people and businesses experienced trouble uploading or sending images, videos and other files on our apps. The issue has since been resolved and we should be back at 100% for everyone. We’re sorry for any inconvenience. — Facebook Business (@FBBusiness) July 3, 2019

We’re back! The issue has been resolved and we should be back at 100% for everyone. We’re sorry for any inconvenience. — Instagram (@instagram) July 3, 2019

The issues started around 8AM ET and began slowly clearing up after a couple hours, according to DownDetector, which monitors website and app issues. The errors aren’t affecting all images; many pictures on Facebook and Instagram still load, but others are appearing blank. DownDetector has also received reports of people being unable to load messages in Facebook Messenger.

The outage persisted through mid-day, with Facebook releasing a second statement, where it apologized “for any inconvenience.” Facebook’s platform status website still lists a “partial outage,” with a note saying that the company is “working on a fix that will go out shortly.”

Apps and websites are always going to experience occasional disruptions due to the complexity of services they’re offering. But even when they’re brief, they can become a real problem due to the huge number of users many of these services have. A Facebook outage affects a suite of popular apps, and those apps collectively have billions of users who rely on them. That’s a big deal when those services have become critical for business and communications, and every hour they’re offline or acting strange can mean real inconveniences or lost money.

We’re aware that some people are having trouble uploading or sending images, videos and other files on our apps. We’re sorry for the trouble and are working to get things back to normal as quickly as possible. #facebookdown — Facebook (@facebook) July 3, 2019

Source: The Verge – Facebook resolves day-long outages across Instagram, WhatsApp, and Messenger

Facebook and Instagram Can’t Seem to Keep Their Shit Together

Well, folks, Facebook and its “family of apps” have experienced yet another crash. A nice respite moving into the long holiday weekend, if you ask me.

Problems that appear to have started early Wednesday morning were still being reported as of the afternoon, with Instagram, Facebook, WhatsApp, Oculus, and Messenger all experiencing issues. According to DownDetector, issues first started cropping up on Facebook at around 8am ET.

“We’re aware that some people are having trouble uploading or sending images, videos and other files on our apps. We’re sorry for the trouble and are working to get things back to normal as quickly as possible,” Facebook tweeted just after noon on Wednesday. A similar statement was shared from Instagram’s Twitter account.

Oculus, Facebook’s VR property, separately tweeted that it was experiencing “issues around downloading software.”

Facebook’s crash was still well underway as of 1pm ET on Wednesday, primarily affecting images. Where users typically saw uploaded images, such as their profile pictures or in their photo albums, they instead saw a string of terms describing Facebook’s interpretation of the image.

TechCrunch’s Zack Whittaker noted on Twitter that all of those image tags you may have seen were Facebook’s machine learning at work.
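What users were seeing is ordinary HTML fallback behavior: when an image fails to load, the browser displays the `alt` attribute, which Facebook auto-populates with machine-generated labels. A minimal sketch of that mechanism, using hypothetical markup (the URL and label text are illustrative, not Facebook's actual output):

```python
from html.parser import HTMLParser

class AltTextExtractor(HTMLParser):
    """Collect the alt attribute of every <img> tag -- the text a
    browser shows when the image itself cannot be fetched."""
    def __init__(self):
        super().__init__()
        self.alts = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.alts.append(dict(attrs).get("alt", ""))

# Hypothetical markup resembling what users saw during the outage:
html = ('<img src="https://cdn.example.com/photo.jpg" '
        'alt="Image may contain: 2 people, smiling, outdoor">')

parser = AltTextExtractor()
parser.feed(html)
print(parser.alts[0])  # the machine-generated labels stand in for the photo
```

With the image servers down, only the second half of that tag reached users' screens, exposing the classifier's guesses.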

This week’s crash is just the latest in what has become a semi-regular occurrence of outages. The first occurred back in March, in an incident that Facebook later blamed on “a server configuration change.” Facebook and its subsidiaries went down again about a month later, though the March incident was much worse, with millions of reports on DownDetector.

Two weeks ago, Instagram was bricked and experienced ongoing issues with refreshing feeds, loading profiles, and liking images. While the feed refresh issue was quickly patched, it was hours before the company confirmed that Instagram had been fully restored.

We’ve reached out to Facebook for more information about the issues and will update this post if we hear back.

Source: Gizmodo

Code crash? Russian hackers? Nope. Good ol’ broken fiber cables borked Google Cloud’s networking today

Fiber-optic cables linking Google Cloud servers in its us-east1 region physically broke today, slowing down or effectively cutting off connectivity with the outside world.

For at least the past nine hours, and counting, netizens and applications have struggled to connect to systems and services hosted in the region, located on America’s East Coast. Developers and system admins have been forced to migrate workloads to other regions, or redirect traffic, in order to keep apps and websites ticking over amid mitigations deployed by the Silicon Valley giant.

Starting at 0755 PDT (1455 UTC) today, according to Google, the cloud platform was “experiencing external connectivity loss for all us-east1 zones,” with traffic between us-east1 and other regions suffering approximately 10 per cent loss.

By 0900 PDT, Google revealed the extent of the blunder: its cloud platform had “lost multiple independent fiber links within us-east1 zone.” The fiber provider, we’re told, “has been notified and are currently investigating the issue. In order to restore service, we have reduced our network usage and prioritised customer workloads.”

By that, we understand, Google means it redirected traffic destined for its Google.com services hosted in the data center region, to other locations, allowing the remaining connectivity to carry customer packets.

By midday, Pacific Time, Google updated its status pages to note: “Mitigation work is currently underway by our engineering team to address the issue with Google Cloud Networking and Load Balancing in us-east1. The rate of errors is decreasing, however some users may still notice elevated latency.”

However, at the time of writing, the physically damaged cabling is not yet fully repaired, and us-east1 networking is thus still knackered. In fact, repairs could take as much as 24 hours to complete. The latest update, posted 1600 PDT, reads as follows:

The disruptions with Google Cloud Networking and Load Balancing have been root caused to physical damage to multiple concurrent fiber bundles serving network paths in us-east1, and we expect a full resolution within the next 24 hours.

In the meantime, we are electively rerouting traffic to ensure that customers’ services will continue to operate reliably until the affected fiber paths are repaired. Some customers may observe elevated latency during this period.

Customers using Google Cloud’s Load Balancing service will automatically fail over to other regions, if configured, minimizing impact on their workloads, it is claimed. They can also migrate to, say, us-east4, though they may have to rejig their code and scripts to reference the new region.

The Register asked Google for more details about the damaged fiber, such as how it happened. A spokesperson told us exactly what was already on the aforequoted status pages.

Meanwhile, a Google Cloud subscriber wrote a little ditty about the outage to the tune of Pink Floyd’s Another Brick in the Wall. It starts: “We don’t need no cloud computing…” ®

Source: The Register

This major Cloudflare internet routing blunder took A WEEK to fix. Why so long? It was IPv6 – and no one really noticed

Last week, an internet routing screw-up propagated by Verizon for three hours sparked havoc online, leading to significant press attention and industry calls for greater network security.

A few weeks before that, another packet routing blunder, this time pushed by China Telecom, lasted two hours, caused significant disruption in Europe and prompted some to wonder whether Beijing’s spies were abusing the internet’s trust-based structure to carry out surveillance.

In both cases, internet engineers were shocked at how long it took to fix traffic routing errors that normally only last minutes or even seconds. Well, that was nothing compared to what happened this week.

Cloudflare’s director of network engineering Jerome Fleury has revealed that the routing for a big block of IP addresses was wrongly announced for an ENTIRE WEEK and, just as amazingly, the company that caused it didn’t notice until the major blunder was pointed out by another engineer at Cloudflare. (This cock-up is completely separate to today’s Cloudflare outage.)

How is it even possible for network routes to remain completely wrong for several days? Because, folks, it was on IPv6.

“So Airtel AS9498 announced the entire IPv6 block 2400::/12 for a week and no-one notices until Tom Strickx finds out and they confirm it was a typo of /127,” Fleury tweeted over the weekend, complete with graphic showing the massive routing error.

That /12 represents 83 decillion IP addresses, or four quadrillion /64 networks. The /127 would be 2. Just 2 IP addresses. Slight difference. And while this demonstrates the expansiveness of IPv6’s address space, and perhaps even its robustness seeing as nothing seems to have actually broken during the routing screw-up, it also hints at just how sparse IPv6 is right now.
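Those address counts can be checked directly with Python's standard `ipaddress` module:

```python
import ipaddress

announced = ipaddress.ip_network("2400::/12")   # the typo'd announcement
intended = ipaddress.ip_network("2400::/127")   # what Airtel meant

print(announced.num_addresses)  # 2**116, roughly 8.3e34 -- 83 decillion
print(intended.num_addresses)   # exactly 2

# /64 is the conventional size of a single IPv6 subnet; a /12 contains
# one /64 per combination of the 64 - 12 = 52 in-between prefix bits.
print(2 ** (64 - 12))  # 4_503_599_627_370_496, about 4.5 quadrillion
```

A single dropped character in the prefix length, and the announcement balloons by 115 orders of binary magnitude.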

To be fair to Airtel, it often takes someone else to notice a network route error – typically caused by simple typos like failing to add a “7” – because the organization that messes up the tables tends not to see or feel the impact directly.

But if ever there was a symbol of how miserably the transition from IPv4 to IPv6 is going, it’s in the fact that a fat IPv6 routing error went completely unnoticed for a week while an IPv4 error will usually result in phone calls, emails, and outcry on social media within minutes.

And sure, IPv4 space is much, much more dense than IPv6 so obviously people will spot errors much faster. But no one at all noticed the advertisement of a /12 for days? That may not bode well for the future, even though, yes, this particular /127 typo had no direct impact.

Source: The Register

I got 502 problems, and Cloudflare sure is one: Outage interrupts your El Reg-reading pleasure for almost half an hour

Updated Cloudflare, the outfit noted for the slogan “helping build a better internet”, had another wobble today as “network performance issues” rendered websites around the globe inaccessible.

The US tech biz updated its status page at 1352 UTC to indicate that it was aware of issues, but things began tottering quite a bit earlier. Since Cloudflare provides content delivery, DNS and DDoS protection for a good portion of the world’s websites, El Reg included, when it sneezes, a chunk of the internet has to go and have a bit of a lie down. That means netizens were unable to access many top sites globally.

A stumble last week was attributed to the antics of Verizon by CTO John Graham-Cumming. As for today’s shenanigans? We contacted the company, but they’ve yet to give us an explanation.

While Cloudflare implemented a fix by 1415 UTC and declared things resolved by 1457 UTC, a good portion of internet users noticed things had gone very south for many, many sites.

The company’s CEO took to Twitter to proffer an explanation for why things had fallen over, fingering a colossal spike in CPU usage as the cause while gently nudging the more wild conspiracy theories away from the whole DDoS thing.

However, the outage was a salutary reminder of the fragility of the internet as even Firefox fans found their beloved browser unable to resolve URLs.

Ever keen to share in the ups and downs of life, Cloudflare’s own site also reported the dread 502 error.

As with the last incident, users who endured the less-than-an-hour of disconnection would do well to remember that the internet is a brittle thing. And Cloudflare would do well to remember that its customers will be pondering if maybe they depend on its services just a little too much.

Updated to add at 1702 BST

Following publication of this article, Cloudflare released a blog post stating the “CPU spike was caused by a bad software deploy that was rolled back. Once rolled back the service returned to normal operation and all domains using Cloudflare returned to normal traffic levels.”

Naturally, it then added:

“We are incredibly sorry that this incident occurred. Internal teams are meeting as I write performing a full post-mortem to understand how this occurred and how we prevent this from ever occurring again.” ®

Source: The Register

Cloudflare gave everyone a 30-minute break from a chunk of the internet yesterday: Here’s how they did it

Internet services outfit Cloudflare took careful aim and unloaded both barrels at its feet yesterday, taking out a large chunk of the internet as it did so.

In an impressive act of openness, the company posted a distressingly detailed post-mortem on the cockwomblery that led to the outage. The Register also spoke to a weary John Graham-Cumming, CTO of the embattled company, to understand how it all went down.

This time it wasn’t Verizon wot dunnit; Cloudflare engineered this outage all by itself.

In a nutshell, what happened was that Cloudflare deployed some rules to its Web Application Firewall (WAF). The gang deploys these rules to servers in a test mode – the rule gets fired but doesn’t take any action – in order to measure what happens when real customer traffic runs through it.

We’d contend that an isolated test environment into which one could direct traffic would make sense, but Graham-Cumming told us: “We do this stuff all the time. We have a sequence of ways in which we deploy stuff. In this case, it didn’t happen.”

It all sounds a bit like the start of a Who, Me?

In a frank admission that should send all DevOps enthusiasts scurrying to look at their pipelines, Graham-Cumming told us: “We’re really working on understanding how the automated test suite which runs internally didn’t pick up the fact that this was going to blow up our service.”

The CTO elaborated: “We push something out, it gets approved by a human, and then it goes through a testing procedure, and then it gets pushed out to the world. And somehow in that testing procedure, we didn’t spot that this was going to blow things up.”

He went on to explain how things should happen. After some internal dog-fooding, the updates are pushed out to a small group of customers “who tend to be a little bit cheeky with us” and “do naughty things” before it is progressively rolled out to the wider world.

“And that didn’t happen in this instance. This should have been caught easily.”

Alas, two things went wrong. Firstly, one of the rules (designed to block nefarious inline JavaScript) contained a regular expression that would send CPU usage sky high. Secondly, the new rules were accidentally deployed globally in one go.

The result? “One of these rules caused the CPU spike to 100 per cent, on all of our machines.” And because Cloudflare’s products are distributed over all its servers, every service was starved of CPU while the offending regular expression did its thing.
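The failure mode Cloudflare describes is catastrophic backtracking. The textbook illustration below is not Cloudflare's actual WAF rule, just the classic shape of the problem: nested quantifiers plus a forced failure make the engine's work grow exponentially with input length.

```python
import re
import time

# Nested quantifiers like (a+)+ let the engine split a run of "a"s into
# groups in exponentially many ways; the trailing "$" forces it to try
# every split before admitting failure.
EVIL = re.compile(r"(a+)+$")

def time_failed_match(n):
    s = "a" * n + "b"  # the final "b" guarantees the match must fail
    start = time.perf_counter()
    assert EVIL.match(s) is None
    return time.perf_counter() - start

# Each extra "a" roughly doubles the work; by n around 30 this would pin
# a CPU core for minutes -- on every machine running the rule at once.
for n in (8, 14, 20):
    print(n, f"{time_failed_match(n):.4f}s")
```

Engines that guarantee linear-time matching (RE2-style automata) avoid this class of bug by construction, at the cost of dropping backreferences; a backtracking engine fed untrusted traffic needs either such limits or very careful rule review.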

Source: The Register

The Secret To The World’s Lightest Gaming Mouse Model O Is Lots Of Holes

In order to create what it calls “the world’s lightest gaming mouse,” the engineers at peripheral maker Glorious PC Gaming Race took a mouse and put holes all in it. The result is the Model O, a very good gaming mouse that weighs only 67 grams and may trigger trypophobia.

“You’ll barely feel the holes,” reads the copy on the Model O’s product page, answering the question I imagine most people have when looking at the honeycombed plastic shell. I’ve used the ultra-light accessory for a couple weeks now, and the product page is correct. It feels slightly bumpy under the palm.

Only when I look directly at the Model O do I feel mildly disturbed by the pattern of holes covering the top and its underside. The effect is less jarring when the RGB lighting is cycling. While I’m actively using the mouse, my giant hands cover it completely. Glorious PC Gaming Race says the holes allow for better airflow, keeping hands cool, but my massive paws negate that benefit. I worry about dirt getting in the holes, but that’s nothing I can’t avoid by not being a total slob. Perhaps it’s time.

The Model O slides over my mouse pad effortlessly thanks to its ridiculously low weight and the rounded plastic feet, which Glorious PC Gaming Race calls “G-Skates.” I particularly enjoy the mouse’s cable, a proprietary braided affair that feels like a normal thin wire wrapped in a shoelace. It doesn’t tangle, which is an issue with many mice and one of the main reasons I prefer a stationary trackball.

Beneath the unique design and proprietary bits, the Model O is a very nice six-button gaming mouse. It’s got a Pixart sensor that can be adjusted up to 12,000 DPI (dots per inch), with more sensible presets of 400, 800, 1,600, and 3,200 cyclable via a button on the bottom of the unit (software is required to go higher). It’s fast and responsive.
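DPI here really means counts per inch: how many movement "counts" the sensor reports per inch of physical travel. As a rough rule of thumb (assuming default OS settings where one count moves the pointer about one pixel), the desk distance needed to cross a screen shrinks as DPI rises:

```python
# Back-of-the-envelope: desk travel needed to sweep the pointer across a
# screen, assuming roughly one pixel of pointer movement per count.
def desk_inches_to_cross(screen_px, dpi):
    return screen_px / dpi

for dpi in (400, 800, 1600, 3200):
    print(dpi, desk_inches_to_cross(1920, dpi), "inches for a 1080p screen")
```

So at the 400 preset you drag about 4.8 inches to cross a 1080p screen, while at 3,200 a mere 0.6 inches does it, which is why low-sensitivity players care so much about a light, low-friction mouse.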

Glorious PC Gaming Race Model O Specs

  • Sensor: Pixart PMW-3360
  • Switch Type (Main): Omron mechanical, rated for 20 million clicks
  • Number of Buttons: 6
  • Max Tracking Speed: 250+ IPS
  • Weight: 67 grams (matte), 68 grams (glossy)
  • Acceleration: 50G
  • Max DPI: 12,000
  • Polling Rate: 1,000 Hz (1ms)
  • Lift-off Distance: ~0.7mm
  • Price: $50 matte, $60 glossy

Note that the Model O comes in four styles: black or white matte finish and black or white glossy. The glossy versions cost $10 more than the $50 matte versions and weigh 68 grams instead of 67. In other words, the glossy versions are not the “world’s lightest gaming mouse” and should be exiled.

The Glorious PC Gaming Race Model O is the lightest gaming mouse I’ve used. I’m not sure I’m the type of hardcore mouse user that would benefit from the reduced weight. In fact, many of the gaming mice I’ve evaluated over the past several years have come packaged with weights to make them heavier. If you prefer a more lightweight pointing device and don’t mind all the holes, the Model O could be for you. And if not, you can probably fill it with clay or something to weigh it down.

Source: The Secret To The World’s Lightest Gaming Mouse Is Lots Of Holes

YouTube mystery ban on hacking videos has content creators puzzled, looks like they want you to not learn about cybersecurity

YouTube, under fire since inception for building a business on other people’s copyrights and in recent years for its vacillating policies on irredeemable content, recently decided it no longer wants to host instructional hacking videos.

The written policy first appears in the Internet Wayback Machine’s archive of web history in an April 5, 2019 snapshot. It forbids: “Instructional hacking and phishing: Showing users how to bypass secure computer systems or steal user credentials and personal data.”

Lack of clarity about the permissibility of cybersecurity-related content has been an issue for years. In years past, hacking videos could be removed if enough viewers submitted reports objecting to them, or if moderators found the videos violated other articulated policies.

Now that there’s a written rule, there’s renewed concern about how the policy is being applied.

Kody Kinzie, a security researcher and educator who posts hacking videos to YouTube’s Null Byte channel, on Tuesday said a video created for the US July 4th holiday to demonstrate launching fireworks over Wi-Fi couldn’t be uploaded because of the rule.

“I’m worried for everyone that teaches about infosec and tries to fill in the gaps for people who are learning,” he said via Twitter. “It is hard, often boring, and expensive to learn cybersecurity.”

In an email to The Register, Kinzie clarified that YouTube had problems with three previous videos, which got flagged and are either in the process of review or have already been appealed and restored. They involved Wi-Fi hacking. One of the Wi-Fi hacking videos got a strike on Tuesday and that disabled uploading for the account, preventing the fireworks video from going up.

The Register asked Google’s YouTube for comment but we’ve not heard back.

Security professionals find the policy questionable. “Very simply, hacking is not a derogatory term and shouldn’t be used in a policy about what content is acceptable,” said Tim Erlin, VP of product management and strategy at cybersecurity biz Tripwire, in an email to The Register.

“Google’s intention here might be laudable, but the result is likely to stifle valuable information sharing in the information security community.”

Source: YouTube mystery ban on hacking videos has content creators puzzled • The Register

Spotify shuts down direct music uploading for independent artists, forces them to use 3rd parties – and also lets those 3rd parties into your personal account

Spotify has changed the way artists can upload music, now prohibiting individual musicians from putting their songs on the streaming service directly.

The new move requires a third party to be involved in the business of uploads.

The company announced the change on Monday, saying it will close the beta program and stop accepting direct uploads by the end of July.

“The most impactful way we can improve the experience of delivering music to Spotify for as many artists and labels as possible is to lean into the great work our distribution partners are already doing to serve the artist community,” Spotify said in a statement on its blog. “Over the past year, we’ve vastly improved our work with distribution partners to ensure metadata quality, protect artists from infringement, provide their users with instant access to Spotify for Artists, and more.”

“The best way for us to serve artists and labels is to focus our resources on developing tools in areas where Spotify can uniquely benefit them — like Spotify for Artists (which more than 300,000 creators use to gain new insight into their audience) and our playlist submission tool (which more than 36,000 artists have used to get playlisted for the very first time since it launched a year ago). We have a lot more planned here in the coming months,” the post continued.

The direct upload function began last September, allowing independent artists to put their music on the streaming site without going through a distributor.

Smaller artists will now need to return to sites like Bandcamp, SoundCloud and others to upload their material.

Many people, especially artists, were upset about the decision, and said so on Twitter.

More Spotify news

Pre-saving an upcoming release from your favorite artists on Spotify could be causing you to share more personal data than you realize.

In a recent report from Billboard, it was revealed that Spotify users were giving a band’s label data use permissions that were much broader than typical permissions.

When a user pre-saves a track, it is added to their library the moment it comes out. To enable this, Spotify users have to click through and approve certain permissions.

These permissions give the label more access to a user’s account than Spotify normally grants: they allow it to track the user’s listening habits, change which artists they follow, and potentially control their streaming remotely.
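Under the hood a pre-save is an ordinary OAuth flow: the label's app sends the user to Spotify's consent screen with a list of scopes. The scope names below are real Spotify Web API scopes, but whether any given pre-save campaign requests exactly this set, and the client ID and redirect URI, are assumptions for illustration:

```python
from urllib.parse import urlencode

# Scopes a pre-save campaign might request (assumed set for illustration):
SCOPES = [
    "user-library-modify",         # add the release on drop day
    "user-read-recently-played",   # track listening habits
    "user-follow-modify",          # change which artists you follow
    "user-modify-playback-state",  # control streaming remotely
]

def authorize_url(client_id, redirect_uri, scopes):
    """Standard OAuth authorization-code request against Spotify's
    accounts service; the scope list is what the consent screen shows."""
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
    }
    return "https://accounts.spotify.com/authorize?" + urlencode(params)

url = authorize_url("hypothetical-label-app", "https://label.example/cb", SCOPES)
print(url)
```

The catch is that the consent screen lists those scopes in fine print, and users eager to pre-save a track click straight through, granting far more than "save this song for me."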

Source: Spotify shuts down direct music uploading for independent artists

What. The. Fuck.

Dutch ING Bank wants to use customer payment data for direct marketing, privacy watchdog says NO! whilst Dutch Gov wants more banking data sharing with everyone!

The Dutch Data Protection Authority has reprimanded ING Bank over plans to use payment data for advertising, and has told other banks to examine their policies for direct marketing. ING recently changed its privacy statement to say that the bank will use payment data for direct marketing offers; as an example, it mentioned being able to make specific product offers after child support payments had come in. Many ING customers caught this and angrily emailed and called the authority about it.

This is the second time ING has tried this: in 2014 it attempted the same thing, but then also planned to share the payment data with third parties.

Source: AP: Banken mogen betaalgegevens niet zomaar gebruiken voor reclame – Emerce

In the meantime, the Dutch government is trying to find a way to prohibit cash payments of over EUR 3,000 – and, insidiously, the same law would allow banks and government to share client banking data more easily.

Source: Kabinet gaat contante betaling boven de 3000 euro verbieden

Zipato Zipamicro smart home hub totally pwned

In new research published Tuesday and shared with TechCrunch, researchers Dardaman and Wheeler found three security flaws which, when chained together, could be abused to open a front door fitted with a smart lock.

Smart home technology has come under increasing scrutiny in the past year. Although convenient to some, security experts have long warned that adding an internet connection to a device increases the attack surface, making the devices less secure than their traditional counterparts. The smart home hubs that control a home’s smart devices, like water meters and even the front door lock, can be abused to allow landlords entry to a tenant’s home whenever they like.

[…]

The researchers found they could extract the hub’s private SSH key for “root” — the user account with the highest level of access — from the memory card on the device. Anyone with the private key could access a device without needing a password, said Wheeler.

They later discovered that the private SSH key was hardcoded in every hub sold to customers — putting at risk every home with the same hub installed.

Using that private key, the researchers downloaded a file from the device containing scrambled passwords used to access the hub. They found that the smart hub uses a “pass-the-hash” authentication system, which doesn’t require knowing the user’s plaintext password, only the scrambled version. By taking the scrambled password and passing it to the smart hub, the researchers could trick the device into thinking they were the homeowner.
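The weakness of pass-the-hash authentication is that the stored hash becomes the credential itself: a server that compares a client-supplied hash directly against its database lets anyone who has read the password file straight in. A toy sketch of the flaw (not Zipato's actual code; usernames and passwords are invented):

```python
import hashlib

# Toy server-side store: it keeps hashes, not plaintext passwords.
stored = {"homeowner": hashlib.sha256(b"correct horse").hexdigest()}

def login_pass_the_hash(user, client_supplied_hash):
    """Broken scheme: the client sends a hash and the server compares it
    directly, so the stored hash itself unlocks the account."""
    return stored.get(user) == client_supplied_hash

# An attacker who exfiltrated the hash file (here, via the hardcoded SSH
# key) never needs to crack or even know the plaintext password:
leaked_hash = stored["homeowner"]
print(login_pass_the_hash("homeowner", leaked_hash))   # lets the attacker in
print(login_pass_the_hash("homeowner", "wrong-hash"))  # only wrong hashes fail
```

A sound design would have the server verify a salted password, or hash a fresh per-session challenge together with the secret, so that a leaked database entry alone cannot be replayed as a login.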

Source: Security flaws in a popular smart home hub let hackers unlock front doors | TechCrunch

Silicon Valley’s Hottest Email App Superhuman sends emails that track you and your location without your knowledge

Superhuman is one of the most talked about new apps in Silicon Valley. Why? The product — a $30 per month email app for power users hoping for greater productivity — is a good alternative to many popular but stale email apps; nearly everyone who has used it says so. Even better is the company’s publicity strategy: the service is invite only, and posting on social media is the quickest way to get in the door. So it gets some local buzz, a $33 million investment, bigger blog write-ups and then a New York Times article to top it all off last month.

After a peak, a roller coaster hits a downward slope.

Superhuman was criticized sharply on Tuesday when a blog post by Mike Davidson, previously the VP of design at Twitter, spread widely across social media. The post goes into detail about how one of Superhuman’s most powerful features is actually just a run-of-the-mill privacy-violating tracking pixel, with no option for recipients to turn it off and no notification that it is happening. If you use Superhuman, you’ll be able to see when someone opened your email, how many times they did it, what device they were using and what location they’re in.

Here’s Davidson:

It is disappointing then that one of the most hyped new email clients, Superhuman, has decided to embed hidden tracking pixels inside of the emails its customers send out. Superhuman calls this feature “Read Receipts” and turns it on by default for its customers, without the consent of its recipients.

Tracking pixels are not new. If you get an email newsletter, for instance, it’s probably got a tracking pixel feeding this kind of data back to advertisers, senders, and a whole host of other trackers interested in collecting everything they can about you.

Let me put it this way: I send an email to your mother. She opens it. Now I know a ton of information about her, including her whereabouts, without her ever being informed of or consenting to this tracking. What does this kind of behavior mean for nosy advertisers? What about abusive spouses? A stalker? Pushy salespeople? Intrusive co-workers and bosses?

Davidson sums it up in his blog:

They’ve identified a feature that provides value to some of their customers (i.e. seeing if someone has opened your email yet) and they’ve trampled the privacy of every single person they send email to in order to achieve that. Superhuman never asks the person on the other end if they are OK with sending a read receipt (complete with timestamp and geolocation). Superhuman never offers a way to opt out. Just as troublingly, Superhuman teaches its user to surveil by default. I imagine many users sign up for this, see the feature, and say to themselves “Cool! Read receipts! I guess that’s one of the things my $30 a month buys me.”

Tracking emails is a tried-and-true tactic used by a ton of companies. That doesn’t make it ethical or irreversible. There has been plenty of criticism of the strategy — and there is a technical workaround that we’ll talk about momentarily — but since the tech has been, until now, mainly visible to businesses, the conversation has paled in comparison to some of the other big privacy issues arising in recent years.

Superhuman is a consumer app. It’s targeted at power users, yes, but the potential audience is big and the buzz is real. Combined with increasing public distaste for privacy violations in the name of building a more powerful app, Twitter has been awash this week, especially on Tuesday, with criticism of Superhuman: why does it need to take so much information without an option or notification?

We emailed Superhuman but did not get a response.

A tracking pixel works by embedding a small and hidden image in an email. The image is able to report back information including when the email is opened and where the reader is located. It’s hidden for a reason: The spy is not trying to ask permission.
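The mechanics are trivially easy to implement. A hedged sketch (hostnames and parameter names are illustrative, not Superhuman’s actual code): the sender embeds a 1×1 invisible image whose URL encodes who is reading what, and the fetch itself leaks the open time, IP address (hence rough location) and User-Agent (hence device) to the sender’s server.

```python
# Hypothetical sketch of how a tracking pixel is embedded in an HTML email.
# "track.example.com" and the query parameters are made up for illustration.
from urllib.parse import urlencode

def tracking_pixel(recipient_id: str, message_id: str,
                   host: str = "track.example.com") -> str:
    # The URL uniquely identifies recipient and message; requesting the
    # image reveals when, where, and on what device the email was opened.
    query = urlencode({"r": recipient_id, "m": message_id})
    return (f'<img src="https://{host}/open.gif?{query}" '
            'width="1" height="1" style="display:none" alt="">')

html_email = "<p>Hi!</p>" + tracking_pixel("alice", "msg-42")
print(html_email)
```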

If you’re willing to put in a little work, you can spot who among your contacts is using Superhuman by following these instructions.

The workaround is to disable images by default in email. The method varies in different email apps but will typically be located somewhere in the settings.

Apps like Gmail have tried for years to scrub tracking pixels. Marketers and other users sending these tracking tools out have been battling, sometimes successfully, to continue to track Gmail’s billion users without their permission.

In that case, disabling images by default is the only sure-fire way to go. When you do allow images in an email, know that you may be instantly giving up a small fortune of information to the sender — and whoever they’re working with — without even realizing it.

Source: Silicon Valley’s Hottest Email App Raises Ethical Questions About the Future of Email

Turns out Apple’s Memoji is another product copy, this time from Xiaomi and Samsung. If you can’t create, duplicate.

Image Credit: Gizmochina

Apple’s Memoji may have become the more popular 3D avatar feature for smartphones, but Xiaomi wants people to know that its similarly named version — Mimoji — came first, despite increasingly confusing overlap between the apps’ names and features. Moreover, it’s apparently threatening legal action against writers who call it a copycat without providing proof.

In September 2017, Apple introduced Animoji as an iPhone X-exclusive component of Messages, enabling the high-end smartphone’s users to see their facial expressions rendered in augmented reality as one of 12 animated emoji glyphs, including pig, fox, rabbit, panda, and poop icons. On June 4, 2018, it added user-customizable Memoji faces to Animoji — notably without changing the Messages component’s name — which hit all iPhone X, XR, and XS models with a final public release in September 2018.

By contrast, Xiaomi notes that its own feature was originally called “Mi Meng” when it hit China in late May 2018, but had the English name Mimoji, as evidenced by the package name of its Android application. While the company’s Mimoji generally looked like second-rate Animoji — including a pig, fox, panda, and rabbit-ish mascot — there weren’t any human figures. Until now.

Above: Xiaomi’s initial Mimoji.

The new version of Mimoji is arriving with Xiaomi’s CC9 phones, adding user-customizable human faces complete with the same basic facial, hair, and clothing elements, albeit rendered with various small changes. Writers in China found the features similar enough to call Xiaomi’s version a clone, but after a day of “internal self-examination,” the company challenged that on the Weibo social network. As Gizmochina notes, PR head Xu Jieyun posted the app’s naming timeline, and said that the “functional logic difference between the two products is huge.” It also promised “the next phase of action” against people who said it was copying Apple’s Memoji without proof.

Neither Apple nor Xiaomi can reasonably claim to be first with either the 3D animal or 3D human avatar concept; the ideas have been found in third-party apps for years, and Samsung’s AR Emoji beat both companies to market with OS-integrated human avatars in February 2018. Even the Memoji name dates back to at least early 2017, and not from Apple.

But there’s no question that Apple’s specific implementation of Memoji, complete with TrueDepth face tracking, was something special, and now Mimoji offers something similar. Apple has already announced a host of new customizations for Memoji in iOS 13, and each company will likely iterate on its system — under whatever name — for years to come.

Source: Xiaomi threatens writers over Mimoji app’s overlap with Apple’s Memoji

We are shocked to learn that China, an oppressive surveillance state, injects spyware into visitors’ phones

The New York Times reported today that guards working the border with Kyrgyzstan in the Xinjiang region have insisted on putting an app called Fengcai on the Android devices of visitors – including tourists, journalists, and other foreigners.

The Android app is said to harvest details from the handset ranging from text messages and call records to contacts and calendar entries. It also apparently checks whether the device contains any of 73,000 proscribed documents, including missives from terrorist groups such as ISIS recruitment fliers and bomb-making instructions. China being China, it also looks for information on the Dalai Lama and – bizarrely – mentions of a Japanese grindcore band.

Visitors using iPhones had their mobes connected to a different, hardware-based device that is believed to install similar spyware.

This is not the first report of Chinese authorities using spyware to keep tabs on people in the Xinjiang region, though it is the first time tourists are believed to have been the primary target. The app doesn’t appear to be used at any other border crossings into the Middle Kingdom.

In May, researchers with German security company Cure53 described how a similar app, known as BXAG, was not only collecting data from Android phones but also sending that harvested information over an insecure HTTP connection, putting visitors in even more danger from third parties who might be eavesdropping.

The remote region in northwest China has for decades seen conflict between the government and local Muslim and ethnic Uighur communities, with reports of massive reeducation camps being set up in the area. Beijing has also become increasingly reliant on digital surveillance tools to maintain control over its population, and use of intrusive software in Xinjiang to monitor the locals has become more common.

Human Rights Watch also reported that those living in the region sometimes had their phones spied on by a police-installed app called IJOP, while in 2018 word emerged that a mandatory spyware tool called Jing Wang was being pushed to citizens in the region.

Source: We are shocked to learn that China, an oppressive surveillance state, injects spyware into visitors’ phones • The Register

The Americans just force you to unlock the phone for them…

Cop a load of this: 1TB of police body camera videos found lounging around public databases

In yet another example of absent security controls, troves of police body camera footage were left open to the world for anyone to siphon off, according to an infosec biz.

Jasun Tate, CEO of Black Alchemy Solutions Group, told The Register on Monday he and his team had identified about a terabyte of officer body cam videos, stored in unprotected internet-facing databases, belonging to the Miami Police Department and cops in other US cities as well as places abroad. The operators of these databases – Tate suggests there are five service providers involved – work with various police departments. The footage apparently dates from 2018 to present.

“Vendors that provide services to police departments are insecure,” said Tate, adding that he could not at present identify the specific vendors responsible for leaving the archive freely accessible to the public. Below is an example body-cam video from the internet-facing data silo Tate shared on Twitter.

Tate said he came across the files while doing online intelligence work for a client. While searching the internet, he said, his firm came across a dark-web hacker forum thread that pointed out the body cam material sitting exposed on the internet. Following the forum’s links led Tate to police video clips that had been stored insecurely in what he described as a few open MongoDB and MySQL databases.

For at least the past few days, the footage was publicly accessible, we’re told. Tate reckons the videos will have been copied from the databases by the hacker forum’s denizens, and potentially sold on by now.

According to Tate, the Miami Police Department was notified of the findings. A spokesperson for Miami PD said the department is still looking into these claims, and won’t comment until the review is completed.

Tate posted about his findings on Saturday via Twitter. The links to databases he provided to The Register as evidence of his findings now return errors, indicating the systems’ administrators have taken steps to remove the files from public view.

The incident echoes the hacking of video surveillance biz Perceptics in terms of the sensitivity of the exposed data. The Perceptics hack appears to be more severe because so much of its internal data was stolen and posted online. But that could change if it turns out that much of the once accessible Miami body cam footage was copied and posted on other servers.

Source: Cop a load of this: 1TB of police body camera videos found lounging around public databases • The Register

Sting Catches Another Ransomware Firm Negotiating With “Hackers” when claiming to decrypt

ProPublica recently reported that two U.S. firms, which professed to use their own data recovery methods to help ransomware victims regain access to infected files, instead paid the hackers.

Now there’s new evidence that a U.K. firm takes a similar approach. Fabian Wosar, a cyber security researcher, told ProPublica this month that, in a sting operation he conducted in April, Scotland-based Red Mosquito Data Recovery said it was “running tests” to unlock files while actually negotiating a ransom payment. Wosar, the head of research at anti-virus provider Emsisoft, said he posed as both hacker and victim so he could review the company’s communications to both sides.

Red Mosquito Data Recovery “made no effort to not pay the ransom” and instead went “straight to the ransomware author literally within minutes,” Wosar said.

[…]

On its website, Red Mosquito Data Recovery calls itself a “one-stop data recovery and consultancy service” and says it has dealt with hundreds of ransomware cases worldwide in the past year. It advertised last week that its “international service” offers “experts who can offer honest, free advice.” It said it offers a “professional alternative” to paying a ransom, but cautioned that “paying the ransom may be the only viable option for getting your files decrypted.”

It does “not recommend negotiating directly with criminals since this can further compromise security,” it added.

Red Mosquito Data Recovery did not respond to emailed questions, and hung up when we called the number listed on its website. After being contacted by ProPublica, the company removed the statement from its website that it provides an alternative to paying hackers. It also changed “honest, free advice” to “simple free advice,” and the “hundreds” of ransomware cases it has handled to “many.”

[…]

Documents show that Lairg wrote to Wosar’s victim email address, saying he was “pleased to confirm that we can recover your encrypted files” for $3,950 — four times as much as the agreed-upon ransom.

Source: Sting Catches Another Ransomware Firm — Red Mosquito — Negotiating With “Hackers” — ProPublica

ISS is home to super-tough molds that laugh in the face of deadly radiation

Mold spores commonly found aboard the International Space Station (ISS) turn out to be radiation resistant enough to survive 200 times the X-ray dose needed to kill a human being. Based on experiments by a team of researchers led by Marta Cortesão, a microbiologist at the German Aerospace Center (DLR) in Cologne, the new study indicates that sterilizing interplanetary spacecraft may be much more difficult than previously thought.

[…]

The ISS is a collection of sealed cans inhabited by people who spend every minute of the day sweating, touching things, and exhaling moist air. Even with regular cleaning and a life support system designed to keep things under control, the result is a constant battle against mold and bacteria.

[…]

The researchers exposed samples of Aspergillus and Penicillium spores to X-rays, heavy ions, and high-frequency ultraviolet light of the kinds and intensities found in space. Such radiation damages DNA and breaks down cell structures, but the spores survived X-rays up to 1,000 gray, heavy ions at 500 gray, and UV rays up to 3,000 joules per meter squared.

Gray is a measurement of radiation exposure based on the absorption of one joule of radiation energy per kilogram of matter. To put the results in perspective, five gray will kill a person, and 0.7 gray is how much radiation the crew of a Mars mission would receive over a 180-day voyage.
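The figures above combine into the headline “200 times” claim with simple arithmetic:

```python
# Dose figures from the article, in gray (J of absorbed radiation per kg).
spore_xray_limit_gy = 1000   # X-ray dose the spores survived
lethal_human_dose_gy = 5     # roughly fatal to a person
mars_mission_dose_gy = 0.7   # ~180-day Mars transit, per the article

print(spore_xray_limit_gy / lethal_human_dose_gy)   # 200.0 -- the "200 times" figure
print(spore_xray_limit_gy / mars_mission_dose_gy)   # over a thousand Mars-transit doses
```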

Since mold spores can already survive heat, cold, chemicals, and drying out, their ability to withstand radiation as well poses new challenges. It means not only that manned missions will have to put a lot of effort into keeping the ship clean and healthy, but also that unmanned planetary missions, which must be free of terrestrial organisms to prevent contaminating other worlds, will be harder to sterilize.

But according to Cortesão there is a positive side to this resiliency. Since fungal spores are hard to kill, they’d be easier to carry along and grow under controlled conditions in space, so they can be used as raw materials or act as biological factories.

“Mold can be used to produce important things, compounds like antibiotics and vitamins,” says Cortesão. “It’s not only bad, a human pathogen and a food spoiler, it also can be used to produce antibiotics or other things needed on long missions.”

Since the present study only looked at radiation, orbital experiments are scheduled for later this year to test the spores’ ability to withstand the combination of radiation, vacuum, cold, and low gravity found in space.

The results of the team’s study were presented at the 2019 Astrobiology Science Conference.

Source: ISS is home to super-tough molds that laugh in the face of deadly radiation

And of course, it would be nice if we could figure out how this works and genetically enhance people to be so resilient as well…

Boeing falsified records for 787 jet sold to Air Canada. It developed a fuel leak

Boeing staff falsified records for a 787 jet built for Air Canada which developed a fuel leak ten months into service in 2015.

In a statement to CBC News, Boeing said it self-disclosed the problem to the U.S. Federal Aviation Administration after Air Canada notified them of the fuel leak.

The records stated that manufacturing work had been completed when it had not.

Boeing said an audit concluded it was an isolated event and “immediate corrective action was initiated for both the Boeing mechanic and the Boeing inspector involved.”

Boeing is under increasing scrutiny in the U.S. and abroad following two deadly crashes that claimed 346 lives and the global grounding of its 737 Max jets.

On the latest revelations related to falsifying records for the Air Canada jet, Mike Doiron of Moncton-based Doiron Aviation Consulting said: “Any falsification of those documents which could basically cover up a safety issue is a major problem.”

In the aviation industry, these sorts of documents are crucial for ensuring the safety of aircraft and the passengers onboard, he said.

Source: Boeing falsified records for 787 jet sold to Air Canada. It developed a fuel leak | CBC News

Does this mean we need to avoid 787s too?

Germany and the Netherlands to build the first ever joint military internet, some contractor wins huge and achieves massive vendor lock in

Government officials from Germany and the Netherlands have signed an agreement this week to build the first-ever joint military internet.

The accord was signed on Wednesday in Brussels, Belgium, where NATO defense ministers met this week.

The name of this new Dutch-German military internet is the Tactical Edge Networking, or TEN, for short.

This is the first time two nations have merged parts of their military networks, and the project is viewed as a test for unifying other NATO members’ military networks in the future.

The grand master plan is to have NATO members share military networks, so new and improved joint standards can be developed and deployed across all NATO states.

TEN will be headquartered in Koblenz, Germany, and there will also be a design and prototype center at the Bernard Barracks in Amersfoort, the Netherlands.

For starters, TEN will merge communications between the German army’s (Bundeswehr) land-based operations (D-LBO) and the Dutch Ministry of Defence’s ‘FOXTROT’ tactical communications program, used by the Dutch military.

Troops operating on the TEN network will use identical computers, radios, tablets, and telephones, regardless of their country of origin.

TEN’s deployment is expected to cost the two countries millions of euros to re-equip tens of thousands of soldiers and vehicles with new, compatible equipment.

Source: Germany and the Netherlands to build the first ever joint military internet | ZDNet

Wow, I thought we didn’t do that kind of thing any more!

This weekend all Microsoft e-books will stop working. A gentle reminder that through DRM you don’t own what you think you own.

If you bought an ebook through Microsoft’s online store, now’s the time to give it a read, or reread, because it will stop working in early July.

That’s right, the books you paid for will be literally removed from your electronic bookshelf because, um, Microsoft decided in April it no longer wanted to sell books. It will turn off the servers that check whether your copy was bought legitimately – using the usual anti-piracy digital-rights-management (DRM) tech – and that means your book can’t be verified as being in the hands of its purchaser, and so won’t be displayed.
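The failure mode is worth spelling out. A toy sketch (illustrative names, not Microsoft’s actual scheme) of why DRM’d books die with their license server: the reader app only displays a book after the server vouches for the purchase, so once the server is gone, even a legitimately bought copy refuses to open.

```python
# Toy sketch of server-dependent DRM. Function names are made up.
def license_server_alive() -> bool:
    # Microsoft turned its verification servers off; nothing answers.
    return False

def can_open_book(purchased: bool) -> bool:
    # Even a legitimately purchased book fails once verification is gone.
    return purchased and license_server_alive()

print(can_open_book(purchased=True))   # False -- your library is gone
```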

Even the free-to-download ebooks will fail. According to Redmond, “You can continue to read free books you’ve downloaded until July 2019 when they will no longer be accessible.” And the paid-for ones? “You can continue to read books you’ve purchased until July 2019 when they will no longer be available, and you will receive a full refund of the original purchase price.”

Why has Microsoft done this? We don’t know. All the Windows giant said was that it was “streamlining the strategic focus” of its store. But how much can a DRM server possibly cost? And why is that cost too high for an American corporation with $110bn in annual revenue that makes $16.5bn in profit?

Source: This weekend you better read those ebooks you bought from Microsoft – because they’ll be dead come next week • The Register

New property of light discovered, plus recently discovered properties you probably didn’t know about

Scientists have long known about such properties of light as wavelength. More recently, researchers have found that light can also be twisted, a property called angular momentum. Beams with highly structured angular momentum are said to have orbital angular momentum (OAM), and are called vortex beams. They appear as a helix surrounding a common center, and when they strike a flat surface, they appear doughnut-shaped. In this new effort, the researchers were working with OAM beams when they found the light behaving in a way that had never been seen before.

The experiments involved firing two lasers at a cloud of argon gas—doing so forced the beams to overlap, and they joined and were emitted as a single beam from the other side of the argon cloud. The result was a type of vortex beam. The researchers then wondered what would happen if the lasers had different orbital angular momentum and were slightly out of sync. This resulted in a beam that looked like a corkscrew with a gradually changing twist. And when the beam struck a flat surface, it looked like a crescent moon. The researchers noted that, looked at another way, a photon at the front of the beam was orbiting around its center more slowly than a photon at the back of the beam. The researchers promptly dubbed the new property self-torque—and not only is it a newly discovered property of light, it is also one that had never even been predicted.
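For background, in standard optics notation (this is textbook vortex-beam physics, not the paper’s exact formalism), a vortex beam carries a helical phase, and self-torque corresponds to that twist changing along the pulse:

```latex
% A vortex beam carries a helical phase factor; \ell is the orbital
% angular momentum (OAM) per photon, in units of \hbar:
E(r,\phi,z) \;\propto\; A(r,z)\, e^{i \ell \phi}
% Self-torque: the OAM itself varies in time along the pulse,
\xi \;=\; \frac{d\ell(t)}{dt} \;\neq\; 0
% so the corkscrew's twist tightens (or loosens) from front to back,
% which is why the beam's cross-section appears as a crescent.
```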

Video: A new property of light beams, the self-torque of light, which is associated with a temporal variation of the orbital angular momentum. Extreme-ultraviolet ultrafast pulses with self-torque are generated through high harmonic generation. Credit: JILA (USA), Rebecca Jacobson; Servicio de Producción e Innovación Digital, Universidad de Salamanca (Spain)

The researchers suggest that it should be possible to use their technique to modulate the orbital angular momentum of light in ways very similar to modulating frequencies in communications equipment. This could lead to the development of novel devices that make use of manipulating extremely tiny materials.

Source: New property of light discovered

Researchers teleport information within a diamond

Researchers from the Yokohama National University have teleported quantum information securely within the confines of a diamond. The study has big implications for quantum information technology—the future of sharing and storing sensitive information. The researchers published their results on June 28, 2019, in Communications Physics.

“Quantum teleportation permits the transfer of quantum information into an otherwise inaccessible space,” said Hideo Kosaka, a professor of engineering at Yokohama National University and an author on the study. “It also permits the transfer of information into a quantum memory without revealing or destroying the stored quantum information.”

The inaccessible space, in this case, consisted of carbon atoms in diamond. Made of linked, yet individually contained, carbon atoms, a diamond holds the perfect conditions for quantum teleportation.

A carbon atom holds six protons and six neutrons in its nucleus, surrounded by six spinning electrons. As the atoms bond into a diamond, they form a notably strong lattice. However, diamonds can have complex defects, such as when a nitrogen atom exists in one of two adjacent vacancies where carbon atoms should be. This defect is called a nitrogen-vacancy center.

Surrounded by carbon atoms, the nucleus structure of the nitrogen atom creates what Kosaka calls a nanomagnet.

To manipulate an electron and a carbon isotope in the vacancy, Kosaka and the team attached a wire about a quarter the width of a human hair to the surface of a diamond. They applied a microwave and a radio wave to the wire to build an oscillating magnetic field around the diamond. They shaped the microwave to create the optimal, controlled conditions for the transfer of quantum information within the diamond.

Kosaka then used the nitrogen nanomagnet to anchor an electron. Using the microwave and radio waves, Kosaka forced the electron spin to entangle with a carbon nuclear spin—the angular momentum of the electron and the nucleus of a carbon atom. The electron spin breaks down under a magnetic field created by the nanomagnet, making it susceptible to entanglement. Once the two pieces are entangled, meaning their physical characteristics are so intertwined they cannot be described individually, a photon that holds quantum information is introduced, and the electron absorbs the photon. The absorption allows the polarization state of the photon to be transferred into the carbon, mediated by the entangled electron, demonstrating a teleportation of information at the quantum level.
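In generic textbook notation (a standard teleportation sketch, not the paper’s exact formalism), the transfer described above looks roughly like this:

```latex
% Unknown photon polarization state to be stored:
|\psi\rangle_{p} \;=\; \alpha\,|H\rangle + \beta\,|V\rangle
% Electron spin entangled with the carbon nuclear spin:
|\Phi\rangle_{e,C} \;=\; \tfrac{1}{\sqrt{2}}\bigl(
    |\!\uparrow\rangle_{e}|\!\uparrow\rangle_{C}
  + |\!\downarrow\rangle_{e}|\!\downarrow\rangle_{C}\bigr)
% After the electron absorbs the photon and is measured, the carbon
% nuclear spin is left holding the amplitudes \alpha and \beta:
|\psi\rangle_{C} \;=\; \alpha\,|\!\uparrow\rangle_{C} + \beta\,|\!\downarrow\rangle_{C}
```

The key point is that the photon’s state is never read out directly; it ends up encoded in the carbon spin via the entangled electron.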

“The success of the photon storage in the other node establishes the entanglement between two adjacent nodes,” Kosaka said. This process, the basis of quantum repeaters, can take individual chunks of information from node to node across a quantum network.

“Our ultimate goal is to realize scalable quantum repeaters for long-haul quantum communications and distributed quantum computers for large-scale quantum computation and metrology,” Kosaka said.

Source: Researchers teleport information within a diamond

That this AI can simulate universes in 30ms is not the scary part. It’s that its creators don’t know why it works so well

The accuracy of the neural network is judged by how similar its outputs are to those of two more traditional N-body simulation systems, FastPM and 2LPT, when all three are given the same inputs. When D3M was tasked with producing 1,000 simulations from 1,000 sets of input data, it had a relative error of 2.8 per cent compared to FastPM, and 9.3 per cent compared to 2LPT, for the same inputs. That’s not too bad, considering it takes the model just 30 milliseconds to crank out a simulation. Not only does that save time, it’s also cheaper, since less compute power is needed.
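One plausible reading of “relative error” between a predicted and a reference simulation output (an assumption for illustration; the paper’s exact metric may differ) is the norm of the difference divided by the norm of the reference:

```python
import math

# Illustrative relative-error metric between two simulation outputs,
# treated here as flat lists of values.
def relative_error(predicted, reference):
    diff = math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference)))
    norm = math.sqrt(sum(r ** 2 for r in reference))
    return diff / norm

fastpm = [1.00, 2.00, 3.00, 4.00]   # made-up reference values
d3m    = [1.02, 1.97, 3.05, 3.93]   # made-up "fast model" values
print(f"{relative_error(d3m, fastpm):.1%}")
```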

To their surprise, the researchers also noticed that D3M seemed to be able to produce simulations of the universe from conditions that weren’t specifically included in the training data. During inference tests, the team tweaked input variables such as the amount of dark matter in the virtual universes, and the model still managed to spit out accurate simulations despite not being specifically trained for these changes.

“It’s like teaching image recognition software with lots of pictures of cats and dogs, but then it’s able to recognize elephants,” said Shirley Ho, first author of the paper and a group leader at the Flatiron Institute. “Nobody knows how it does this, and it’s a great mystery to be solved.

“We can be an interesting playground for a machine learner to use to see why this model extrapolates so well, why it extrapolates to elephants instead of just recognizing cats and dogs. It’s a two-way street between science and deep learning.”

The source code for the neural networks can be found here.

Source: That this AI can simulate universes in 30ms is not the scary part. It’s that its creators don’t know why it works so well • The Register

EU should ban AI-powered citizen scoring and mass surveillance, say experts

A group of policy experts assembled by the EU has recommended that it ban the use of AI for mass surveillance and mass “scoring of individuals”, a practice that potentially involves collecting varied data about citizens — everything from criminal records to their behavior on social media — and then using it to assess their moral or ethical integrity.

The recommendations are part of the EU’s ongoing efforts to establish itself as a leader in so-called “ethical AI.” Earlier this year, it released its first guidelines on the topic, stating that AI in the EU should be deployed in a trustworthy and “human-centric” manner.

The new report offers more specific recommendations. These include identifying areas of AI research that require funding; encouraging the EU to incorporate AI training into schools and universities; and suggesting new methods to monitor the impact of AI. However, the paper is only a set of recommendations at this point, and not a blueprint for legislation.

Notably, the suggestions that the EU should ban AI-enabled mass scoring and limit mass surveillance are some of the report’s relatively few concrete recommendations. (Often, the report’s authors simply suggest that further investigation is needed in this or that area.)

Source: EU should ban AI-powered citizen scoring and mass surveillance, say experts – The Verge

Google’s new reCaptcha forces page admins to put it on EVERY page so Google can track you everywhere

According to tech statistics website Built With, more than 650,000 websites are already using reCaptcha v3; overall, at least 4.5 million websites use reCaptcha, including 25% of the top 10,000 sites. Google is also now testing an enterprise version of reCaptcha v3, where Google creates a customized reCaptcha for enterprises that are looking for more granular data about users’ risk levels, to protect their site algorithms from malicious users and bots.

But this new, risk-score based system comes with a serious trade-off: users’ privacy.

According to two security researchers who’ve studied reCaptcha, one of the ways that Google determines whether you’re a malicious user or not is whether you already have a Google cookie installed on your browser. It’s the same cookie that allows you to open new tabs in your browser and not have to re-log in to your Google account every time. But according to Mohamed Akrout, a computer science PhD student at the University of Toronto who has studied reCaptcha, it appears that Google is also using its cookies to determine whether someone is a human in reCaptcha v3 tests. Akrout wrote in an April paper about how reCaptcha v3 simulations that ran on a browser with a connected Google account received lower risk scores than browsers without a connected Google account. “If you have a Google account it’s more likely you are human,” he says. Google did not respond to questions about the role that Google cookies play in reCaptcha.

With reCaptcha v3, tests by technology consultant Marcos Perona and by Akrout both found that their reCaptcha scores were always low risk when they visited a test website in a browser where they were already logged into a Google account. Conversely, if they went to the test website through Tor or over a VPN, their scores were high risk.

To make this risk-score system work accurately, website administrators are supposed to embed reCaptcha v3 code on all of the pages of their website, not just on forms or log-in pages. reCaptcha then learns over time how the website’s users typically act, helping the machine learning algorithm underlying it generate more accurate risk scores. Because reCaptcha v3 is likely to be on every page of a website, if you’re signed into your Google account there’s a chance Google is getting data about every single webpage you go to that is embedded with reCaptcha v3—and there may be no visual indication on the site that it’s happening, beyond a small reCaptcha logo hidden in the corner.
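For context on how a site consumes these scores: Google’s documented siteverify endpoint returns, for v3, a `score` between 0.0 (likely a bot) and 1.0 (likely a human), and leaves the policy entirely up to the site. Below is a minimal sketch; the secret key, the 0.5 threshold, and the three-way allow/challenge/block policy are illustrative assumptions, not part of Google’s API.

```python
import json
import urllib.parse
import urllib.request

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def verify_token(secret_key: str, token: str) -> dict:
    """POST the client-side token to Google's siteverify endpoint.

    For reCaptcha v3 the JSON response includes a `score` between
    0.0 (likely a bot) and 1.0 (likely a human).
    """
    data = urllib.parse.urlencode(
        {"secret": secret_key, "response": token}
    ).encode()
    with urllib.request.urlopen(VERIFY_URL, data=data) as resp:
        return json.load(resp)

def interpret_assessment(assessment: dict, threshold: float = 0.5) -> str:
    """Map a siteverify response to a site-side decision.

    The 0.5 threshold and the allow/challenge/block policy are
    arbitrary examples; Google leaves this choice to the site.
    """
    if not assessment.get("success"):
        return "block"      # token invalid or expired
    if assessment.get("score", 0.0) >= threshold:
        return "allow"      # treated as human
    return "challenge"      # low score: ask for extra verification

# A low-risk response (e.g. a logged-in Google user)...
print(interpret_assessment({"success": True, "score": 0.9}))  # allow
# ...versus the kind of score the researchers saw from Tor/VPN sessions.
print(interpret_assessment({"success": True, "score": 0.1}))  # challenge
```

In practice a site would call `verify_token` with the token the reCaptcha JavaScript posts from each page, then act on `interpret_assessment`’s decision.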

Source: Google’s new reCaptcha has a dark side

8 of the world’s top tech companies pwned for years by China

Eight of the world’s biggest technology service providers were hacked by Chinese cyber spies in an elaborate and years-long invasion, Reuters found. The invasion exploited weaknesses in those companies, their customers, and the Western system of technological defense.

[…]

The hacking campaign, known as “Cloud Hopper,” was the subject of a U.S. indictment in December that accused two Chinese nationals of identity theft and fraud. Prosecutors described an elaborate operation that victimized multiple Western companies but stopped short of naming them. A Reuters report at the time identified two: Hewlett Packard Enterprise and IBM.

Yet the campaign ensnared at least six more major technology firms, touching five of the world’s 10 biggest tech service providers.

Also compromised by Cloud Hopper, Reuters has found: Fujitsu, Tata Consultancy Services, NTT Data, Dimension Data, Computer Sciences Corporation and DXC Technology. HPE spun off its services arm in a 2017 merger with Computer Sciences Corporation to create DXC.

Waves of hacking victims emanate from those six plus HPE and IBM: their clients. Ericsson, which competes with Chinese firms in the strategically critical mobile telecoms business, is one. Others include travel reservation system Sabre, the American leader in managing plane bookings, and the largest shipbuilder for the U.S. Navy, Huntington Ingalls Industries, which builds America’s nuclear submarines at a Virginia shipyard.

“This was the theft of industrial or commercial secrets for the purpose of advancing an economy,” said former Australian National Cyber Security Adviser Alastair MacGibbon. “The lifeblood of a company.”

[…]

The corporate and government response to the attacks was undermined as service providers withheld information from hacked clients, out of concern over legal liability and bad publicity, records and interviews show. That failure, intelligence officials say, calls into question Western institutions’ ability to share information in the way needed to defend against elaborate cyber invasions. Even now, many victims may not be aware they were hit.

The campaign also highlights the security vulnerabilities inherent in cloud computing, an increasingly popular practice in which companies contract with outside vendors for remote computer services and data storage.

[…]

For years, the company’s predecessor, technology giant Hewlett Packard, didn’t even know it had been hacked. It first found malicious code stored on a company server in 2012. The company called in outside experts, who found infections dating to at least January 2010.

Hewlett Packard security staff fought back, tracking the intruders, shoring up defenses and executing a carefully planned expulsion to simultaneously knock out all of the hackers’ known footholds. But the attackers returned, beginning a cycle that continued for at least five years.

The intruders stayed a step ahead. They would grab reams of data before planned eviction efforts by HP engineers. Repeatedly, they took whole directories of credentials, a brazen act netting them the ability to impersonate hundreds of employees.

The hackers knew exactly where to retrieve the most sensitive data and littered their code with expletives and taunts. One hacking tool contained the message “FUCK ANY AV” – referencing their victims’ reliance on anti-virus software. The name of a malicious domain used in the wider campaign appeared to mock U.S. intelligence: “nsa.mefound.com”

Then things got worse, documents show.

After a 2015 tip-off from the U.S. Federal Bureau of Investigation about infected computers communicating with an external server, HPE combined three probes it had underway into one effort called Tripleplay. Up to 122 HPE-managed systems and 102 systems designated to be spun out into the new DXC operation had been compromised, a late 2016 presentation to executives showed.

[…]

According to Western officials, the attackers were multiple Chinese government-backed hacking groups. The most feared was known as APT10 and directed by the Ministry of State Security, U.S. prosecutors say. National security experts say the Chinese intelligence service is comparable to the U.S. Central Intelligence Agency, capable of pursuing both electronic and human spying operations.

[…]

It’s impossible to say how many companies were breached through the service provider that originated as part of Hewlett Packard, then became Hewlett Packard Enterprise and is now known as DXC.

[…]

HP management only grudgingly allowed its own defenders the investigation access they needed and cautioned against telling Sabre everything, the former employees said. “Limiting knowledge to the customer was key,” one said. “It was incredibly frustrating. We had all these skills and capabilities to bring to bear, and we were just not allowed to do that.”

[…]

The threat also reached into the U.S. defense industry.

In early 2017, HPE analysts saw evidence that Huntington Ingalls Industries, a significant client and the largest U.S. military shipbuilder, had been penetrated by the Chinese hackers, two sources said. Computer systems owned by a subsidiary of Huntington Ingalls were connecting to a foreign server controlled by APT10.

During a private briefing with HPE staff, Huntington Ingalls executives voiced concern the hackers could have accessed data from its biggest operation, the Newport News, Va., shipyard where it builds nuclear-powered submarines, said a person familiar with the discussions. It’s not clear whether any data was stolen.

[…]

Like many Cloud Hopper victims, Ericsson could not always tell what data was being targeted. Sometimes, the attackers appeared to seek out project management information, such as schedules and timeframes. Another time they went after product manuals, some of which were already publicly available.

[…]

[M]uch of Cloud Hopper’s activity has been deliberately kept from public view, often at the urging of corporate victims.

In an effort to keep information under wraps, security staff at the affected managed service providers were often barred from speaking even to other employees not specifically added to the inquiries.

In 2016, HPE’s office of general counsel for global functions issued a memo about an investigation codenamed White Wolf. “Preserving confidentiality of this project and associated activity is critical,” the memo warned, stating without elaboration that the effort “is a sensitive matter.” Outside the project, it said, “do not share any information about White Wolf, its effect on HPE, or the activities HPE is taking.”

The secrecy was not unique to HPE. Even when the government alerted technology service providers, the companies would not always pass on warnings to clients, Jeanette Manfra, a senior cybersecurity official with the U.S. Department of Homeland Security, told Reuters.

Source: Stealing Clouds

Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites (note: there are lots of them, influencing your unconscious to buy!)

Dark patterns are user interface design choices that benefit an online service by coercing, steering, or deceiving users into making unintended and potentially harmful decisions. We present automated techniques that enable experts to identify dark patterns on a large set of websites. Using these techniques, we study shopping websites, which often use dark patterns to influence users into making more purchases or disclosing more information than they would otherwise. Analyzing ∼53K product pages from ∼11K shopping websites, we discover 1,841 dark pattern instances, together representing 15 types and 7 categories. We examine the underlying influence of these dark patterns, documenting their potential harm on user decision-making. We also examine these dark patterns for deceptive practices, and find 183 websites that engage in such practices. Finally, we uncover 22 third-party entities that offer dark patterns as a turnkey solution. Based on our findings, we make recommendations for stakeholders including researchers and regulators to study, mitigate, and minimize the use of these patterns.

Dark patterns [31,47] are user interface design choices that benefit an online service by coercing, steering, or deceiving users into making decisions that, if fully informed and capable of selecting alternatives, they might not make. Such interface design is an increasingly common occurrence on digital platforms including social media [45] and shopping websites [31], mobile apps [5,30], and video games [83]. At best, dark patterns annoy and frustrate users. At worst, dark patterns can mislead and deceive users, e.g., by causing financial loss [1,2], tricking users into giving up vast amounts of personal data [45], or inducing compulsive and addictive behavior in adults [71] and children [20].

While prior work [30,31,37,47] has provided a starting point for describing the types of dark patterns, there is no large-scale evidence documenting the prevalence of dark patterns, or a systematic and descriptive investigation of how the various different types of dark patterns harm users. If we are to develop countermeasures against dark patterns, we first need to examine where, how often, and the technical means by which dark patterns appear, and second, we need to be able to compare and contrast how various dark patterns influence user decision-making. By doing so, we can both inform users about and protect them from such patterns, and, given that many of these patterns are unlawful, aid regulatory agencies in addressing and mitigating their use.

In this paper, we present an automated approach that enables experts to identify dark patterns at scale on the web. Our approach relies on (1) a web crawler, built on top of OpenWPM [24,39]—a web privacy measurement platform—to simulate a user browsing experience and identify user interface elements; (2) text clustering to extract recurring user interface designs from the resulting data; and (3) inspecting the resulting clusters for instances of dark patterns.

We also develop a novel taxonomy of dark pattern characteristics so that researchers and regulators can use descriptive and comparative terminology to understand how dark patterns influence user decision-making.

While our automated approach generalizes, we focus this study on shopping websites. Dark patterns are especially common on shopping websites, used by an overwhelming majority of the American public [75], where they trick users into signing up for recurring subscriptions and making unwanted purchases, resulting in concrete financial loss. We use our web crawler to visit the ∼11K most popular shopping websites worldwide, and from the resulting analysis create a large data set of dark patterns and document their prevalence. In doing so, we discover several new instances and variations of previously documented dark patterns [31,47]. We also classify the dark patterns we encounter using our taxonomy of dark pattern characteristics.
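The paper’s pipeline (crawl product pages, cluster recurring interface text, then inspect clusters by hand) can be illustrated with a toy version of the clustering step. This sketch uses Python’s standard-library `difflib` and a greedy similarity threshold purely for illustration; the authors’ actual crawler and clustering method differ.

```python
from difflib import SequenceMatcher

def cluster_ui_texts(texts, threshold=0.8):
    """Greedily group near-duplicate interface strings.

    Recurring templates like "Only N left in stock!" end up in the
    same cluster, which an expert can then inspect for dark patterns.
    The greedy first-match strategy and 0.8 cut-off are illustrative.
    """
    clusters = []
    for text in texts:
        for cluster in clusters:
            # Compare against the cluster's representative (first member).
            if SequenceMatcher(None, text.lower(),
                               cluster[0].lower()).ratio() >= threshold:
                cluster.append(text)
                break
        else:
            clusters.append([text])  # no close match: start a new cluster
    return clusters

snippets = [
    "Only 3 left in stock!",
    "Only 12 left in stock!",
    "Free shipping on orders over $50",
    "Only 5 left in stock!",
]
clusters = cluster_ui_texts(snippets)
for c in clusters:
    print(len(c), "->", c[0])
```

On this toy input, the three scarcity messages collapse into one cluster and the shipping banner stands alone, mirroring how templated dark-pattern text surfaces as large clusters in the real data.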

We have five main findings:

• We discovered 1,841 instances of dark patterns on shopping websites, which together represent 15 types of dark patterns and 7 broad categories.

• These 1,841 dark patterns were present on 1,267 of the ∼11K shopping websites (∼11.2%) in our data set. Shopping websites that were more popular, according to Alexa rankings [9], were more likely to feature dark patterns. This represents a lower bound on the number of dark patterns on these websites, since our automated approach only examined text-based user interfaces on a sample of product pages per website.

• Using our taxonomy of dark pattern characteristics, we classified the dark patterns we discovered on the basis of whether they lead to an asymmetry of choice, are covert in their effect, are deceptive in nature, hide information from users, and restrict choice. We also map the dark patterns in our data set to the cognitive biases they exploit. These biases collectively describe the consumer psychology underpinnings of the dark patterns we identified.

• In total, we uncovered 234 instances of deceptive dark patterns across 183 websites. We highlight the types of dark patterns we discovered that rely on consumer deception.

• We identified 22 third-party entities that provide shopping websites with the ability to create dark patterns on their sites. Two of these entities openly advertised practices that enable deceptive messages.

[…]

We developed a taxonomy of dark pattern characteristics that allows researchers, policy-makers, and journalists to have a descriptive, comprehensive, and comparative terminology for understanding the potential harm and impact of dark patterns on user decision-making. Our taxonomy is based upon the literature on online manipulation [33,74,81] and dark patterns highlighted in previous work [31,47], and it consists of the following five dimensions, each of which poses a possible barrier to user decision-making:

• Asymmetric: Does the user interface design impose unequal weights or burdens on the available choices presented to the user in the interface? (We narrow the scope of asymmetry to only refer to explicit choices in the interface.) For instance, a website may present a prominent button to accept cookies but hide the opt-out button on another page.

• Covert: Is the effect of the user interface design choice hidden from users? A website may design its interface to steer users into making specific purchases without their knowledge. Often, websites achieve this by exploiting users’ cognitive biases, which are deviations from rational behavior justified by some “biased” line of reasoning [50]. As a concrete example, a website may leverage the Decoy Effect [51], a cognitive bias in which an additional choice—the decoy—is introduced to make certain other choices seem more appealing. Users may fail to recognize that the decoy is there merely to influence their decision-making, making its effect hidden from them.

• Deceptive: Does the user interface design induce false beliefs either through affirmative misstatements, misleading statements, or omissions? For example, a website may offer a discount to users that appears to be limited-time but actually repeats when they visit the site again. Users may be aware that the website is trying to offer them a deal or sale; however, they may not realize that the influence is grounded in a false belief—in this case, because the discount is recurring. This false belief affects users’ decision-making, i.e., they may act differently if they knew the sale repeats.

• Hides Information: Does the user interface obscure or delay the presentation of necessary information to the user? For example, a website may not disclose, or may hide or delay, the presentation of information about charges related to a product.

• Restrictive: Does the user interface restrict the set of choices available to users? For instance, a website may only allow users to sign up for an account with existing social media accounts such as Facebook or Google, so it can gather more information about them.
In Section 5, we also draw an explicit connection between each dark pattern we discover and the cognitive biases they exploit. The biases we refer to in our findings are:

(1) Anchoring Effect [77]: The tendency for individuals to rely too heavily on an initial piece of information—the “anchor”—in future decisions.

(2) Bandwagon Effect [72]: The tendency for individuals to value something more because others seem to value it.

(3) Default Effect [53]: The tendency of individuals to stick with options that are assigned to them by default, due to the inertia in the effort required to change the option.

(4) Framing Effect [78]: The phenomenon that individuals may reach different decisions from the same information depending on how it is presented or “framed”.

(5) Scarcity Bias [62]: The tendency of individuals to place a higher value on things that are scarce.

(6) Sunk Cost Fallacy [28]: The tendency for individuals to continue an action if they have invested resources (e.g., time and money) into it, even if that action would make them worse off.
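As an illustration only, here is a toy keyword matcher connecting shopping-site messages to two of the biases above. The regexes and trigger phrases are invented for this sketch; the paper’s mapping from dark patterns to biases was done by expert analysis, not pattern matching.

```python
import re

# Invented trigger phrases for illustration -- not the paper's method.
BIAS_PATTERNS = {
    "Scarcity Bias": re.compile(
        r"only \d+ left|selling fast|low stock", re.I),
    "Bandwagon Effect": re.compile(
        r"\d+ (?:people|others) (?:bought|are viewing)", re.I),
    "Anchoring Effect": re.compile(
        r"was \$\d+|list price", re.I),
}

def likely_biases(message: str) -> list:
    """Return the biases whose trigger phrases appear in a UI message."""
    return [name for name, pat in BIAS_PATTERNS.items()
            if pat.search(message)]

print(likely_biases("Hurry! Only 2 left in stock"))
print(likely_biases("17 people are viewing this right now"))
```

A real classifier would need far more than keyword rules, but the sketch shows why templated messages (“Only N left!”, “N people bought this”) are amenable to automated detection at crawl scale.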
[…]
We discovered a total of 22 third-party entities, embedded in 1,066 of the 11K shopping websites in our data set, and in 7,769 of the Alexa top million websites. We note that the prevalence figures from the Princeton Web Census Project data should be taken as a lower bound, since their crawls are limited to home pages of websites. […] we discovered that many shopping websites only embedded them in their product—and not home—pages, presumably for functionality and performance reasons.

[…]
Many of the third parties advertised practices that appeared to be—and sometimes unambiguously were—manipulative: “[p]lay upon [customers’] fear of missing out by showing shoppers which products are creating a buzz on your website” (Fresh Relevance), “[c]reate a sense of urgency to boost conversions and speed up sales cycles with Price Alert Web Push” (Insider), “[t]ake advantage of impulse purchases or encourage visitors over shipping thresholds” (Qubit). Further, Qubit also advertised Social Proof Activity Notifications that could be tailored to users’ preferences and backgrounds.

In some instances, we found that third parties openly advertised the deceptive capabilities of their products. For example, Boost dedicated a web page—titled “Fake it till you make it”—to describing how it could help create fake orders [12]. Woocommerce Notification—a Woocommerce platform plugin—also advertised that it could create fake social proof messages: “[t]he plugin will create fake orders of the selected products” [23]. Interestingly, certain third parties (Fomo, Proof, and Boost) used Social Proof Activity Messages on their own websites to promote their products.
[…]
These practices are unambiguously unlawful in the United States (under Section 5 of the Federal Trade Commission Act and similar state laws [43]), the European Union (under the Unfair Commercial Practices Directive and similar member state laws [40]), and numerous other jurisdictions.

We also find practices that are unlawful in a smaller set of jurisdictions. In the European Union, businesses are bound by an array of affirmative disclosure and independent consent requirements in the Consumer Rights Directive [41]. Websites that use the Sneaking dark patterns (Sneak into Basket, Hidden Subscription, and Hidden Costs) on European Union consumers are likely in violation of the Directive. Furthermore, user consent obtained through Trick Questions and Visual Interference dark patterns does not constitute freely given, informed and active consent as required by the General Data Protection Regulation (GDPR) [42]. In fact, the Norwegian Consumer Council filed a GDPR complaint against Google in 2018, arguing that Google used dark patterns to manipulate users into turning on the “Location History” feature on Android, thus enabling constant location tracking [46].

Source: Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites Draft: June 25, 2019 – dark-patterns.pdf