The Linkielist

Linking ideas with the world


EU: These are scary times – let’s backdoor encryption and make everyone unsafe!

The EU has shared its plans to ostensibly keep the continent’s denizens secure – and among the pages of bureaucratese are a few worrying sections that indicate the political union wants to backdoor encryption by 2026, or even sooner.

While the superstate has made noises about backdooring encryption before, the ProtectEU plan [PDF], launched on Monday, says the European Commission wants to develop a roadmap to allow “lawful and effective access to data for law enforcement in 2025” and a technology roadmap to do so by the following year.

“We are working on a roadmap now, and we will look at what is technically also possible,” said Henna Virkkunen, executive vice-president of the EC for tech sovereignty, security and democracy. “The problem is now that our law enforcement, they have been losing ground on criminals because our police investigators, they don’t have access to data,” she added.

“Of course, we want to protect the privacy and cyber security at the same time; and that’s why we have said here that now we have to prepare a technical roadmap to watch for that, but it’s something that we can’t tolerate, that we can’t take care of the security because we don’t have tools to work in this digital world.”

She claimed that in “85 percent” of police cases, law enforcement couldn’t access the data it needed. The proposal is to amend the existing Cybersecurity Act to allow these changes.

According to the document, the EC will set up a Security Research & Innovation Campus at its Joint Research Centre in 2026 to, somehow, work out the technical details. Since it’s impossible to backdoor encryption in a way that can’t be potentially exploited by others, it seems a very odd move to make if security’s your goal.

China, Russia, and the US certainly would spend a huge amount of time and money to find the backdoor. Even American law enforcement has given up on the cause of backdooring, although the UK still seems to be wedded to the idea.

In the meantime, for critical infrastructure (and presumably government communications), the EC wants to deploy post-quantum cryptography across the bloc, and to have it in place by 2030 at the latest.

[…]

Source: EU: These are scary times – let’s backdoor encryption! • The Register

Proton may roll away from the Swiss

The EC’s not alone in proposing changes to privacy – new laws outlined in Switzerland could force privacy-focused groups such as Proton out of the country.

Under today’s laws, police can obtain data from services like Proton for some crimes if they can get a court order. Under the proposed laws, no court order would be required – and that, said cofounder Andy Yen, means Proton would leave the country.

“Swiss surveillance would be significantly stricter than in the US and the EU, and Switzerland would lose its competitiveness as a business location,” Proton’s cofounder told Swiss title Der Bund. “We feel compelled to leave Switzerland if the partial revision of the surveillance law planned by the Federal Council comes into force.”

The EU keeps banging away at this. It tried in 2018, 2020, 2021, 2023, and 2024, and fortunately it keeps getting stopped by people with enough brains to realise that you cannot have a safe backdoor: for encryption to be secure, it needs to be unbreakable for everyone.

https://www.linkielist.com/?s=eu+encryption


T-Mobile SyncUP Bug Reveals Names, Images, and Locations of Random Children

T-Mobile sells a little-known GPS service called SyncUP, which lets parents monitor the locations of their children. This week, an apparent glitch in the service hid the locations of users’ own children while sending them detailed information about, and the locations of, other, random children.

404 Media first reported on the extremely creepy bug, which appears to have impacted a large number of users. The outlet notes an outpouring of consternation and concern from web users on social platforms like Reddit and X, many of whom claimed to have been impacted. 404 also interviewed one user, “Jenna,” who described her ordeal with the bug:

Jenna, a parent who uses SyncUP to keep track of her three-year-old and six-year-old children, logged in Tuesday and instead of seeing if her kids had left school yet, was shown the exact, real-time locations of eight random children around the country, but not the locations of her own kids. 404 Media agreed to use a pseudonym for Jenna to protect the privacy of her kids.

“I’m not comfortable giving my six-year-old a phone, but he takes a school bus and I just want to be able to see where he is in real time,” Jenna said. “I had put a 500 meter boundary around his school, so I get an alert when he’s leaving.”
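T-Mobile hasn’t published how SyncUP implements these boundary alerts, but the feature Jenna describes is a standard geofence check: compare the device’s reported position against a circle around a fixed point. A minimal sketch in Python (coordinates and function names are hypothetical, not SyncUP’s actual implementation):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    r = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def outside_geofence(device, centre, radius_m=500):
    """True when the tracked device has left the circular boundary."""
    return haversine_m(device[0], device[1], centre[0], centre[1]) > radius_m

# A point roughly 1.1 km north of the fence centre triggers the alert;
# a point a few metres away does not.
school = (40.7128, -74.0060)
print(outside_geofence((40.7228, -74.0060), school))  # True
print(outside_geofence((40.7129, -74.0060), school))  # False
```

An alerting service would run this check each time the tracker reports a fix, firing a notification on the transition from inside to outside.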

Jenna sent 404 Media a series of screenshots that show her logged into the app, as well as the locations of children in other states. In the screenshots, the children’s address-level locations are visible, along with their names and the last time each location was updated.

Even more alarmingly, the woman interviewed by 404 claims the company didn’t show much concern about the bug. “Jenna” says she called the company and was referred to an employee who told her a ticket had been filed for the issue. A follow-up email from the concerned mother produced no response, she said.

[…]

When reached for comment by Gizmodo, a T-Mobile spokesperson told us: “Yesterday we fully resolved a temporary system issue with our SyncUP products that resulted from a planned technology update. We are in the process of understanding potential impacts to a small number of customers and will reach out to any as needed. We apologize for any inconvenience.”

The privacy implications of such a glitch are obvious and not really worth elaborating on. That said, it’s also a good reminder that the more digital access you give a company, the more potential there is for that access to fall into the wrong hands.

Source: T-Mobile Bug Reveals Names, Images, and Locations of Random Children

Your TV is watching you watch and selling that data

[…] Your TV wants your data

The TV business traditionally included three distinct entities. There’s the hardware, namely the TV itself; the entertainment, like movies and shows; and the ads, usually just commercials that interrupt your movies and shows. In the streaming era, tech companies want to control all three, a setup also known as vertical integration. If, say, Roku makes the TV, supplies the content, and sells the ads, then it stands to control the experience, set the rates, and make the most money. That’s business!

Roku has done this very well. Although it was founded in 2002, Roku broke into the market in 2008 after Netflix invested $6 million in the company to make a set-top box that enabled any TV to stream Netflix content. It was literally called the Netflix Player by Roku. Over the course of the next 15 years, Roku would grow its hardware business to include streaming sticks, which are basically just smaller set-top boxes; wireless soundbars, speakers, and subwoofers; and, after licensing its operating system to third-party TV makers, its own affordable, Roku-branded smart TVs.

[…]

The shift toward ad-supported everything has been happening across the TV landscape. People buy new TVs less frequently these days, so TV makers want to make money off the TVs they’ve already sold. Samsung has Samsung Ads, LG has LG Ad Solutions, Vizio has Vizio Ads, and so on and so forth. Tech companies, notably Amazon and Google, have gotten into the mix too, not only making software and hardware for TVs but also leveraging the massive amount of data they have on their users to sell ads on their TV platforms. These companies also sell data to advertisers and data brokers, all in the interest of knowing as much about you as possible so they can target you more effectively. That data could even be used to train AI.

[…]

Is it possible to escape the ads?

Breaking free from this ad prison is tough. Most TVs on the market today come with a technology called automatic content recognition (ACR) built in. This is basically Shazam for TV – Shazam itself helped popularize the tech – and it gives smart TV platforms the ability to monitor what you’re watching by periodically taking screenshots or capturing audio snippets. (This happens at the signal level, not via actual microphone recordings from the TV.)

Advertisers and TV companies use ACR tech to collect data about your habits that are otherwise hard to track, like if you watch live TV with an antenna. They use that data to build out a profile of you in order to better target ads. ACR also works with devices, like gaming consoles, that you plug into your TV through HDMI cables.
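Real ACR systems use perceptual fingerprints that survive compression and noise, and the matching pipelines are proprietary. But the basic lookup structure – reduce a captured snippet to a compact fingerprint, then match it against a vendor database of known content – can be sketched with a toy exact-hash version in Python (the content names and snippet bytes are made up):

```python
import hashlib

def fingerprint(snippet: bytes) -> str:
    """Reduce a short audio/video sample to a compact, comparable hash.

    Real ACR uses perceptual fingerprints robust to noise and re-encoding;
    an exact hash stands in for that here purely to show the lookup shape.
    """
    return hashlib.sha256(snippet).hexdigest()[:16]

# Reference database: fingerprints of known content, built by the ACR vendor.
known_content = {
    fingerprint(b"moana-2-trailer-audio"): "Moana 2 trailer",
    fingerprint(b"evening-news-intro"): "Evening news",
}

def identify(captured: bytes) -> str:
    """Match a snippet captured from the TV signal against the database."""
    return known_content.get(fingerprint(captured), "unknown")

print(identify(b"moana-2-trailer-audio"))  # Moana 2 trailer
print(identify(b"home-video"))             # unknown
```

Each successful match becomes a viewing event tied to your TV, which is what makes otherwise-untrackable sources, like an antenna or an HDMI input, visible to advertisers.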

Yash Vekaria, a PhD candidate at UC Davis, called the HDMI spying “the most egregious thing we found” in his research for a paper published last year on how ACR technology works. And I have to admit that I had not heard of ACR until I came across Vekaria’s research.

[…]

Unfortunately, you don’t have much of a choice when it comes to ACR on your TV. You probably enabled the technology when you first set up your TV and accepted its privacy policy. If you refuse to do this, a lot of the functions on your TV won’t work. You can also accept the policy and then disable ACR on your TV’s settings, but that could disable certain features too. In 2017, Vizio settled a class-action lawsuit for tracking users by default. If you want to turn off this tracking technology, here’s a good guide from Consumer Reports that explains how for most types of smart TVs.

[…]

it does bug me, just on principle, that I have to let a tech company wiretap my TV in order to enjoy all of the device’s features.

[…]

Source: Roku’s Moana 2 controversy is part of a bigger ad problem | Vox

A Win for Human Rights: France Rejects Backdoor Mandate

In a moment of clarity after initially moving forward a deeply flawed piece of legislation, the French National Assembly has done the right thing: it rejected a dangerous proposal that would have gutted end-to-end encryption in the name of fighting drug trafficking. Despite heavy pressure from the Interior Ministry, lawmakers voted Thursday night (article in French) to strike down a provision that would have forced messaging platforms like Signal and WhatsApp to allow hidden access to private conversations.

The vote is a victory for digital rights, for privacy and security, and for common sense.

The proposed law was a surveillance wishlist disguised as anti-drug legislation. Tucked into its text was a resurrection of the widely discredited “ghost” participant model—a backdoor that pretends not to be one. Under this scheme, law enforcement could silently join encrypted chats, undermining the very idea of private communication. Security experts have condemned the approach, warning it would introduce systemic vulnerabilities, damage trust in secure communication platforms, and create tools ripe for abuse.

The French lawmakers who voted this provision down deserve credit. They listened—not only to French digital rights organizations and technologists, but also to basic principles of cybersecurity and civil liberties. They understood that encryption protects everyone, not just activists and dissidents, but also journalists, medical professionals, abuse survivors, and ordinary citizens trying to live private lives in an increasingly surveilled world.

A Global Signal

France’s rejection of the backdoor provision should send a message to legislatures around the world: you don’t have to sacrifice fundamental rights in the name of public safety. Encryption is not the enemy of justice; it’s a tool that supports our fundamental human rights, including the right to have a private conversation. It is a pillar of modern democracy and cybersecurity.

As governments in the U.S., U.K., Australia, and elsewhere continue to flirt with anti-encryption laws, this decision should serve as a model—and a warning. Undermining encryption doesn’t make society safer. It makes everyone more vulnerable.

[…]

Source: A Win for Encryption: France Rejects Backdoor Mandate | Electronic Frontier Foundation

China bans facial recognition without consent and in places like hotel rooms and public bathrooms, and requires biometric data to be encrypted

China’s Cyberspace Administration and Ministry of Public Security have outlawed the use of facial recognition without consent.

The two orgs last Friday published new rules on facial recognition, plus an explainer, that spell out how orgs wanting to use facial recognition must first conduct a “personal information protection impact assessment” that considers whether using the tech is necessary, its impact on individuals’ privacy, and the risk of data leakage.

Organizations that decide to use facial recognition must encrypt biometric data, and audit the information security techniques and practices they use to protect facial scans.

Chinese organizations that go through that process and decide they want to use facial recognition can only do so after securing individuals’ consent.

The rules also ban the use of facial recognition equipment in public places such as hotel rooms, public bathrooms, public dressing rooms, and public toilets.

The measures don’t apply to researchers or to what machine translation of the rules describes as “algorithm training activities” – suggesting images of citizens’ faces are fair game when used to train AI models.

The documents linked to above don’t mention whether government agencies are exempt from the new rules. The Register fancies Beijing will keep using facial recognition whenever it wants, as it has previously expressed interest in a national identity scheme that uses the tech, and has used it to identify members of ethnic minorities.

Source: China bans facial recognition in hotels, bathrooms • The Register

23andMe files for bankruptcy: How to delete your data before it’s sold off

23andMe has capped off a challenging few years by filing for Chapter 11 bankruptcy today. Given the uncertainty around the future of the DNA testing company and what will happen to all of the genetic data it has collected, now is a critical time for customers to protect their privacy. California Attorney General Rob Bonta has recommended that past customers of the genetic testing business delete their information as a precautionary measure. Here are the steps to deleting your records with 23andMe.

  1. Log into your 23andMe account.
  2. Go to the “Settings” tab of your profile.
  3. Click View on the section called “23andMe Data.”
  4. If you want to retain a copy for your own records, download your data now.
  5. Go to the “Delete Data” section.
  6. Click “Permanently Delete Data.”
  7. You will receive an email from 23andMe confirming the action. Click the link in that email to complete the process.

While the majority of an individual’s personal information will be deleted, 23andMe does keep some information for legal compliance. The details are in the company’s privacy policy.

There are a few other privacy-minded actions customers can take. First, anyone who opted to have 23andMe store their saliva and DNA can request that the sample be destroyed. That choice can be made from the Preferences tab of the account settings menu. Second, you can review whether you granted permission for your genetic data and sample to be used in scientific research. The allowance can also be checked, and revoked if you wish, from the account settings page; it’s listed under Research and Product Consents.

Source: How to delete your 23andMe data

Amazon annihilates Alexa privacy settings, turns on continuous, nonconsensual audio uploading

Even by Amazon standards, this is extraordinarily sleazy: starting March 28, each Amazon Echo device will cease processing audio on-device and instead upload all the audio it captures to Amazon’s cloud for processing, even if you have previously opted out of cloud-based processing:

https://arstechnica.com/gadgets/2025/03/everything-you-say-to-your-echo-will-be-sent-to-amazon-starting-on-march-28/

It’s easy to flap your hands at this bit of thievery and say, “surveillance capitalists gonna surveillance capitalism,” which would confine this fuckery to the realm of ideology (that is, “Amazon is ripping you off because they have bad ideas”). But that would be wrong. What’s going on here is a material phenomenon, grounded in specific policy choices, and by unpacking the material basis for this absolutely unforgivable move, we can understand how we got here – and where we should go next.

Start with Amazon’s excuse for destroying your privacy: they want to do AI processing on the audio Alexa captures, and that is too computationally intensive for on-device processing. But that only raises another question: why does Amazon want to do this AI processing, even for customers who are happy with their Echo as-is, at the risk of infuriating and alienating millions of customers?

For Big Tech companies, AI is part of a “growth story” – a narrative about how these companies that have already saturated their markets will still continue to grow.

[…]

every growth stock eventually stops growing. For Amazon to double its US Prime subscriber base, it will have to establish a breeding program to produce tens of millions of new Americans, raising them to maturity, getting them gainful employment, and then getting them to sign up for Prime. Almost by definition, a dominant firm ceases to be a growing firm, and lives with the constant threat of a stock revaluation as investors’ belief in future growth crumbles and they punch the “sell” button, hoping to liquidate their now-overvalued stock ahead of everyone else.

[…]

The hype around AI serves an important material need for tech companies. By lumping an incoherent set of poorly understood technologies together into a hot buzzword, tech companies can bamboozle investors into thinking that there’s plenty of growth in their future.

[…]

let’s look at the technical dimension of this rug-pull.

How is it possible for Amazon to modify your Echo after you bought it? After all, you own your Echo. It is your property. Every first year law student learns this 18th century definition of property, from Sir William Blackstone:

That sole and despotic dominion which one man claims and exercises over the external things of the world, in total exclusion of the right of any other individual in the universe.

If the Echo is your property, how come Amazon gets to break it? Because we passed a law that lets them. Section 1201 of 1998’s Digital Millennium Copyright Act makes it a felony to “bypass an access control” for a copyrighted work:

https://pluralistic.net/2024/05/24/record-scratch/#autoenshittification

That means that once Amazon reaches over the air to stir up the guts of your Echo, no one is allowed to give you a tool that will let you get inside your Echo and change the software back. Sure, it’s your property, but exercising sole and despotic dominion over it requires breaking the digital lock that controls access to the firmware, and that’s a felony punishable by a five-year prison sentence and a $500,000 fine for a first offense.

[…]

Giving a manufacturer the power to downgrade a device after you’ve bought it, in a way you can’t roll back or defend against, is an invitation to run the playbook of the Darth Vader MBA, in which the manufacturer replies to your outraged squawks with “I am altering the deal. Pray I don’t alter it any further.”

[…]

Amazon says that the recordings your Echo sends to its data centers will be deleted as soon as they’ve been processed by the AI servers. Amazon has made such claims before, and they were lies. Amazon eventually had to admit that its employees and a menagerie of overseas contractors were secretly given millions of recordings to listen to and make notes on:

https://archive.is/TD90k

And sometimes, Amazon just sent these recordings to random people on the internet:

https://www.washingtonpost.com/technology/2018/12/20/amazon-alexa-user-receives-audio-recordings-stranger-through-human-error/

Fool me once, etc. I will bet you a testicle* that Amazon will eventually have to admit that the recordings it harvests to feed its AI are also being retained and listened to by employees, contractors, and, possibly, randos on the internet.

*Not one of mine

Source: Pluralistic: Amazon annihilates Alexa privacy settings, turns on continuous, nonconsensual audio uploading (15 Mar 2025) – Pluralistic: Daily links from Cory Doctorow

How to stop Android from scanning your phone pictures for content and interpreting them

A process called Android System SafetyCore arrived in a recent update for devices running Android 9 and later. It scans a user’s photo library for explicit images and displays content warnings before they are viewed. Google says “the classification of content runs exclusively on your device and the results aren’t shared with Google.”

Naturally, it will also bring similar tech to Google Messages down the line to prevent certain unsolicited images from affecting a receiver.

Google started installing SafetyCore on user devices in November 2024, and there’s no way of opting out or managing the installation. One day, it’s just there.

Users have vented their frustrations about SafetyCore ever since. Although the app can be uninstalled, the consent-less approach that runs throughout Android has nevertheless left many users upset. On Android forks like Xiaomi’s MIUI it can be uninstalled via Settings > Apps > Android System SafetyCore > Uninstall, and on stock Android via Apps (or Apps & Notifications) > Show system apps > SafetyCore > Uninstall or Disable. Reviewers report that in some cases the uninstall option is grayed out and the app can only be disabled, while others complain that it reinstalls on the next update.

The app’s Google Play page is littered with negative reviews, many of which cite its installation without consent.

“In short, it is spyware. We were not informed. It feels like the right to privacy is secondary to Google’s corporate interests,” one reviewer wrote.

Source: Google’s ‘consent-less’ Android tracking probed by academics • The Register

Android tracks you before you start an app – no consent required. Also, it scans your photos.

Research from a leading academic shows Android users have advertising cookies and other gizmos working to build profiles on them even before they open their first app.

Doug Leith, professor and chair of computer systems at Trinity College Dublin, who carried out the research, claims in his write-up that no consent is sought for the various identifiers and that there is no way of opting out of them.

He found various mechanisms operating on the Android system that relay data back to Google via pre-installed apps such as Google Play Services and the Google Play Store, all without users ever opening a Google app.

One of these is the “DSID” cookie, which Google explains in its documentation is used to identify a “signed in user on non-Google websites so that the user’s preference for personalized advertising is respected accordingly.” The “DSID” cookie lasts for two weeks.

Speaking about Google’s description in its documentation, Leith’s research states the explanation is still “rather vague and not as helpful as it might be,” and the main issue is that Google neither seeks consent before dropping the cookie nor offers an opt-out.

Leith says the DSID advertising cookie is created shortly after the user logs into their Google account – part of the Android startup process – with a tracking file linked to that account placed into the Google Play Service’s app data folder.

This DSID cookie is “almost certainly” the primary method Google uses to link analytics and advertising events, such as ad clicks, to individual users, Leith writes in his paper [PDF].

Another tracker, which cannot be removed once created, is the Google Android ID, a device identifier that’s linked to a user’s Google account and created after Google Play Services first connects to the device.

It continues to send data about the device back to Google even after the user logs out of their Google account, and the only way to remove it, and its data, is to factory-reset the device.

Leith said he wasn’t able to ascertain the purpose of the identifier, but his paper notes a code comment, presumably made by a Google dev, acknowledging that this identifier is considered personally identifiable information (PII), likely bringing it into the scope of the European privacy law GDPR – still mostly intact in British law as UK GDPR.

The paper details the various other trackers and identifiers dropped by Google onto Android devices, all without user consent; according to Leith, many of them represent possible violations of data protection law.

Leith approached Google for a response before publishing his findings, publication of which he delayed to allow time for a dialogue.

[…]

The findings come amid something of a recent uproar about another process, Android System SafetyCore, which arrived in a recent update for devices running Android 9 and later. It scans a user’s photo library for explicit images and displays content warnings before they are viewed. Google says “the classification of content runs exclusively on your device and the results aren’t shared with Google.”

Naturally, it will also bring similar tech to Google Messages down the line to prevent certain unsolicited images from affecting a receiver.

Google started installing SafetyCore on user devices in November 2024, and there’s no way of opting out or managing the installation. One day, it’s just there.

Users have vented their frustrations about SafetyCore ever since. Although the app can be uninstalled, the consent-less approach that runs throughout Android has nevertheless left many users upset. On Android forks like Xiaomi’s MIUI it can be uninstalled via Settings > Apps > Android System SafetyCore > Uninstall, and on stock Android via Apps (or Apps & Notifications) > Show system apps > SafetyCore > Uninstall or Disable. Reviewers report that in some cases the uninstall option is grayed out and the app can only be disabled, while others complain that it reinstalls on the next update.

The app’s Google Play page is littered with negative reviews, many of which cite its installation without consent.

“In short, it is spyware. We were not informed. It feels like the right to privacy is secondary to Google’s corporate interests,” one reviewer wrote.

Source: Google’s ‘consent-less’ Android tracking probed by academics • The Register

Mozilla updates its updated Firefox TOS: now more confusing, and it still doesn’t look private

On Wednesday we shared that we’re introducing a new Terms of Use (TOU) and Privacy Notice for Firefox. Since then, we’ve been listening to some of our community’s concerns with parts of the TOU, specifically about licensing. Our intent was just to be as clear as possible about how we make Firefox work, but in doing so we also created some confusion and concern. With that in mind, we’re updating the language to more clearly reflect the limited scope of how Mozilla interacts with user data.

Here’s what the new language will say:

You give Mozilla the rights necessary to operate Firefox. This includes processing your data as we describe in the Firefox Privacy Notice. It also includes a nonexclusive, royalty-free, worldwide license for the purpose of doing as you request with the content you input in Firefox. This does not give Mozilla any ownership in that content. 

In addition, we’ve removed the reference to the Acceptable Use Policy because it seems to be causing more confusion than clarity.

Privacy FAQ

We also updated our Privacy FAQ to better address legal minutia around terms like “sells.” While we’re not reverting the FAQ, we want to provide more detail about why we made the change in the first place.

TL;DR Mozilla doesn’t sell data about you (in the way that most people think about “selling data”), and we don’t buy data about you. We changed our language because some jurisdictions define “sell” more broadly than most people would usually understand that word. Firefox has built-in privacy and security features, plus options that let you fine-tune your data settings.

The reason we’ve stepped away from making blanket claims that “We never sell your data” is because, in some places, the LEGAL definition of “sale of data” is broad and evolving. As an example, the California Consumer Privacy Act (CCPA) defines “sale” as the “selling, renting, releasing, disclosing, disseminating, making available, transferring, or otherwise communicating orally, in writing, or by electronic or other means, a consumer’s personal information by [a] business to another business or a third party” in exchange for “monetary” or “other valuable consideration.”

[…]

Source: An update on our Terms of Use

So this legal definition rhymes with what I would expect “sell” to mean. Don’t transfer my data to a third party – even better, don’t collect my data at all.

It’s a shame, as Firefox is my preferred browser and it’s not based on Google’s browser. So I am looking at the Zen browser and the Floorp browser now.

After Snowden and now Trump, Europe finally begins to worry about US-controlled clouds

In a recent blog post titled “It is no longer safe to move our governments and societies to US clouds,” Bert Hubert, an entrepreneur, software developer, and part-time technical advisor to the Dutch Electoral Council, articulated such concerns.

“We now have the bizarre situation that anyone with any sense can see that America is no longer a reliable partner, and that the entire large-scale US business world bows to Trump’s dictatorial will, but we STILL are doing everything we can to transfer entire governments and most of our own businesses to their clouds,” wrote Hubert.

Hubert didn’t offer data to support that statement, but European Commission stats show that close to half of European enterprises rely on cloud services, a market led by Amazon, Microsoft, Google, Oracle, Salesforce, and IBM – all US-based companies.

While concern about cloud data sovereignty became fashionable back in 2013 when former NSA contractor Edward Snowden disclosed secrets revealing the scope of US signals intelligence gathering and fled to Russia, data privacy worries have taken on new urgency in light of the Trump administration’s sudden policy shifts.

In the tech sphere, those moves include removing members of the US Privacy and Civil Liberties Oversight Board, which safeguards data under the EU-US Data Privacy Framework, and alleged flouting of federal data rules to advance policy goals. Europeans therefore have good reason to wonder how much they can trust data privacy assurances from US cloud providers amid their shows of obsequious deference to the new regime.

And there’s also a practical impetus for the unrest: organizations that use Microsoft Office 2016 and 2019 have to decide whether they want to move to Microsoft’s cloud come October 14, 2025, when support officially ends. Microsoft is encouraging customers to move to Microsoft 365, which is tied to the cloud. But that looks riskier now than it did under less contentious transatlantic relations.

The Register spoke with Hubert about his concerns and the situation in which Europe now finds itself.

[…]

Source: Europe begins to worry about US-controlled clouds • The Register

It was truly unbelievable that the EU was using US clouds in the first place, for many reasons ranging from the technical to cost to privacy, but it just keeps blundering on.

Google pulls plug on Ad blockers such as uBlock Origin by killing Manifest v2

Google’s purge of Manifest v2-based extensions from its Chrome browser is underway, as many users over the past few days may have noticed.

The popular content-blocking add-on uBlock Origin, which is built on Manifest v2, is now automatically disabled for many users of the ubiquitous browser as the V3 rollout continues.

[…]

According to Google, the decision to shift to V3 is all in the name of improving its browser’s security, privacy, and performance. However, the transition to the new specification also means that some extensions will struggle due to limitations in the new API.

In September 2024, the team behind uBlock Origin noted that one of the most significant changes was around the webRequest API, used to intercept and modify network requests. Extensions such as uBlock Origin extensively use the API to block unwanted content before it loads.
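The mechanics are simple to sketch. The snippet below is a conceptual stand-in written in Python (real extension code is JavaScript, and the filter patterns here are made up for illustration) for what a Manifest v2 `webRequest.onBeforeRequest` listener does: inspect each outgoing request’s URL against filter rules and cancel the request before anything loads.

```python
import re

# Hypothetical filter patterns in the spirit of ad-block lists (illustrative only)
BLOCK_PATTERNS = [
    re.compile(r"://ads?\."),        # ad subdomains, e.g. https://ads.example.com
    re.compile(r"/tracking/pixel"),  # tracking-pixel paths
]

def on_before_request(url: str) -> dict:
    """Mimic a Manifest v2 webRequest.onBeforeRequest handler: returning
    {'cancel': True} tells the browser to drop the request before it loads."""
    if any(p.search(url) for p in BLOCK_PATTERNS):
        return {"cancel": True}
    return {"cancel": False}

print(on_before_request("https://ads.example.com/banner.js"))   # {'cancel': True}
print(on_before_request("https://example.com/article.html"))    # {'cancel': False}
```

Manifest v3 removes this blocking form of the listener in favor of declarativeNetRequest’s precompiled rule lists, which is the limitation content blockers like uBlock Origin run into.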

[…]

Ad-blockers and privacy tools are the worst hit by the changes. Affected users – because let’s face it, most Chrome users won’t be using an ad-blocker – can switch to an alternative browser for something like the original experience, or they can switch to a different extension, which is unlikely to have the same capabilities.

In its post, the uBlock Origin team recommends switching to Firefox – a browser that will continue to support Manifest v2 – and using the uBlock Origin extension there.

[…]

Source: Google continues pulling the plug on Manifest v2 • The Register

Gravy Analytics sued for data breach containing location data of millions of smartphones

Gravy Analytics has been sued yet again for allegedly failing to safeguard its vast stores of personal data, which are now feared stolen. And by personal data we mean information including the locations of tens of millions of smartphones, coordinates of which were ultimately harvested from installed apps.

A complaint [PDF], filed in federal court in northern California yesterday, is at least the fourth such lawsuit against Gravy since January, when an unidentified criminal posted screenshots to XSS, a Russian cybercrime forum, to support claims that 17 TB of records had been pilfered from the American analytics outfit’s AWS S3 storage buckets.

The suit filed this week alleges that the massive archive contains the geolocations of people’s phones.

Gravy Analytics subsequently confirmed it suffered some kind of data security breach, which was discovered on January 4, 2025, in a non-compliance report [PDF] filed with the Norwegian Data Protection Authority and obtained by Norwegian broadcaster NRK.

Three earlier lawsuits – filed in New Jersey on January 14 and 30, and in Virginia on January 31 – make similar allegations.

Gravy Analytics and its subsidiary Venntel were banned from selling sensitive location data by the FTC in December 2024, under a proposed order [PDF] to resolve the agency’s complaint against the companies that was finalized on January 15, 2025.

The FTC complaint alleged the firms “used geofencing, which creates a virtual geographical boundary, to identify and sell lists of consumers who attended certain events related to medical conditions and places of worship and sold additional lists that associate individual consumers to other sensitive characteristics.”
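Geofencing itself is unremarkable arithmetic. A minimal sketch of the idea, with made-up coordinates: a location ping is “inside” the fence when its great-circle distance to a point of interest falls under some radius.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

def inside_geofence(point, center, radius_km):
    """True when a location ping falls inside the circular fence."""
    return haversine_km(*point, *center) <= radius_km

place = (40.7580, -73.9855)  # hypothetical point of interest
print(inside_geofence((40.7585, -73.9850), place, 0.2))  # True: ping ~70 m away
print(inside_geofence((40.6892, -74.0445), place, 0.2))  # False: several km away
```

Run that check over a feed of app-harvested coordinates and you have exactly the kind of attendee list the FTC complaint describes.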

[…]

Source: Gravy Analytics soaks up another sueball over data breach • The Register

Unions Sue to Block Elon Musk’s Access to Americans’ Tax and Benefits Records

A coalition of labor organizations representing federal workers and retirees has sued the Department of the Treasury to block it from giving the newly created Department of Government Efficiency, controlled by Elon Musk, access to the federal government’s sensitive payment systems.

After forcing out a security official who opposed the move, Treasury Secretary Scott Bessent granted DOGE workers access to the system last week, according to The New York Times. Despite its name, DOGE is not a government department but rather an ad-hoc group formed by President Trump purportedly tasked with cutting government spending.

The labor organizations behind the lawsuit filed Monday argue that Bessent broke federal privacy and tax confidentiality laws by giving unauthorized DOGE workers, including people like Musk who are not government employees, the ability to view the private information of anyone who pays taxes or receives money from federal agencies.

With access to the Treasury systems, DOGE representatives can potentially view the names, social security numbers, birth dates, mailing addresses, email addresses, and bank information of tens of millions of people who receive tax refunds, social security and disability payments, veterans benefits, or salaries from the federal government, according to the lawsuit.

“The scale of the intrusion into individuals’ privacy is massive and unprecedented,” according to the complaint filed by the Alliance for Retired Americans, the American Federation of Government Employees, and the Service Employees International Union.

[…]

In their lawsuit, the labor organizations argue that federal law prohibits the disclosure of taxpayer information to anyone except Treasury employees who require it for their official duties unless the disclosure is authorized by a specific law, which DOGE’s access to the system is not. DOGE’s access also violates the Privacy Act of 1974, which prohibits disclosure of personal information to unauthorized people and lays out strict procedures for changing those authorizations, which the Trump administration has not followed, according to the suit.

The plaintiffs have asked the Washington, D.C. district court to grant an injunction preventing unauthorized people from accessing the payment systems and to rule the Treasury’s actions unlawful.

Source: Unions Sue to Block Elon Musk’s Access to Americans’ Tax and Benefits Records

Phone Metadata Suddenly Not So ‘Harmless’ When It’s The FBI’s Data Being Harvested

[…] While trying to fend off attacks on Section 215 collections (most of which are governed [in the loosest sense of the word] by the Third Party Doctrine), the NSA and its domestic-facing remora, the FBI, insisted collecting and storing massive amounts of phone metadata was no more a constitutional violation than it was a privacy violation.

Suddenly — thanks to the ongoing, massive compromising of major US telecom firms by Chinese state-sanctioned hackers — the FBI is getting hot and bothered about the bulk collection of its own phone metadata by (gasp!) a government agency. (h/t Kevin Collier on Bluesky)

FBI leaders have warned that they believe hackers who broke into AT&T Inc.’s system last year stole months of their agents’ call and text logs, setting off a race within the bureau to protect the identities of confidential informants, a document reviewed by Bloomberg News shows.

[…]

The data was believed to include agents’ mobile phone numbers and the numbers with which they called and texted, the document shows. Records for calls and texts that weren’t on the AT&T network, such as through encrypted messaging apps, weren’t part of the stolen data.

The agency (quite correctly!) believes the metadata could be used to identify agents, as well as their contacts and confidential sources. Of course it can.

[…]

The issue, of course, is that the Intelligence Community consistently downplayed this exact aspect of the bulk collection, claiming it was no more intrusive than scanning every piece of domestic mail (!) or harvesting millions of credit card records just because the Fourth Amendment (as interpreted by the Supreme Court) doesn’t say the government can’t.

There are real risks to real people who are affected by hacks like these. The same thing applies when the US government does it. It’s not just a bunch of data that’s mostly useless. Harvesting metadata in bulk allows the US government to do the same thing Chinese hackers are doing with it: identifying individuals, sussing out their personal networks, and building from that to turn numbers into adversarial actions — whether it’s the arrest of suspected terrorists or the further compromising of US government agents by hostile foreign forces.
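To make that concrete, here is a toy sketch (all numbers and labels invented) of how little work it takes to turn raw who-called-whom records into a contact graph and flag the “hub” numbers that several distinct callers reach – the kind of analysis that could expose a shared handler or confidential source.

```python
from collections import defaultdict

# Toy call-metadata records: (caller, callee) pairs – no content, “just” metadata
calls = [
    ("agent-1", "source-A"), ("agent-1", "source-A"), ("agent-1", "field-office"),
    ("agent-2", "source-A"), ("agent-2", "source-B"), ("agent-3", "field-office"),
]

# Rebuild each number's personal network purely from who-called-whom
contacts = defaultdict(set)
for caller, callee in calls:
    contacts[caller].add(callee)
    contacts[callee].add(caller)

# Callees reached by several distinct callers stand out as hubs
callers_of = defaultdict(set)
for caller, callee in calls:
    callers_of[callee].add(caller)
hubs = sorted(n for n, cs in callers_of.items() if len(cs) >= 2)
print(hubs)  # ['field-office', 'source-A']
```

A few lines of grouping and counting is all it takes, which is why “just metadata” was never a reassuring phrase.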

The takeaway isn’t the inherent irony. It’s that the FBI and NSA spent years pretending the fears expressed by activists and legislators were overblown. Officials repeatedly claimed the information was of almost zero utility, despite mounting several efforts to protect this collection from being shut down by the federal government. In the end, the phone metadata program (at least as it applies to landlines) was terminated. But there’s more than a hint of egregious hypocrisy in the FBI’s sudden concern about how much can be revealed by “just” metadata.

Source: Phone Metadata Suddenly Not So ‘Harmless’ When It’s The FBI’s Data Being Harvested | Techdirt

Venezuela’s Internet Censorship Sparks Surge in VPN Demand

What’s Important to Know:

  • Venezuela’s Supreme Court fined TikTok US$10 million for failing to prevent viral video challenges that resulted in the deaths of three Venezuelan children.
  • TikTok faced temporary blockades by Internet Service Providers (ISPs) in Venezuela for not paying the fine.
  • ISPs used IP, HTTP, and DNS blocks to restrict access to TikTok and other platforms in early January 2025.
  • While this latest round of blockades was taking place, protests against Nicolás Maduro’s attempt to retain the presidency of Venezuela were happening across the country. The riot police were deployed in all major cities, looking to quell any protesters.
  • A significant surge in demand for VPN services has been observed in Venezuela since the beginning of 2025. Access to some VPN providers’ websites has also been restricted in the country.

In November 2024, Nicolás Maduro announced that two children had died after participating in challenges on TikTok. After a third death was announced by Education Minister Héctor Rodriguez, Venezuela’s Supreme Court issued a $10 million fine against the social media platform for failing to implement measures to prevent such incidents.

The court also ordered TikTok to open an office in Venezuela to oversee content compliance with local laws, giving the platform eight days to comply and pay the fine. TikTok failed to meet the court’s deadline to pay the fine or open an office in the country. As a result, ISPs in Venezuela, including CANTV — the state’s internet provider — temporarily blocked access to TikTok.

The blockades happened on January 7 and later on January 8, lasting several hours each. According to Netblocks.org, various methods were used to restrict access to TikTok, including IP, HTTP, and DNS blocks.

[Screenshot: Netblocks.org report showing zero reachability for TikTok across different Venezuelan ISPs.]

On January 9, under orders from CONATEL (Venezuela’s telecommunications regulator), CANTV and other private ISPs in the country implemented further blockades to restrict access to TikTok. These included blocks on 21 VPN providers along with 33 public DNS services, as reported by VeSinFiltro.org.
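Why block public DNS services too? Because DNS filtering is the easiest block to evade. A toy model, with made-up names and addresses: if the ISP’s resolver refuses to answer for a blocked domain, pointing your device at an unblocked third-party resolver restores access – which is why the order went after the resolvers themselves.

```python
# Toy model of ISP-level DNS blocking (all names and addresses are made up):
# the ISP resolver refuses blocked names; a third-party resolver still answers.
ISP_RESOLVER = {"tiktok.com": None, "example.com": "93.184.216.34"}  # None = blocked
ALT_RESOLVER = {"tiktok.com": "203.0.113.7", "example.com": "93.184.216.34"}

def resolve(name, resolver):
    ip = resolver.get(name)
    if ip is None:
        raise LookupError(f"{name}: no answer (blocked or non-existent)")
    return ip

try:
    resolve("tiktok.com", ISP_RESOLVER)
except LookupError as err:
    print(err)                                 # the ISP resolver censors the name
print(resolve("tiktok.com", ALT_RESOLVER))     # a different resolver answers anyway
```

IP and HTTP-level blocks close that loophole, which in turn pushes users toward VPNs – matching the demand surge described below.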

[…]

vpnMentor’s Research Team first observed a significant surge in demand for VPN services in the country back in 2024, when X was first blocked. Since then, VPN usage has continued to rise in Venezuela, reaching another remarkable surge at the beginning of 2025. VPN demand grew by over 200% from January 7th to the 8th alone, for a total of 328% growth from January 1st to January 8th. This upward trend shows signs of further growth, according to partial data from January 9th.

The increased demand for VPN services indicates a growing interest in circumventing censorship and accessing restricted content online. This trend suggests that Venezuelan citizens are actively seeking ways to bypass government-imposed restrictions on social media platforms and maintain access to a free flow of information.

[…]

Other Recent VPN Demand Surges

Online platforms are no strangers to geoblocks in different parts of the world. In fact, there have been cases where platforms themselves impose location-based access restrictions to users. For instance, Aylo/Pornhub previously geo-blocked 17 US states in response to age-verification laws that the adult site deemed unjust.

vpnMentor’s Research Team recently published a report about a staggering 1,150% VPN demand surge in Florida following the IP-block of Pornhub in the state.

Source: Venezuela’s Internet Censorship Sparks Surge in VPN Demand

VPN Demand Surge in Florida after Adult Sites Age Restriction Kicks In

What’s important to know:

  • On March 25, 2024 Florida’s Gov. Ron DeSantis signed a law requiring age verification for accessing pornographic sites. This law, known as House Bill 3 (HB3), passed with bipartisan support and has caused quite a stir in the online community.
  • HB3 was set to come into effect on January 1, 2025. It allows hefty fines of up to $50,000 for websites that fail to comply with the regulations.
  • In response to this new legislation, Aylo, the parent company of Pornhub, confirmed on December 18, 2024, that it would deny access to all users geo-located in the state as a form of protest against the new age-verification requirements imposed by the state law.
  • Pornhub, which registered 3 billion visits from the United States in January 2024, had previously imposed access restrictions in Kentucky, Indiana, Idaho, Kansas, Nebraska, Texas, North Carolina, Montana, Mississippi, Virginia, Arkansas, and Utah. This makes Florida the 13th state without access to its website.

The interesting development following Aylo’s geo-block on Florida IP addresses is the dramatic increase in the demand for Virtual Private Network (VPN) services in the state. A VPN allows users to mask their IP addresses and encrypt their internet traffic, providing an added layer of privacy and security while browsing online.

The vpnMentor Research Team observed a significant surge in VPN usage across the state of Florida: demand began climbing in the last minutes of 2024, rose consistently through the first hours of January 1st, and peaked at a staggering 1,150% just four hours after the HB3 law came into effect.
Additionally, there was a noteworthy 51% spike in demand for VPN services in the state on December 19, 2024, the day after Aylo released its statement about geo-blocking Florida IP addresses from accessing its website.

Florida’s new law on pornographic websites and the consequent rise of VPN usage emphasize the intricate interplay between technology, privacy, and regulatory frameworks. With laws pertaining to online activities constantly changing, it is imperative for users and website operators alike to remain knowledgeable about regulations and ensure compliance.

Past VPN Demand Surges

Aylo/Pornhub has previously geo-blocked 12 states, all of which have enforced age-verification laws that the adult site deemed unjust.

In May 2023, Pornhub’s banning of Utah-based users caused a 967% spike in VPN demand in the state, and last year the passing of adult-site age-restriction laws in Texas caused a 234.8% surge in demand there.

Source: VPN Demand Surge in Florida after Adult Sites Age Restriction Kicks In

Google brings back digital fingerprinting to track users for advertising

Google is tracking your online behavior in the name of advertising, reintroducing a data collection process that ingests all of your online signals (from IP address to complex browser information) and pinpoints unique users or devices, also known as “digital fingerprinting.”
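The core trick is easy to illustrate. A minimal Python sketch, with invented signal values: concatenate enough quasi-stable browser and device attributes and hash them, and you get an identifier that survives cookie deletion because nothing is ever stored on the device.

```python
import hashlib

def fingerprint(signals: dict) -> str:
    """Hash a canonical serialization of browser/device signals into an ID.
    Nothing is stored client-side, yet the same machine keeps producing it."""
    canonical = "|".join(f"{k}={signals[k]}" for k in sorted(signals))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device = {  # invented example signals
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "2560x1440",
    "timezone": "Europe/Amsterdam",
    "fonts": "Arial,DejaVu Sans,Noto",
    "ip_prefix": "198.51.100",
}
print(fingerprint(device))                              # stable across visits
print(fingerprint({**device, "screen": "1920x1080"}))   # one changed signal, new ID
```

Real fingerprinting systems use far more signals (canvas rendering, audio stack, installed plugins) and fuzzy matching to tolerate small changes, but the privacy problem is the same: there is no cookie to delete.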

The company’s updated platform program policies include relaxed restrictions on advertisers and personalized ad targeting across a range of devices, an outcome of a larger “advertising ecosystem shift” and the advancement of privacy-enhancing technologies (PETs) like on-device processing and trusted execution environments, in the words of the company.

In a departure from its longstanding pledge to user choice and privacy, Google argues these technologies offer enough protection for users while also creating “new ways for brands to manage and activate their data safely and securely.” The new feature will be available to advertisers beginning Feb. 16, 2025.

[…]

Contrary to other data collection tools like cookies, digital fingerprinting is difficult to spot, and thus hard for even privacy-conscious users to erase or block. On Dec. 19, the UK’s Information Commissioner’s Office (ICO) — a data protection and privacy regulator — labeled Google “irresponsible” for the policy change, saying the shift to fingerprinting is an unfair means of tracking users that reduces their choice and control over their personal information. The watchdog also warned that the move could encourage riskier advertiser behavior.

“Google itself has previously said that fingerprinting does not meet users’ expectations for privacy, as users cannot easily consent to it as they would cookies. This in turn means they cannot control how their information is collected. To quote Google’s own position on fingerprinting from 2019: ‘We think this subverts user choice and is wrong,'” wrote ICO executive director of regulatory risk Stephen Almond.

The ICO warned that it will intervene if Google cannot demonstrate existing legal requirements for such tech, including options to secure freely-given consent, ensure fair processing, and uphold the right to erasure: “Businesses should not consider fingerprinting a simple solution to the loss of third-party cookies and other cross-site tracking signals.”

Source: Google brings back digital fingerprinting to track users for advertising | Mashable

Google goes to court for collecting data on users who opted out… again…

A federal judge this week rejected Google’s motion to throw out a class-action lawsuit alleging that it invaded the privacy of users who opted out of functionality that records a user’s web and app activities. A jury trial is scheduled for August 2025 in US District Court in San Francisco.

The lawsuit concerns Google’s Web & App Activity (WAA) settings, with the lead plaintiff representing two subclasses of people with Android and non-Android phones who opted out of tracking. “The WAA button is a Google account setting that purports to give users privacy control of Google’s data logging of the user’s web app and activity, such as a user’s searches and activity from other Google services, information associated with the user’s activity, and information about the user’s location and device,” wrote US District Judge Richard Seeborg, the chief judge in the Northern District of California.

Google says that Web & App Activity “saves your activity on Google sites and apps, including associated info like location, to give you faster searches, better recommendations, and more personalized experiences in Maps, Search, and other Google services.” Google also has a supplemental Web App and Activity setting that the judge’s ruling refers to as “(s)WAA.”

“The (s)WAA button, which can only be switched on if WAA is also switched on, governs information regarding a user’s ‘[Google] Chrome history and activity from sites, apps, and devices that use Google services.’ Disabling WAA also disables the (s)WAA button,” Seeborg wrote.

Google sends data to developers

But data is still sent to third-party app developers through the Google Analytics for Firebase (GA4F), “a free analytical tool that takes user data from the Firebase kit and provides app developers with insight on app usage and user engagement,” the ruling said. GA4F “is integrated in 60 percent of the top apps” and “works by automatically sending to Google a user’s ad interactions and certain identifiers regardless of a user’s (s)WAA settings, and Google will, in turn, provide analysis of that data back to the app developer.”

Plaintiffs have brought claims of privacy invasion under California law. Plaintiffs “present evidence that their data has economic value,” and “a reasonable juror could find that Plaintiffs suffered damage or loss because Google profited from the misappropriation of their data,” Seeborg wrote.

[…]

In a proposed settlement of a different lawsuit, Google last year agreed to delete records reflecting users’ private browsing activities in Chrome’s Incognito mode.

[…]

Google contends that its system is harmless to users. “Google argues that its sole purpose for collecting (s)WAA-off data is to provide these analytic services to app developers. This data, per Google, consists only of non-personally identifiable information and is unrelated (or, at least, not directly related) to any profit-making objectives,” Seeborg wrote.

On the other side, plaintiffs say that Google’s tracking contradicts its “representations to users because it gathers exactly the data Google denies saving and collecting about (s)WAA-off users,” Seeborg wrote. “Moreover, Plaintiffs insist that Google’s practices allow it to personalize ads by linking user ad interactions to any later related behavior—information advertisers are likely to find valuable—leading to Google’s lucrative advertising enterprise built, in part, on (s)WAA-off data unlawfully retrieved.”

[…]

Google, as the judge writes, purports to treat user data as pseudonymous by creating a randomly generated identifier that “permits Google to recognize the particular device and its later ad-related behavior… Google insists that it has created technical barriers to ensure, for (s)WAA-off users, that pseudonymous data is delinked to a user’s identity by first performing a ‘consent check’ to determine a user’s (s)WAA settings.”

Whether this counts as personal information under the law is a question for a jury, the judge wrote. Seeborg pointed to California law that defines personal information to include data that “is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.” Given the legal definition, “a reasonable juror could view the (s)WAA-off data Google collected via GA4F, including a user’s unique device identifiers, as comprising a user’s personal information,” he wrote.

[…]

Source: Google loses in court, faces trial for collecting data on users who opted out – Ars Technica

Siri “unintentionally” recorded private convos on phone and watch, then sold them to advertisers; yes, those ads are very targeted. Apple agrees to pay $95M, laughs all the way to the bank

Apple has agreed to pay $95 million to settle a lawsuit alleging that its voice assistant Siri routinely recorded private conversations that were then shared with third parties and used for targeted ads.

In the proposed class-action settlement—which comes after five years of litigation—Apple admitted to no wrongdoing. Instead, the settlement refers to “unintentional” Siri activations that occurred after the “Hey, Siri” feature was introduced in 2014, where recordings were apparently prompted without users ever saying the trigger words, “Hey, Siri.”

Sometimes Siri would be inadvertently activated, a whistleblower told The Guardian, when an Apple Watch was raised and speech was detected. The only clue that users seemingly had of Siri’s alleged spying was eerily accurate targeted ads that appeared after they had just been talking about specific items like Air Jordans or brands like Olive Garden, Reuters noted (claims which remain disputed).

[…]

It’s currently unknown how many customers were affected, but if the settlement is approved, the tech giant has offered up to $20 per Siri-enabled device for any customers who made purchases between September 17, 2014, and December 31, 2024. That includes iPhones, iPads, Apple Watches, MacBooks, HomePods, iPod touches, and Apple TVs, the settlement agreement noted. Each customer can submit claims for up to five devices.

A hearing at which the settlement could be approved is currently scheduled for February 14. If the settlement is certified, Apple will send notices to all affected customers. Through the settlement, customers can not only get monetary relief but also ensure that their private phone calls are permanently deleted.

While the settlement appears to be a victory for Apple users after months of mediation, it potentially lets Apple off the hook pretty cheaply. If the court had certified the class action and Apple users had won, Apple could’ve been fined more than $1.5 billion under the Wiretap Act alone, court filings showed.

But lawyers representing Apple users decided to settle, partly because data privacy law is still a “developing area of law imposing inherent risks that a new decision could shift the legal landscape as to the certifiability of a class, liability, and damages,” the motion to approve the settlement agreement said. It was also possible that the class size could be significantly narrowed through ongoing litigation, if the court determined that Apple users had to prove their calls had been recorded through an incidental Siri activation—potentially reducing recoverable damages for everyone.

“The percentage of those who experienced an unintended Siri activation is not known,” the motion said. “Although it is difficult to estimate what a jury would award, and what claims or class(es) would proceed to trial, the Settlement reflects approximately 10–15 percent of Plaintiffs expected recoverable damages.”

Siri’s unintentional recordings were initially exposed by The Guardian in 2019, plaintiffs’ complaint said. That’s when a whistleblower alleged that “there have been countless instances of recordings featuring private discussions between doctors and patients, business deals, seemingly criminal dealings, sexual encounters and so on. These recordings are accompanied by user data showing location, contact details, and app data.”

[…]

Meanwhile, Google faces a similar lawsuit in the same district from plaintiffs represented by the same firms over its voice assistant, Reuters noted. A win in that suit could affect anyone who purchased “Google’s own smart home speakers, Google Home, Home Mini, and Home Max; smart displays, Google Nest Hub, and Nest Hub Max; and its Pixel smartphones” from approximately May 18, 2016 to today, a December court filing noted. That litigation likely won’t be settled until this fall.

Source: Siri “unintentionally” recorded private convos; Apple agrees to pay $95M – Ars Technica

Android will let you find unknown Bluetooth trackers instead of just warning you about them

The advent of Bluetooth trackers has made it a lot easier to find your bag or keys when they’re lost, but it has also put inconspicuous tracking tools in the hands of people who might misuse them. Apple and Google have both implemented tracker alerts to let you know if there’s an unknown Bluetooth tracker nearby, and now as part of a new update, Google is letting Android users actually locate those trackers, too.

The feature is one of two new tools Google is adding to Find My Device-compatible trackers. The first, “Temporarily Pause Location,” is what you’re supposed to enable when you first receive an unknown tracker notification; it blocks your phone from updating its location with trackers for 24 hours. The second, “Find Nearby,” helps you pinpoint where the tracker is if you can’t see it or easily hear it.

By clicking on an unknown tracker notification you’ll be able to see a map of where the tracker was last spotted moving with you. From there, you can play a sound to see if you can locate it (Google says the owner won’t be notified). If you can’t find it, Find Nearby will connect your phone to the tracker over Bluetooth and display a shape that fills in the closer you get to it.
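Google hasn’t published how Find Nearby computes that fill, but proximity features like this are commonly built on received signal strength (RSSI) and a log-distance path-loss model. A rough sketch with assumed calibration values (the -59 dBm reference and the 10 m scale are illustrative, not Google’s):

```python
def estimate_distance_m(rssi_dbm: float, tx_power_dbm: float = -59.0, n: float = 2.0) -> float:
    """Log-distance path-loss model; tx_power_dbm is the expected RSSI at 1 m."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * n))

def proximity_fill(rssi_dbm: float, max_m: float = 10.0) -> float:
    """Map a reading to a 0..1 'shape fill' fraction – fuller as you get closer."""
    d = min(estimate_distance_m(rssi_dbm), max_m)
    return round(1 - d / max_m, 2)

for rssi in (-50, -65, -80):          # strong, medium, weak signal
    print(rssi, "dBm ->", proximity_fill(rssi))
```

In practice RSSI is noisy, so real implementations smooth readings over time; AirTags’ Precision Finding sidesteps the problem with ultra-wideband ranging instead.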

[Image: The Find Nearby button and interface from Google’s Find My Device network. Credit: Google / Engadget]

The tool is identical to what Google offers for locating trackers and devices you actually own, but importantly, you don’t need to use Find My Device or have your own tracker to benefit. Like Google’s original notifications feature, any device running Android 6.0 and up can deal with unknown Bluetooth trackers safely.

Expanding Find Nearby seems like the final step Google needed to take to tamp down Bluetooth tracker misuse, something Apple already does with its Precision Finding tool for AirTags. The companies released a shared standard for spotting unknown Bluetooth trackers regardless of whether you use Android or iOS in May 2024, following the launch of Google’s Find My Device network in April. Both Google and Apple offered their own methods of dealing with unknown trackers before then to prevent trackers from being used for everything from robbery to stalking.

Source: Android will let you find unknown Bluetooth trackers instead of just warning you about them

Singapore to increase road capacity by GPS tracking all vehicles. Because location data is not sensitive and will never be hacked *cough*

Singapore’s Land Transport Authority (LTA) estimated last week that by tracking all vehicles with GPS it will be able to increase road capacity by 20,000 over the next few years.

The densely populated island state is moving from what it calls Electronic Road Pricing (ERP) 1.0 to ERP 2.0. The first version used gantries – or automatic tolls – to charge drivers a fee through an in-car device when they used specific roadways during certain hours.

ERP 2.0 sees the vehicle instead tracked through GPS, which can tell where a vehicle is at all operating times.

“ERP 2.0 will provide more comprehensive aggregated traffic information and will be able to operate without physical gantries. We will be able to introduce new ‘virtual gantries,’ which allow for more flexible and responsive congestion management,” explained the LTA.
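A “virtual gantry” reduces to a geometry check: charge the vehicle when the segment between two consecutive GPS fixes crosses a line drawn across the road. A minimal sketch with made-up coordinates:

```python
def _ccw(a, b, c):
    """Twice the signed area of triangle abc; the sign gives the turn direction."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    """True if segment p1-p2 strictly crosses segment q1-q2."""
    return (_ccw(p1, p2, q1) * _ccw(p1, p2, q2) < 0
            and _ccw(q1, q2, p1) * _ccw(q1, q2, p2) < 0)

def charge_events(track, gantry):
    """Count how often consecutive GPS fixes cross the virtual gantry line."""
    return sum(segments_cross(a, b, *gantry) for a, b in zip(track, track[1:]))

# Hypothetical gantry drawn across a road, and a vehicle's (lat, lon) fixes
gantry = ((1.3000, 103.8000), (1.3000, 103.8100))
track = [(1.2990, 103.8050), (1.3010, 103.8052), (1.3020, 103.8055)]
print(charge_events(track, gantry))  # 1 – the vehicle crossed once, charge once
```

The flip side of that flexibility is the privacy cost the headline alludes to: the same continuous GPS track that makes virtual gantries possible is a complete record of where every vehicle has been.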

But the island’s government doesn’t just control inflow into urban areas through toll-like charging – it also aggressively controls the total number of cars operating within its borders.

Singapore requires vehicle owners to bid for a set number of Certificates of Entitlement – costly operating permits valid for only ten years. The result is an added cost of around SG$100,000 ($75,500) every ten years, depending on that year’s COE price, on top of a car’s usual price. The high total price disincentivizes mass car ownership, which helps the government manage traffic and emissions.

[…]

Source: Singapore to increase road capacity by GPS tracking vehicles • The Register

Google changes Terms Of Service, now spies on your AI prompts

The new terms come in on November 15th.

4.3 Generative AI Safety and Abuse. Google uses automated safety tools to detect abuse of Generative AI Services. Notwithstanding the “Handling of Prompts and Generated Output” section in the Service Specific Terms, if these tools detect potential abuse or violations of Google’s AUP or Prohibited Use Policy, Google may log Customer prompts solely for the purpose of reviewing and determining whether a violation has occurred. See the Abuse Monitoring documentation page for more information about how logging prompts impacts Customer’s use of the Services.

Source: Google Cloud Platform Terms Of Service

If You Ever Rented From Redbox, Your Private Info Is Up for Grabs

If you’ve ever opted to rent a movie through a Redbox kiosk, your private info is out there waiting for any tinkerer to get their hands on it. One programmer who reverse-engineered a kiosk’s hard drive proved the Redbox machines can cough up transaction histories featuring customers’ names, emails, and rentals going back nearly a decade. It may even have part of your credit card number stored on-device.

[…]

A California-based programmer named Foone Turing managed to grab an unencrypted file from the internal hard drive that showed the emails, home addresses, and rental history of either a fraction or the whole of those who previously used the kiosk.

[…]

Turing told Lowpass that the Redbox stored some financial information on those drives, including the first six and last four digits of each credit card used and “some lower-level transaction details.” The devices did apparently connect to a secure payment system through Redbox’s servers, but the systems stored financial information on a log in a different folder than the rental records. She told us that it’s likely the system only stored the last month of transaction logs.
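Keeping only the first six (issuer BIN) and last four digits is standard card-number truncation – it is the maximum PCI DSS permits to be displayed – though pairing even that with names, emails, and rental history on an unencrypted drive is another matter. The masking itself is trivial (sample number below is a made-up test PAN):

```python
def truncate_pan(pan: str) -> str:
    """Keep only the first six (issuer BIN) and last four digits of a card number."""
    digits = "".join(ch for ch in pan if ch.isdigit())
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

print(truncate_pan("4111 1111 1111 1234"))  # 411111******1234
```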

[…]

Source: If You Ever Rented From Redbox, Your Private Info Is Up for Grabs

Which is a great illustration of why there need to be regulations covering what happens to personal data when a company is sold or goes bust.

Face matching now available on GSA’s login.gov, though it still fails at least 10% of the time

The US government’s General Services Administration’s (GSA) facial matching login service is now generally available to the public and other federal agencies, despite its own recent report admitting the tech is far from perfect.

The GSA announced general availability of remote identity verification (RiDV) technology through login.gov, and the service’s availability to other federal government agencies yesterday. According to the agency, the technology behind the offering is “a new independently certified” solution that complies with the National Institute of Standards and Technology’s (NIST) 800-63 identity assurance level 2 (IAL2) standard.

IAL2 identity verification involves verifying a person’s identity, remotely or in person, using biometric data along with some physical element, such as an ID photograph or access to a cellphone number.

“This new IAL2-compliant offering adds proven one-to-one facial matching technology that allows Login.gov to confirm that a live selfie taken by a user matches the photo on a photo ID, such as a driver’s license, provided by the user,” the GSA said.

The Administration noted that the system doesn’t use “one-to-many” face matching technology to compare users to others in its database, and doesn’t use the images for any purpose other than verifying a user’s identity.
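The distinction matters: one-to-one verification compares a live image against a single claimed identity, typically by thresholding a similarity score between two face embeddings, while one-to-many search ranks the image against an entire gallery. A hedged sketch of the one-to-one case, using made-up toy embeddings (real systems derive high-dimensional embeddings from a neural network, and this is not GSA’s actual pipeline):

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify(selfie_embedding, id_photo_embedding, threshold=0.8):
    """One-to-one match: accept only if the two embeddings are close
    enough. The threshold trades false rejections against false accepts."""
    return cosine_similarity(selfie_embedding, id_photo_embedding) >= threshold

# Toy 3-dimensional embeddings (real ones have hundreds of dimensions):
print(verify([0.9, 0.1, 0.4], [0.85, 0.15, 0.38]))  # True  (same person)
print(verify([0.9, 0.1, 0.4], [0.1, 0.9, 0.2]))     # False (different person)
```

The threshold choice is exactly where the failure rates discussed below come from: set it too strict and legitimate users are rejected; set it too loose and impostors get through.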

[…]

In a report issued by the GSA’s Office of the Inspector General in early 2023, the Administration was called out for claiming it had implemented IAL2-level identity verification as early as 2018 while never actually meeting the standard’s requirements.

“GSA knowingly billed customer agencies over $10 million for services, including alleged IAL2 services that did not meet IAL2 standards,” the report claimed.

[…]

Fast forward to October of last year, and the GSA said it was embracing facial recognition tech on login.gov with plans to test it this year – a process it began in April.  Since then, however, the GSA has published pre-press findings of a study it conducted of five RiDV technologies, finding that they’re still largely unreliable.

The study anonymized the results of the five products, making it unclear which were included in the final pool or how any particular one performed. Generally, however, the report found that the best-performing product still failed 10 percent of the time, and the worst had a false negative rate of 50 percent, meaning its ability to properly match a selfie to a government ID was no better than chance.
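To put those numbers in context: a false negative rate is the fraction of genuine users the system wrongly rejects, and at 50 percent a one-to-one matcher rejects legitimate users as often as a coin flip would. A small illustrative calculation with hypothetical counts (the study did not publish raw figures):

```python
def false_negative_rate(genuine_attempts: int, wrongly_rejected: int) -> float:
    """Fraction of legitimate selfie-to-ID matches the system rejects."""
    return wrongly_rejected / genuine_attempts

# Hypothetical counts matching the study's reported extremes:
best = false_negative_rate(1000, 100)   # best product: 10% of real users rejected
worst = false_negative_rate(1000, 500)  # worst product: 50%, coin-flip odds
print(best, worst)  # 0.1 0.5
```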

Higher rejection rates for people with darker skin tones were also noted in one product, while another was more accurate for people of AAPI descent, but less accurate for everyone else – hardly the equitability the GSA said it wanted in an RiDV product last year.

[…]

It’s unclear what solution has been deployed for use on login.gov. The only firm we can confirm has been involved through the process is LexisNexis, which previously acknowledged to The Register that it has worked with the GSA on login.gov for some time.

That said, LexisNexis’ CEO for government risk solutions told us recently that he’s not convinced the GSA’s focus on adopting IAL2 RiDV solutions at the expense of other biometric verification methods is the best approach.

“Any time you rely on a single tool, especially in the modern era of generative AI and deep fakes … you are going to have this problem,” Haywood “Woody” Talcove told us during a phone interview last month. “I don’t think NIST has gone far enough with this workflow.”

Talcove told us that facial recognition is “pretty easy to game,” and said he wants a multi-layered approach – one that it looks like GSA has declined to pursue given how quickly it’s rolling out a solution.

“What this study shows is that there’s a level of risk being injected into government agencies completely relying on one tool,” Talcove said. “We’ve gotta go further.”

Along with asking the GSA for more details about its chosen RiDV solution, we also asked for some data about its performance. We didn’t get an answer to that question, either.

Source: Face matching now available on GSA’s login.gov • The Register