Russia to enforce location tracking app on all foreigners in Moscow

The Russian government has introduced a new law that makes installing a tracking app mandatory for all foreign nationals in the Moscow region.

The new proposal was announced by the chairman of the State Duma, Vyacheslav Volodin, who presented it as a measure to tackle migrant crimes.

“The adopted mechanism will allow, using modern technologies, to strengthen control in the field of migration and will also contribute to reducing the number of violations and crimes in this area,” stated Volodin.

Using a mobile application that all foreigners will have to install on their smartphones, the Russian state will receive the following information:

  • Residence location
  • Fingerprint
  • Face photograph
  • Real-time geo-location monitoring

“If migrants change their actual place of residence, they will be required to inform the Ministry of Internal Affairs (MVD) within three working days,” the high-ranking politician explained.

The measures will not apply to diplomats of foreign countries or citizens of Belarus.

Foreigners who attempt to evade their obligations under the new law will be added to a registry of monitored individuals and deported from Russia.

Reactions gathered by Russian internet freedom observatory Roskomsvoboda reflect skepticism and concern about the proposal.

Lawyer Anna Minushkina noted that the proposal violates Articles 23 and 24 of the Russian Constitution, guaranteeing the right to privacy.

President of the Uzbek Community in Moscow, Viktor Teplyankov, characterized the initiative as “ill-conceived and difficult to implement,” expressing doubts about its feasibility.

Finally, PSP Foundation’s Andrey Yakimov warned that such aggressive measures are bound to deter potential labor migrants, creating a different problem in the country.

The proposal hasn’t reached its final form yet; specifics, such as what happens if a device is lost or stolen, and similar technical or practical obstacles, are to be addressed in upcoming meetings between the Ministry and regional authorities.

The mass-surveillance experiment will run until September 2029, and if deemed successful, the mechanism will extend to cover more parts of the country.

Source: Russia to enforce location tracking app on all foreigners in Moscow

Google found not compliant with GDPR when registering new accounts – shares data with more than 70 services without users’ knowledge

According to a ruling by the Berlin Regional Court, Google must disclose to its users which of its more than 70 services process their data when they register for an account. The civil chamber thus upheld a lawsuit filed by the German Association of Consumer Organizations (vzbv). The consumer advocates had complained that neither the “express personalization” nor the alternative “manual personalization” complied with the legal requirements of the European General Data Protection Regulation (GDPR).

The ruling against Google Ireland Ltd. was handed down on March 25, 2025, but was only published on Friday (case number 15 O 472/22). The decision is not yet legally binding because the internet company has appealed the ruling. Google stated that it disagrees with the Regional Court’s decision.

What does Google process data for?

The consumer advocates argued that consumers must know what Google processes their data for when registering. Users must be able to freely decide how their data is processed. The judges at the Berlin Regional Court confirmed this legal opinion. The ruling states: “In this case, transparency is lacking simply because the defendant does not provide information about the individual Google services, Google apps, Google websites, or Google partners for which the data is to be used.” For this reason, the scope of consent is completely unknown to the user.

Google: Account creation has changed

Google stated that the ruling concerned an old account creation process that had since been changed. “What hasn’t changed is our commitment to enabling our users to use Google on their terms, with clear choices and control options based on extensive research, testing, and guidelines from European data protection authorities,” it stated. In the proceedings, Google argued that listing all services would result in excessively long text and harm transparency. This argument was rejected by the court. In the court’s view, information about the scope of consent is among the minimum details required by law. The regional court was particularly concerned that with “Express Personalization,” users only had the option of consenting to all data usage or canceling the process. A differentiated refusal was not possible. Even with “Manual Personalization,” consumers could not refuse the use of the German location.

Source: Landgericht Berlin: Google-Accounterstellung verletzte DSGVO | heise online

Google will pay Texas $1.4B to settle claims the company collected users’ data without permission

[…] “For years, Google secretly tracked people’s movements, private searches, and even their voiceprints and facial geometry through their products and services. I fought back and won.”

The agreement settles several claims Texas made against the search giant in 2022 related to geolocation, incognito searches and biometric data. The state argued Google was “unlawfully tracking and collecting users’ private data.”

Paxton claimed, for example, that Google collected millions of biometric identifiers, including voiceprints and records of face geometry, through such products and services as Google Photos and Google Assistant.

Google spokesperson José Castañeda said the agreement settles an array of “old claims,” some of which relate to product policies the company has already changed.

[…]

Texas previously reached two other key settlements with Google within the last two years, including one in December 2023 in which the company agreed to pay $700 million and make several other concessions to settle allegations that it had been stifling competition against its Android app store.

Meta has also agreed to a $1.4 billion settlement with Texas in a privacy lawsuit over allegations that the tech giant used users’ biometric data without their permission.

Source: Google will pay Texas $1.4B to settle claims the company collected users’ data without permission | AP News

US senator introduces bill calling for location-tracking on AI chips to limit China access

A U.S. senator introduced a bill on Friday that would direct the Commerce Department to require location verification mechanisms for export-controlled AI chips, in an effort to curb China’s access to advanced semiconductor technology.
Called the “Chip Security Act,” the bill calls for AI chips under export regulations, and products containing those chips, to be fitted with location-tracking systems to help detect diversion, smuggling or other unauthorized use of the product.
“With these enhanced security measures, we can continue to expand access to U.S. technology without compromising our national security,” Republican Senator Tom Cotton of Arkansas said.
The bill also calls for companies exporting the AI chips to report to the Bureau of Industry and Security if their products have been diverted away from their intended location or subject to tampering attempts.
The move comes days after U.S. President Donald Trump said he would rescind and modify a Biden-era rule that curbed the export of sophisticated AI chips with the goal of protecting U.S. leadership in AI and blocking China’s access.
U.S. Representative Bill Foster, a Democrat from Illinois, also plans to introduce a bill on similar lines in the coming weeks, Reuters reported on Monday.
Restricting China’s access to AI technology that could enhance its military capabilities has been a key focus for U.S. lawmakers, amid reports of widespread smuggling of Nvidia’s chips. […]

Source: US senator introduces bill calling for location-tracking on AI chips to limit China access | Reuters

Of course, it also adds another layer of US government spying on you if you want to buy a graphics card. I’m not sure how anyone being able to track all your PCs does not compromise national security.

Neurotech companies are selling your brain data, senators warn

Three Democratic senators are sounding the alarm over brain-computer interface (BCI) technologies’ ability to collect — and potentially sell — our neural data. In a letter to the Federal Trade Commission (FTC), Sens. Chuck Schumer (D-NY), Maria Cantwell (D-WA), and Ed Markey (D-MA) called for an investigation into neurotechnology companies’ handling of user data, and for tighter regulations on their data-sharing policies.

“Unlike other personal data, neural data — captured directly from the human brain — can reveal mental health conditions, emotional states, and cognitive patterns, even when anonymized,” the letter reads. “This information is not only deeply personal; it is also strategically sensitive.”

While the concept of neural technologies may conjure up images of brain implants like Elon Musk’s Neuralink, there are far less invasive — and less regulated — neurotech products on the market, including headsets that help people meditate, purportedly trigger lucid dreaming, and promise to help users with online dating by helping them swipe through apps “based on your instinctive reaction.” These consumer products gobble up insights about users’ neurological data — and since they aren’t categorized as medical devices, the companies behind them aren’t barred from sharing that data with third parties.

“Neural data is the most private, personal, and powerful information we have—and no company should be allowed to harvest it without transparency, ironclad consent, and strict guardrails. Yet companies are collecting it with vague policies and zero transparency,” Schumer told The Verge via email.

The letter cites a 2024 report by the Neurorights Foundation, which found that most neurotech companies not only have few safeguards on user data but also have the ability to share sensitive information with third parties. The report looked at the data policies of 30 consumer-facing BCI companies and found that all but one “appear to have access to” users’ neural data, “and provide no meaningful limitations to this access.” The Neurorights Foundation only surveyed companies whose products are available to consumers without the help of a medical professional; implants like those made by Neuralink weren’t among them.

The companies surveyed by the Neurorights Foundation make it difficult for users to opt out of having their neurological data shared with third parties. Just over half the companies mentioned in the report explicitly let consumers revoke consent for data processing, and only 14 of the 30 give users the ability to delete their data. In some instances, user rights aren’t universal — for example, some companies only let users in the European Union delete their data but don’t grant the same rights to users elsewhere in the world.

To safeguard against potential abuses, the senators are calling on the FTC to:

  • investigate whether neurotech companies are engaging in unfair or deceptive practices that violate the FTC Act
  • compel companies to report on data handling, commercial practices, and third-party access
  • clarify how existing privacy standards apply to neural data
  • enforce the Children’s Online Privacy Protection Act as it relates to BCIs
  • begin a rulemaking process to establish safeguards for neural data, and set limits on secondary uses like AI training and behavioral profiling
  • and ensure that both invasive and noninvasive neurotechnologies are subject to baseline disclosure and transparency standards, even when the data is anonymized

Though the senators’ letter calls out Neuralink by name, Musk’s brain implant tech is already subject to more regulations than other BCI technologies. Since Neuralink’s brain implant is considered a “medical” technology, it’s required to comply with the Health Insurance Portability and Accountability Act (HIPAA), which safeguards people’s medical data.

Stephen Damianos, the executive director of the Neurorights Foundation, said that HIPAA may not have entirely caught up to existing neurotechnologies, especially with regards to “informed consent” requirements.

“There are long-established and validated models for consent from the medical world, but I think there’s work to be done around understanding the extent to which informed consent is sufficient when it comes to neurotechnology,” Damianos told The Verge. “The analogy I like to give is, if you were going through my apartment, I would know what you would and wouldn’t find in my apartment, because I have a sense of what exactly is in there. But brain scans are overbroad, meaning they collect more data than what is required for the purpose of operating a device. It’s extremely hard — if not impossible — to communicate to a consumer or a patient exactly what can today and in the future be decoded from their neural data.”

Data collection becomes even trickier for “wellness” neurotechnology products, which don’t have to comply with HIPAA, even when they advertise themselves as helping with mental health conditions like depression and anxiety.

Damianos said there’s a “very hazy gray area” between medical devices and wellness devices.

“There’s this increasingly growing class of devices that are marketed for health and wellness as distinct from medical applications, but there can be a lot of overlap between those applications,” Damianos said. The dividing line is often whether a medical intermediary is required to help someone obtain a product, or whether they can “just go online, put in your credit card, and have it show up in a box a few days later.”

There are very few regulations on neurotechnologies advertised as being for “wellness.” In April 2024, Colorado passed the first-ever legislation protecting consumers’ neural data. The state updated its existing Consumer Protection Act, which protects users’ “sensitive data.” Under the updated legislation, “sensitive data” now includes “biological data” like biological, genetic, biochemical, physiological, and neural information. And in September, California amended its Consumer Privacy Act to protect neural data.

“We believe in the transformative potential of these technologies, and I think sometimes there’s a lot of doom and gloom about them,” Damianos told The Verge. “We want to get this moment right. We think it’s a really profound moment that has the potential to reshape what it means to be human. Enormous risks come from that, but we also believe in leveraging the potential to improve people’s lives.”

Source: Neurotech companies are selling your brain data, senators warn | The Verge

Employee monitoring app exposes 21M work screens to internet

A surveillance tool meant to keep tabs on employees is leaking millions of real-time screenshots onto the open web.

Your boss watching your screen isn’t the end of the story. Everyone else might be watching, too. Researchers at Cybernews have uncovered a major privacy breach involving WorkComposer, a workplace surveillance app used by over 200,000 people across countless companies.

The app, designed to track productivity by logging activity and snapping regular screenshots of employees’ screens, left over 21 million images exposed in an unsecured Amazon S3 bucket, broadcasting how workers go about their day frame by frame.
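
The root cause here is a publicly readable S3 bucket. Nothing is known publicly about WorkComposer’s actual AWS setup, but for anyone running a similar pipeline the standard fix is to block public access at the bucket level so neither ACLs nor bucket policies can expose objects. A minimal sketch with the AWS SDK for JavaScript v3 follows; the bucket name is a made-up placeholder.

```ts
import { S3Client, PutPublicAccessBlockCommand } from "@aws-sdk/client-s3";

// Hypothetical bucket name, used purely for illustration.
const BUCKET = "example-screenshot-archive";

async function lockDownBucket(): Promise<void> {
  const s3 = new S3Client({});
  // Turn on all four public-access blocks so neither object ACLs nor a
  // bucket policy can make the screenshots readable by anonymous users.
  await s3.send(
    new PutPublicAccessBlockCommand({
      Bucket: BUCKET,
      PublicAccessBlockConfiguration: {
        BlockPublicAcls: true,
        IgnorePublicAcls: true,
        BlockPublicPolicy: true,
        RestrictPublicBuckets: true,
      },
    })
  );
}

lockDownBucket().catch(console.error);
```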

[…]

WorkComposer is one of many time-tracking tools that have crept into modern work culture. Marketed as a way to keep teams “accountable,” the software logs keystrokes, tracks how long you spend on each app, and snaps desktop screenshots every few minutes.

The leak shows just how dangerous this setup becomes when basic security hygiene is ignored. A leak of this magnitude turns everyday work activity into a goldmine for cybercriminals.

[…]

The leak’s real-time nature only amplifies the danger, as threat actors could monitor unfolding business operations as they happen, giving them access to otherwise locked-down environments.

[…]

Source: Employee monitoring app exposes 21M work screens | Cybernews

Can you imagine the time lost by employers going over all this data?

Microsoft Windows’ new built-in spying tool, Microsoft Recall, is a really great idea too. Not.

A Data Breach at Yale New Haven Health Compromised 5.5 Million Patients’ Information

[…] Yale New Haven Health (YNHHS) is a massive nonprofit healthcare network in Connecticut. Hackers stole the data of more than 5.5 million individuals during an attack in March 2025.

[…]

According to a public notice on the YNHHS website, the organization discovered “unusual activity” on its system on March 8, 2025, which was later identified as unauthorized third-party access that allowed bad actors to copy certain patient data. While the information stolen varies by individual, it may include the following:

  • Name
  • Date of birth
  • Address
  • Phone number
  • Email address
  • Race
  • Ethnicity
  • Social Security number
  • Patient type
  • Medical record number

YNHHS says the breach did not include access to medical records, treatment information, or financial data (such as account and payment information).

[…]

Source: A Data Breach at Yale New Haven Health Compromised 5.5 Million Patients’ Information | Lifehacker

Wait – race and ethnicity?!

Perplexity CEO says its browser will track everything users do online to sell ‘hyper personalized’ ads

CEO Aravind Srinivas said this week on the TBPN podcast that one reason Perplexity is building its own browser is to collect data on everything users do outside of its own app, so it can sell premium ads.

“That’s kind of one of the other reasons we wanted to build a browser, is we want to get data even outside the app to better understand you,” Srinivas said. “Because some of the prompts that people do in these AIs is purely work-related. It’s not like that’s personal.”

And work-related queries won’t help the AI company build an accurate-enough dossier.

“On the other hand, what are the things you’re buying; which hotels are you going [to]; which restaurants are you going to; what are you spending time browsing, tells us so much more about you,” he explained.

Srinivas believes that Perplexity’s browser users will be fine with such tracking because the ads should be more relevant to them.

“We plan to use all the context to build a better user profile and, maybe you know, through our discover feed we could show some ads there,” he said.

The browser, named Comet, suffered setbacks but is on track to be launched in May, Srinivas said.

He’s not wrong, of course. Quietly following users around the internet helped Google become the roughly $2 trillion market cap company it is today.

[…]

Meta’s ad tracking technology, Pixels, which is embedded on websites across the internet, is how Meta gathers data, even on people who don’t have Facebook or Instagram accounts. Even Apple, which has marketed itself as a privacy protector, can’t resist tracking users’ locations to sell advertising in some of its apps by default.

On the other hand, this kind of thing has led people across the political spectrum in the U.S. and in Europe to distrust big tech.

The irony of Srinivas openly explaining his browser-tracking ad-selling ambitions this week also can’t be overstated.

Google is currently in court fighting the U.S. Department of Justice, which has alleged Google behaved in monopolistic ways to dominate search and online advertising. The DOJ wants the judge to order Google to divest Chrome.

Both OpenAI and Perplexity — not surprisingly, given Srinivas’ reasons — said they would buy the Chrome browser business if Google was forced to sell.

Source: Perplexity CEO says its browser will track everything users do online to sell ‘hyper personalized’ ads | TechCrunch

Yup, so even if Mozilla is making Firefox more invasive, it’s still better than these guys.

Blue Shield of California Exposed the Data of 4.7 Million People to Google for targeted advertising

Blue Shield of California shared the protected health information of 4.7 million individuals with Google over a nearly three-year period, a data breach that impacts the majority of its nearly 6 million members, according to reporting from Bleeping Computer.

This isn’t the only large data breach to affect a healthcare organization in the last year alone. Community Health Center records were hacked in October 2024, compromising more than a million individuals’ data, along with an attack on lab testing company Lab Services Cooperative, which affected records of 1.6 million Planned Parenthood patients. UnitedHealth Group suffered a breach in February 2024, resulting in the leak of more than 100 million people’s data.

What happened with Blue Shield of California?

According to an April 9 notice posted on Blue Shield of California’s website, the company allowed certain data, including protected health information, to be shared with Google Ads through Google Analytics, which may have allowed Google to serve targeted ads back to members. While not discovered until Feb. 11, 2025, the leak occurred for several years, from April 2021 to January 2024, when the connection between Google Analytics and Google Ads was severed on Blue Shield websites.
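
For context on how this kind of leak happens: the standard Google tag (gtag.js) can feed Analytics data into Google’s advertising features unless the site operator turns those signals off. The sketch below shows the relevant configuration surface; it is an illustration of generic gtag settings, not a description of Blue Shield’s actual setup, and the measurement ID is a placeholder.

```ts
// Illustrative gtag.js settings; "G-XXXXXXX" is a placeholder measurement ID.
declare function gtag(...args: unknown[]): void;

// Disable "Google signals" (the Analytics <-> Ads data sharing) and
// ad-personalization signals before any page_view event is sent.
gtag("set", "allow_google_signals", false);
gtag("set", "allow_ad_personalization_signals", false);

// Report the page without its query string, so search criteria such as
// "Find a Doctor" parameters are not forwarded as part of the page path.
gtag("config", "G-XXXXXXX", {
  page_location: location.origin + location.pathname,
});
```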

The following Blue Shield member information may have been compromised:

  • Insurance plan name, type, and group number
  • City and zip code
  • Gender
  • Family size
  • Blue Shield assigned identifiers for online accounts
  • Medical claim service date and provider
  • Patient name
  • Patient financial responsibility
  • “Find a Doctor” search criteria and results

According to the notice, no additional personal data—Social Security numbers, driver’s license numbers, and banking and credit card information—were disclosed. Blue Shield also states that no bad actor was involved, nor have they confirmed that the information has been used maliciously.

[…]

Source: Blue Shield of California Exposed the Data of 4.7 Million People to Google | Lifehacker

Discord Wants Your Face: Begins Testing Facial Scans for Age Verification

Discord has begun requiring some users in the United Kingdom and Australia to verify their age through a facial scan before being permitted to access sensitive content. The chat app’s new process has been described as an “experiment,” and comes in response to laws passed in those countries that place guardrails on youth access to online platforms. Discord has also been the target of concerns that it does not sufficiently protect minors from sexual content.

Users may be asked to verify their age when encountering content that has been flagged by Discord’s systems as being sensitive in nature, or when they change their settings to enable access to sensitive content. The app will ask users to scan their face through a computer or smartphone webcam; alternatively, they can scan a driver’s license or other form of ID.

[…]

Source: Discord Begins Testing Facial Scans for Age Verification

Age verification is impossible to do correctly, incredibly privacy-invasive, and a really tempting target for hackers. The UK and Australia and every other country considering age verification are seriously endangering their citizens.

Fortunately you can always hold up a picture from a magazine in front of the webcam.

Your TV is watching you better: LG TVs’ integrated ads get more personal with tech that analyzes viewer emotions

LG TVs will soon leverage an AI model built for showing advertisements that more closely align with viewers’ personal beliefs and emotions. The company plans to incorporate a partner company’s AI tech into its TV software in order to interpret psychological factors impacting a viewer, such as personal interests, personality traits, and lifestyle choices. The aim is to show LG webOS users ads that will emotionally impact them.

The upcoming advertising approach comes via a multi-year licensing deal with Zenapse, a company describing itself as a software-as-a-service marketing platform that can drive advertiser sales “with AI-powered emotional intelligence.” LG will use Zenapse’s technology to divide webOS users into hyper-specific market segments that are supposed to be more informative to advertisers. LG Ad Solutions, LG’s advertising business, announced the partnership on Tuesday.

The technology will be used to inform ads shown on LG smart TVs’ homescreens, free ad-supported TV (FAST) channels, and elsewhere throughout webOS, per StreamTV Insider. LG will also use Zenapse’s tech to “expand new software development and go-to-market products,” it said. LG didn’t specify the duration of its licensing deal with Zenapse.

[…]

With all this information, ZenVision will group LG TV viewers into highly specified market segments, such as “goal-driven achievers,” “social connectors,” or “emotionally engaged planners,” an LG spokesperson told StreamTV Insider. Zenapse’s website for ZenVision points to other potential market segments, including “digital adopters,” “wellness seekers,” “positive impact & environment,” and “money matters.”

Companies paying to advertise on LG TVs can then target viewers based on the ZenVision-specified market segments and deliver an “emotionally intelligent ad,” as Zenapse’s website puts it.

This type of targeted advertising aims to bring advertisers more in-depth information about TV viewers than demographic data or even contextual advertising (which shows ads based on what the viewer is watching) via psychographic data. Demographic data gives advertisers viewer information, like location, age, gender, ethnicity, marital status, and income. Psychographic data is supposed to go deeper and allow advertisers to target people based on so-called psychological factors, like personal beliefs, values, and attitudes. As Salesforce explains, “psychographic segmentation delves deeper into their psyche” than relying on demographic data.

[…]

With their ability to track TV viewers’ behavior, including what they watch and search for on their TVs, smart TVs are a growing obsession for advertisers. As LG’s announcement pointed out, CTVs represent “one of the fastest-growing ad segments in the US, expected to reach over $40 billion by 2027, up from $24.6 billion in 2023.”

However, as advertisers’ interest in appealing to streamers grows, so do their efforts to track and understand viewers for more targeted advertising. Both efforts could end up pushing the limits of user comfort and privacy.

[…]

Source: LG TVs’ integrated ads get more personal with tech that analyzes viewer emotions – Ars Technica

An LG TV is not exactly a cheap thing. I am paying for the whole product, not for a service. I bought a TV, not a marketing department.

Apple to Spy on User Emails and other Data on Devices to Bolster AI Technology

Apple Inc. will begin analyzing data on customers’ devices in a bid to improve its artificial intelligence platform, a move designed to safeguard user information while still helping it catch up with AI rivals.

Today, Apple typically trains AI models using synthetic data — information that’s meant to mimic real-world inputs without any personal details. But that synthetic information isn’t always representative of actual customer data, making it harder for its AI systems to work properly.

The new approach will address that problem while ensuring that user data remains on customers’ devices and isn’t directly used to train AI models. The idea is to help Apple catch up with competitors such as OpenAI and Alphabet Inc., which have fewer privacy restrictions.

The technology works like this: It takes the synthetic data that Apple has created and compares it to a recent sample of user emails within the iPhone, iPad and Mac email app. By using actual emails to check the fake inputs, Apple can then determine which items within its synthetic dataset are most in line with real-world messages.
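
A rough sketch of what that on-device comparison could look like is below. Apple has not published the exact mechanism; the embedding representation, the similarity function, and the idea of reporting only the index of the winning synthetic sample are assumptions for illustration. The key property from the article is preserved: real emails never leave the device.

```ts
// Hypothetical on-device comparison: real emails stay local; only the index
// of the best-matching synthetic sample would be reported back.

type Embedding = number[];

function cosine(a: Embedding, b: Embedding): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Decide which server-supplied synthetic message most resembles the user's
// recent mail, computed entirely on-device.
function bestSyntheticIndex(
  syntheticEmbeddings: Embedding[],  // shipped down by the server
  localEmailEmbeddings: Embedding[]  // computed on-device, never uploaded
): number {
  const scores = syntheticEmbeddings.map(s =>
    Math.max(...localEmailEmbeddings.map(e => cosine(s, e)))
  );
  return scores.indexOf(Math.max(...scores)); // only this index leaves the device
}
```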

These insights will help the company improve text-related features in its Apple Intelligence platform, such as summaries in notifications, the ability to synthesize thoughts in its Writing Tools, and recaps of user messages.

[…]

The company will roll out the new system in an upcoming beta version of iOS and iPadOS 18.5 and macOS 15.5. A second beta test of those upcoming releases was provided to developers earlier on Monday.

[…]

Already, the company has relied on a technology called differential privacy to help improve its Genmoji feature, which lets users create a custom emoji. It uses that system to “identify popular prompts and prompt patterns, while providing a mathematical guarantee that unique or rare prompts aren’t discovered,” the company said in the blog post.

The idea is to track how the model responds in situations where multiple users have made the same request — say, asking for a dinosaur carrying a briefcase — and improving the results in those cases.
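
Differential privacy here means each device adds noise before reporting, so Apple can estimate how common a prompt like the dinosaur-with-a-briefcase request is without learning whether any particular user made it. A classic way to achieve this is randomized response; the sketch below uses a made-up flip probability and is only meant to show the shape of the technique, not Apple’s actual parameters.

```ts
// Local differential privacy via randomized response (illustrative only).
// Each device reports whether it saw a given prompt, but flips the answer
// with probability (1 - P_TRUTH), so no single report reveals the truth.

const P_TRUTH = 0.75; // assumed value; Apple's real noise parameters differ

function noisyReport(sawPrompt: boolean): boolean {
  return Math.random() < P_TRUTH ? sawPrompt : !sawPrompt;
}

// Aggregator side: correct for the known noise to estimate the true rate.
function estimateTrueRate(reports: boolean[]): number {
  const observed = reports.filter(Boolean).length / reports.length;
  return (observed - (1 - P_TRUTH)) / (2 * P_TRUTH - 1);
}
```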

The features are only for users who have opted in to device analytics and product improvement capabilities. Those options are managed in the Privacy and Security tab within the Settings app on the company’s devices.

[…]

Source: Apple to Analyze User Data on Devices to Bolster AI Technology

UK Effort to Keep Apple Encryption Fight Secret Is Blocked

A court has blocked a British government attempt to keep secret a legal case over its demand to access Apple Inc. user data in a victory for privacy advocates.

The UK Investigatory Powers Tribunal, a special court that handles cases related to government surveillance, said the authorities’ efforts were a “fundamental interference with the principle of open justice” in a ruling issued on Monday.

The development comes after it emerged in January that the British government had served Apple with a demand to circumvent encryption that the company uses to secure user data stored in its cloud services.

Apple challenged the request, while taking the unprecedented step of removing its advanced data protection feature for its British users. The government had sought to keep details about the demand — and Apple’s challenge of it — from being publicly disclosed.

[…]

Source: UK Effort to Keep Apple Encryption Fight Secret Is Blocked

EU: These are scary times – let’s backdoor encryption and make everyone unsafe!

The EU has shared its plans to ostensibly keep the continent’s denizens secure – and among the pages of bureaucratese are a few worrying sections that indicate the political union wants to backdoor encryption by 2026, or even sooner.

While the superstate has made noises about backdooring encryption before, the ProtectEU plan [PDF], launched on Monday, says the European Commission wants to develop a roadmap to allow “lawful and effective access to data for law enforcement in 2025” and a technology roadmap to do so by the following year.

“We are working on a roadmap now, and we will look at what is technically also possible,” said Henna Virkkunen, executive vice-president of the EC for tech sovereignty, security and democracy. “The problem is now that our law enforcement, they have been losing ground on criminals because our police investigators, they don’t have access to data,” she added.

“Of course, we want to protect the privacy and cyber security at the same time; and that’s why we have said here that now we have to prepare a technical roadmap to watch for that, but it’s something that we can’t tolerate, that we can’t take care of the security because we don’t have tools to work in this digital world.”

She claimed that in “85 percent” of police cases, law enforcement couldn’t access the data it needed. The proposal is to amend the existing Cybersecurity Act to allow these changes.

According to the document, the EC will set up a Security Research & Innovation Campus at its Joint Research Centre in 2026 to, somehow, work out the technical details. Since it’s impossible to backdoor encryption in a way that can’t be potentially exploited by others, it seems a very odd move to make if security’s your goal.

China, Russia, and the US certainly would spend a huge amount of time and money to find the backdoor. Even American law enforcement has given up on the cause of backdooring, although the UK still seems to be wedded to the idea.

In the meantime, for critical infrastructure (and presumably government communications), the EC wants to deploy quantum cryptography across the state. They want to get this in place by 2030 at the latest.

[…]

Source: EU: These are scary times – let’s backdoor encryption! • The Register

Proton may roll away from the Swiss

The EC’s not alone in proposing changes to privacy – new laws outlined in Switzerland could force privacy-focused groups such as Proton out of the country.

Under today’s laws, police can obtain data from services like Proton if they can get a court order for some crimes. But under the proposed laws a court order would not be required and that means Proton would leave the country, said cofounder Andy Yen.

“Swiss surveillance would be significantly stricter than in the US and the EU, and Switzerland would lose its competitiveness as a business location,” Proton’s cofounder told Swiss title Der Bund. “We feel compelled to leave Switzerland if the partial revision of the surveillance law planned by the Federal Council comes into force.”

The EU keeps banging away at this. They tried in 2018, 2020, 2021, 2023, 2024. And fortunately they keep getting stopped by people with enough brains to realise that you cannot have a safe backdoor. For security to be secure it needs to be unbreakable.

https://www.linkielist.com/?s=eu+encryption

T-Mobile SyncUP Bug Reveals Names, Images, and Locations of Random Children

T-Mobile sells a little-known GPS service called SyncUP, which allows users who are parents to monitor the locations of their children. This week, an apparent glitch in the service’s system obscured the locations of users’ own children while sending them detailed information and the locations of other, random children.

404 Media first reported on the extremely creepy bug, which appears to have impacted a large number of users. The outlet notes an outpouring of consternation and concern from web users on social platforms like Reddit and X, many of whom claimed to have been impacted. 404 also interviewed one specific user, “Jenna,” who explained her ordeal with the bug:

Jenna, a parent who uses SyncUP to keep track of her three-year-old and six-year-old children, logged in Tuesday and instead of seeing if her kids had left school yet, was shown the exact, real-time locations of eight random children around the country, but not the locations of her own kids. 404 Media agreed to use a pseudonym for Jenna to protect the privacy of her kids.

“I’m not comfortable giving my six-year-old a phone, but he takes a school bus and I just want to be able to see where he is in real time,” Jenna said. “I had put a 500 meter boundary around his school, so I get an alert when he’s leaving.”

Jenna sent 404 Media a series of screenshots that show her logged into the app, as well as the locations of children located in other states. In the screenshots, the address-level location of the children are available, as is their name and the last time the location was updated.

Even more alarmingly, the woman interviewed by 404 claims that the company didn’t show much concern for the bug. “Jenna” says she called the company and was referred to an employee who told her that a ticket had been filed in the system for the issue. A follow-up email from the concerned mother produced no response, she said.

[…]

When reached for comment by Gizmodo, a T-Mobile spokesperson told us: “Yesterday we fully resolved a temporary system issue with our SyncUP products that resulted from a planned technology update. We are in the process of understanding potential impacts to a small number of customers and will reach out to any as needed. We apologize for any inconvenience.”

The privacy implications of such a glitch are obvious and not really worth elaborating on. That said, it’s also a good reminder that the more digital access you give a company, the more potential there is for that access to fall into the wrong hands.

Source: T-Mobile Bug Reveals Names, Images, and Locations of Random Children

Your TV is watching you watch and selling that data

[…] Your TV wants your data

The TV business traditionally included three distinct entities. There’s the hardware, namely the TV itself; the entertainment, like movies and shows; and the ads, usually just commercials that interrupt your movies and shows. In the streaming era, tech companies want to control all three, a setup also known as vertical integration. If, say, Roku makes the TV, supplies the content, and sells the ads, then it stands to control the experience, set the rates, and make the most money. That’s business!

Roku has done this very well. Although it was founded in 2002, Roku broke into the market in 2008 after Netflix invested $6 million in the company to make a set-top box that enabled any TV to stream Netflix content. It was literally called the Netflix Player by Roku. Over the course of the next 15 years, Roku would grow its hardware business to include streaming sticks, which are basically just smaller set-top boxes; wireless soundbars, speakers, and subwoofers; and, after licensing its operating system to third-party TV makers, its own affordable, Roku-branded smart TVs.

[…]

The shift toward ad-supported everything has been happening across the TV landscape. People buy new TVs less frequently these days, so TV makers want to make money off the TVs they’ve already sold. Samsung has Samsung Ads, LG has LG Ad Solutions, Vizio has Vizio Ads, and so on and so forth. Tech companies, notably Amazon and Google, have gotten into the mix too, not only making software and hardware for TVs but also leveraging the massive amount of data they have on their users to sell ads on their TV platforms. These companies also sell data to advertisers and data brokers, all in the interest of knowing as much about you as possible so they can target you more effectively. It could even be used to train AI.

[…]

Is it possible to escape the ads?

Breaking free from this ad prison is tough. Most TVs on the market today come with a technology called automatic content recognition (ACR) built in. This is basically Shazam for TV — Shazam itself helped popularize the tech — and gives smart TV platforms the ability to monitor what you’re watching by either taking screenshots or capturing audio snippets while you’re watching. (This happens at the signal level, not from actual microphone recordings from the TV.)

Advertisers and TV companies use ACR tech to collect data about your habits that are otherwise hard to track, like if you watch live TV with an antenna. They use that data to build out a profile of you in order to better target ads. ACR also works with devices, like gaming consoles, that you plug into your TV through HDMI cables.
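
At a technical level, ACR generally works by reducing each frame (or audio snippet) to a compact fingerprint and matching it against a server-side catalogue of known content. The sketch below illustrates the general idea with a crude difference-hash over a downscaled luminance grid; it is a generic fingerprinting illustration, not any TV vendor’s actual algorithm, and the matching threshold is arbitrary.

```ts
// Generic frame-fingerprint sketch (difference hash). Real ACR systems use
// far more robust audio/video fingerprints; this only shows the principle.

// `frame` is a small grayscale grid, e.g. 8 rows x 9 columns of luminance.
function dHash(frame: number[][]): string {
  let bits = "";
  for (const row of frame) {
    for (let x = 0; x < row.length - 1; x++) {
      bits += row[x] > row[x + 1] ? "1" : "0"; // brighter-than-neighbour bit
    }
  }
  return bits;
}

// Hamming distance between fingerprints; a small distance suggests a match.
function distance(a: string, b: string): number {
  let d = 0;
  for (let i = 0; i < a.length; i++) if (a[i] !== b[i]) d++;
  return d;
}

// Matching: compare the live fingerprint against a reference catalogue of
// known shows, films, and ads.
function identify(live: string, catalogue: Map<string, string>): string | null {
  for (const [title, fingerprint] of catalogue) {
    if (distance(live, fingerprint) < 10) return title; // threshold is arbitrary
  }
  return null;
}
```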

Yash Vekaria, a PhD candidate at UC Davis, called the HDMI spying “the most egregious thing we found” in his research for a paper published last year on how ACR technology works. And I have to admit that I had not heard of ACR until I came across Vekaria’s research.

[…]

Unfortunately, you don’t have much of a choice when it comes to ACR on your TV. You probably enabled the technology when you first set up your TV and accepted its privacy policy. If you refuse to do this, a lot of the functions on your TV won’t work. You can also accept the policy and then disable ACR on your TV’s settings, but that could disable certain features too. In 2017, Vizio settled a class-action lawsuit for tracking users by default. If you want to turn off this tracking technology, here’s a good guide from Consumer Reports that explains how for most types of smart TVs.

[…]

it does bug me, just on principle, that I have to let a tech company wiretap my TV in order to enjoy all of the device’s features.

[…]

Source: Roku’s Moana 2 controversy is part of a bigger ad problem | Vox

A Win for human rights: France Rejects Backdoor Mandate

In a moment of clarity after initially moving forward a deeply flawed piece of legislation, the French National Assembly has done the right thing: it rejected a dangerous proposal that would have gutted end-to-end encryption in the name of fighting drug trafficking. Despite heavy pressure from the Interior Ministry, lawmakers voted Thursday night (article in French) to strike down a provision that would have forced messaging platforms like Signal and WhatsApp to allow hidden access to private conversations.

The vote is a victory for digital rights, for privacy and security, and for common sense.

The proposed law was a surveillance wishlist disguised as anti-drug legislation. Tucked into its text was a resurrection of the widely discredited “ghost” participant model—a backdoor that pretends not to be one. Under this scheme, law enforcement could silently join encrypted chats, undermining the very idea of private communication. Security experts have condemned the approach, warning it would introduce systemic vulnerabilities, damage trust in secure communication platforms, and create tools ripe for abuse.

The French lawmakers who voted this provision down deserve credit. They listened—not only to French digital rights organizations and technologists, but also to basic principles of cybersecurity and civil liberties. They understood that encryption protects everyone, not just activists and dissidents, but also journalists, medical professionals, abuse survivors, and ordinary citizens trying to live private lives in an increasingly surveilled world.

A Global Signal

France’s rejection of the backdoor provision should send a message to legislatures around the world: you don’t have to sacrifice fundamental rights in the name of public safety. Encryption is not the enemy of justice; it’s a tool that supports our fundamental human rights, including the right to have a private conversation. It is a pillar of modern democracy and cybersecurity.

As governments in the U.S., U.K., Australia, and elsewhere continue to flirt with anti-encryption laws, this decision should serve as a model—and a warning. Undermining encryption doesn’t make society safer. It makes everyone more vulnerable.

[…]

Source: A Win for Encryption: France Rejects Backdoor Mandate | Electronic Frontier Foundation

China bans facial recognition without consent and in places like hotel rooms and public bathrooms. And biometric data needs to be encrypted.

China’s Cyberspace Administration and Ministry of Public Security have outlawed the use of facial recognition without consent.

The two orgs last Friday published new rules on facial recognition and an explainer that spell out how orgs that want to use facial recognition must first conduct a “personal information protection impact assessment” that considers whether using the tech is necessary, impacts on individuals’ privacy, and risks of data leakage.

Organizations that decide to use facial recognition must encrypt biometric data, and audit the information security techniques and practices they use to protect facial scans.

Chinese organizations that go through that process and decide they want to use facial recognition can only do so after securing individuals’ consent.

The rules also ban the use of facial recognition equipment in places such as hotel rooms, public bathrooms, public dressing rooms, and public toilets.

The measures don’t apply to researchers or to what machine translation of the rules describes as “algorithm training activities” – suggesting images of citizens’ faces are fair game when used to train AI models.

The documents linked to above don’t mention whether government agencies are exempt from the new rules. The Register fancies Beijing will keep using facial recognition whenever it wants to, as it has previously expressed interest in a national identity scheme that uses the tech, and has used it to identify members of ethnic minorities.

Source: China bans facial recognition in hotels, bathrooms • The Register

23andMe files for bankruptcy: How to delete your data before it’s sold off

23andMe has capped off a challenging few years by filing for Chapter 11 bankruptcy today. Given the uncertainty around the future of the DNA testing company and what will happen to all of the genetic data it has collected, now is a critical time for customers to protect their privacy. California Attorney General Rob Bonta has recommended that past customers of the genetic testing business delete their information as a precautionary measure. Here are the steps to deleting your records with 23andMe.

  1. Log into your 23andMe account.
  2. Go to the “Settings” tab of your profile.
  3. Click View on the section called “23andMe Data.”
  4. If you want to retain a copy for your own records, download your data now.
  5. Go to the “Delete Data” section
  6. Click “Permanently Delete Data.”
  7. You will receive an email from 23andMe confirming the action. Click the link in that email to complete the process.

While the majority of an individual’s personal information will be deleted, 23andMe does keep some information for legal compliance. The details are in the company’s privacy policy.

There are a few other privacy-minded actions customers can take. First, anyone who opted to have 23andMe store their saliva and DNA can request that the sample be destroyed. That choice can be made from the Preferences tab of the account settings menu. Second, you can review whether you granted permission for your genetic data and sample to be used in scientific research. The allowance can also be checked, and revoked if you wish, from the account settings page; it’s listed under Research and Product Consents.

Source: How to delete your 23andMe data

Amazon annihilates Alexa privacy settings, turns on continuous, nonconsensual audio uploading

Even by Amazon standards, this is extraordinarily sleazy: starting March 28, each Amazon Echo device will cease processing audio on-device and instead upload all the audio it captures to Amazon’s cloud for processing, even if you have previously opted out of cloud-based processing:

https://arstechnica.com/gadgets/2025/03/everything-you-say-to-your-echo-will-be-sent-to-amazon-starting-on-march-28/

It’s easy to flap your hands at this bit of thievery and say, “surveillance capitalists gonna surveillance capitalism,” which would confine this fuckery to the realm of ideology (that is, “Amazon is ripping you off because they have bad ideas”). But that would be wrong. What’s going on here is a material phenomenon, grounded in specific policy choices and by unpacking the material basis for this absolutely unforgivable move, we can understand how we got here – and where we should go next.

Start with Amazon’s excuse for destroying your privacy: they want to do AI processing on the audio Alexa captures, and that is too computationally intensive for on-device processing. But that only raises another question: why does Amazon want to do this AI processing, even for customers who are happy with their Echo as-is, at the risk of infuriating and alienating millions of customers?

For Big Tech companies, AI is part of a “growth story” – a narrative about how these companies that have already saturated their markets will still continue to grow.

[…]

every growth stock eventually stops growing. For Amazon to double its US Prime subscriber base, it will have to establish a breeding program to produce tens of millions of new Americans, raising them to maturity, getting them gainful employment, and then getting them to sign up for Prime. Almost by definition, a dominant firm ceases to be a growing firm, and lives with the constant threat of a stock revaluation as investors’ belief in future growth crumbles and they punch the “sell” button, hoping to liquidate their now-overvalued stock ahead of everyone else.

[…]

The hype around AI serves an important material need for tech companies. By lumping an incoherent set of poorly understood technologies together into a hot buzzword, tech companies can bamboozle investors into thinking that there’s plenty of growth in their future.

[…]

let’s look at the technical dimension of this rug-pull.

How is it possible for Amazon to modify your Echo after you bought it? After all, you own your Echo. It is your property. Every first year law student learns this 18th century definition of property, from Sir William Blackstone:

That sole and despotic dominion which one man claims and exercises over the external things of the world, in total exclusion of the right of any other individual in the universe.

If the Echo is your property, how come Amazon gets to break it? Because we passed a law that lets them. Section 1201 of 1998’s Digital Millennium Copyright Act makes it a felony to “bypass an access control” for a copyrighted work:

https://pluralistic.net/2024/05/24/record-scratch/#autoenshittification

That means that once Amazon reaches over the air to stir up the guts of your Echo, no one is allowed to give you a tool that will let you get inside your Echo and change the software back. Sure, it’s your property, but exercising sole and despotic dominion over it requires breaking the digital lock that controls access to the firmware, and that’s a felony punishable by a five-year prison sentence and a $500,000 fine for a first offense.

[…]

Giving a manufacturer the power to downgrade a device after you’ve bought it, in a way you can’t roll back or defend against is an invitation to run the playbook of the Darth Vader MBA, in which the manufacturer replies to your outraged squawks with “I am altering the deal. Pray I don’t alter it any further”

[…]

Amazon says that the recordings your Echo will send to its data-centers will be deleted as soon as it’s been processed by the AI servers. Amazon’s made these claims before, and they were lies. Amazon eventually had to admit that its employees and a menagerie of overseas contractors were secretly given millions of recordings to listen to and make notes on:

https://archive.is/TD90k

And sometimes, Amazon just sent these recordings to random people on the internet:

https://www.washingtonpost.com/technology/2018/12/20/amazon-alexa-user-receives-audio-recordings-stranger-through-human-error/

Fool me once, etc. I will bet you a testicle* that Amazon will eventually have to admit that the recordings it harvests to feed its AI are also being retained and listened to by employees, contractors, and, possibly, randos on the internet.

*Not one of mine

Source: Pluralistic: Amazon annihilates Alexa privacy settings, turns on continuous, nonconsensual audio uploading (15 Mar 2025) – Pluralistic: Daily links from Cory Doctorow

How to stop Android from scanning your phone pictures for content and interpreting them

A process called Android System SafetyCore arrived in a recent update for devices running Android 9 and later. It scans a user’s photo library for explicit images and displays content warnings before viewing them. Google says “the classification of content runs exclusively on your device and the results aren’t shared with Google.”

Naturally, it will also bring similar tech to Google Messages down the line to prevent certain unsolicited images from affecting a receiver.

Google started installing SafetyCore on user devices in November 2024, and there’s no way of opting out or managing the installation. One day, it’s just there.

Users have vented their frustrations about SafetyCore ever since. Despite being able to uninstall it and opt out of image scanning, the consent-less approach that runs throughout Android nevertheless left some users upset. It can be uninstalled on Android forks like Xiaomi’s MIUI via Settings > Apps > Android System SafetyCore > Uninstall, or on stock Android via Apps/Apps & Notifications > Show system apps > SafetyCore > Uninstall or Disable. Reviewers report that in some cases the uninstall option is grayed out and it can only be disabled, while others complain that it reinstalls on the next update.
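
If the Settings route is grayed out, the same removal can usually be done over adb with USB debugging enabled. The snippet below is a small Node/TypeScript wrapper around adb; the package name com.google.android.safetycore is the one commonly reported for the app, so treat it as an assumption and verify it on your own device first (for example with adb shell pm list packages).

```ts
// Removes Android System SafetyCore for the current user via adb.
// Assumes adb is on PATH, USB debugging is enabled, and that the package
// name below is correct for your device -- verify it before running.
import { execFileSync } from "node:child_process";

const PKG = "com.google.android.safetycore"; // assumed package name

function adb(...args: string[]): string {
  return execFileSync("adb", args, { encoding: "utf8" }).trim();
}

const installed = adb("shell", "pm", "list", "packages").includes(PKG);
if (installed) {
  // "--user 0" uninstalls for the primary user without root; note the
  // package can come back with a later system update, as reviewers report.
  console.log(adb("shell", "pm", "uninstall", "--user", "0", PKG));
} else {
  console.log(`${PKG} not found on this device`);
}
```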

The app’s Google Play page is littered with negative reviews, many of which cite its installation without consent.

“In short, it is spyware. We were not informed. It feels like the right to privacy is secondary to Google’s corporate interests,” one reviewer wrote.

Source: Google’s ‘consent-less’ Android tracking probed by academics • The Register

Android tracks you before you start an app – no consent required. Also, it scans your photos.

Research from a leading academic shows Android users have advertising cookies and other gizmos working to build profiles on them even before they open their first app.

Doug Leith, professor and chair of computer systems at Trinity College Dublin, who carried out the research, claims in his write up that no consent is sought for the various identifiers and there is no way of opting out from having them run.

He found various mechanisms operating on the Android system which were then relaying the data back to Google via pre-installed apps such as Google Play Services and the Google Play store, all without users ever opening a Google app.

One of these is the “DSID” cookie, which Google explains in its documentation is used to identify a “signed in user on non-Google websites so that the user’s preference for personalized advertising is respected accordingly.” The “DSID” cookie lasts for two weeks.

Speaking about Google’s description in its documentation, Leith’s research states the explanation was still “rather vague and not as helpful as it might be,” and the main issue is that Google seeks no consent before dropping the cookie and offers no opt-out either.

Leith says the DSID advertising cookie is created shortly after the user logs into their Google account – part of the Android startup process – with a tracking file linked to that account placed into the Google Play Service’s app data folder.

This DSID cookie is “almost certainly” the primary method Google uses to link analytics and advertising events, such as ad clicks, to individual users, Leith writes in his paper [PDF].

Another tracker which cannot be removed once created is the Google Android ID, a device identifier that’s linked to a user’s Google account and created after the first connection made to the device by Google Play Services.

It continues to send data about the device back to Google even after the user logs out of their Google account and the only way to remove it, and its data, is to factory-reset the device.

Leith said he wasn’t able to ascertain the purpose of the identifier but his paper notes a code comment, presumably made by a Google dev, acknowledging that this identifier is considered personally identifiable information (PII), likely bringing it into the scope of European privacy law GDPR – still mostly intact in British law as UK GDPR.

The paper details the various other trackers and identifiers dropped by Google onto Android devices, all without user consent; according to Leith, in many cases this presents possible violations of data protection law.

Leith approached Google for a response before publishing his findings, which he delayed to allow time for a dialogue.

[…]

The findings come amid something of a recent uproar about another process called Android System SafetyCore – which arrived in a recent update for devices running Android 9 and later. It scans a user’s photo library for explicit images and displays content warnings before viewing them. Google says “the classification of content runs exclusively on your device and the results aren’t shared with Google.”

Naturally, it will also bring similar tech to Google Messages down the line to prevent certain unsolicited images from affecting a receiver.

Google started installing SafetyCore on user devices in November 2024, and there’s no way of opting out or managing the installation. One day, it’s just there.

Users have vented their frustrations about SafetyCore ever since. Despite being able to uninstall it and opt out of image scanning, the consent-less approach that runs throughout Android nevertheless left some users upset. It can be uninstalled on Android forks like Xiaomi’s MIUI via Settings > Apps > Android System SafetyCore > Uninstall, or on stock Android via Apps/Apps & Notifications > Show system apps > SafetyCore > Uninstall or Disable. Reviewers report that in some cases the uninstall option is grayed out and it can only be disabled, while others complain that it reinstalls on the next update.

The app’s Google Play page is littered with negative reviews, many of which cite its installation without consent.

“In short, it is spyware. We were not informed. It feels like the right to privacy is secondary to Google’s corporate interests,” one reviewer wrote.

Source: Google’s ‘consent-less’ Android tracking probed by academics • The Register

Mozilla updates its updated TOS for Firefox – it’s now more confusing, and it still doesn’t look private

On Wednesday we shared that we’re introducing a new Terms of Use (TOU) and Privacy Notice for Firefox. Since then, we’ve been listening to some of our community’s concerns with parts of the TOU, specifically about licensing. Our intent was just to be as clear as possible about how we make Firefox work, but in doing so we also created some confusion and concern. With that in mind, we’re updating the language to more clearly reflect the limited scope of how Mozilla interacts with user data.

Here’s what the new language will say:

You give Mozilla the rights necessary to operate Firefox. This includes processing your data as we describe in the Firefox Privacy Notice. It also includes a nonexclusive, royalty-free, worldwide license for the purpose of doing as you request with the content you input in Firefox. This does not give Mozilla any ownership in that content. 

In addition, we’ve removed the reference to the Acceptable Use Policy because it seems to be causing more confusion than clarity.

Privacy FAQ

We also updated our Privacy FAQ to better address legal minutia around terms like “sells.” While we’re not reverting the FAQ, we want to provide more detail about why we made the change in the first place.

TL;DR Mozilla doesn’t sell data about you (in the way that most people think about “selling data”), and we don’t buy data about you. We changed our language because some jurisdictions define “sell” more broadly than most people would usually understand that word. Firefox has built-in privacy and security features, plus options that let you fine-tune your data settings.

The reason we’ve stepped away from making blanket claims that “We never sell your data” is because, in some places, the LEGAL definition of “sale of data” is broad and evolving. As an example, the California Consumer Privacy Act (CCPA) defines “sale” as the “selling, renting, releasing, disclosing, disseminating, making available, transferring, or otherwise communicating orally, in writing, or by electronic or other means, a consumer’s personal information by [a] business to another business or a third party” in exchange for “monetary” or “other valuable consideration.”

[…]

Source: An update on our Terms of Use

So this legal definition rhymes with what I would expect “sell” to mean. Don’t transfer my data to a third party – even better, don’t collect my data at all.

It’s a shame, as Firefox is my preferred browser and it’s not based on Google’s browser. So I am looking at the Zen browser and the Floorp browser now.

After Snowden and now Trump, Europe finally begins to worry about US-controlled clouds

In a recent blog post titled “It is no longer safe to move our governments and societies to US clouds,” Bert Hubert, an entrepreneur, software developer, and part-time technical advisor to the Dutch Electoral Council, articulated such concerns.

“We now have the bizarre situation that anyone with any sense can see that America is no longer a reliable partner, and that the entire large-scale US business world bows to Trump’s dictatorial will, but we STILL are doing everything we can to transfer entire governments and most of our own businesses to their clouds,” wrote Hubert.

Hubert didn’t offer data to support that statement, but European Commission stats show that close to half of European enterprises rely on cloud services, a market led by Amazon, Microsoft, Google, Oracle, Salesforce, and IBM – all US-based companies.

While concern about cloud data sovereignty became fashionable back in 2013 when former NSA contractor Edward Snowden disclosed secrets revealing the scope of US signals intelligence gathering and fled to Russia, data privacy worries have taken on new urgency in light of the Trump administration’s sudden policy shifts.

In the tech sphere those moves include removing members of the US Privacy and Civil Liberties Oversight Board, which safeguards data under the EU-US Data Privacy Framework, and alleged flouting of federal data rules to advance policy goals. Europeans therefore have good reason to wonder how much they can trust data privacy assurances from US cloud providers amid their shows of obsequious deference to the new regime.

And there’s also a practical impetus for the unrest: organizations that use Microsoft Office 2016 and 2019 have to decide whether they want to move to Microsoft’s cloud come October 14, 2025, when support officially ends. Microsoft is encouraging customers to move to Microsoft 365 which is tied to the cloud. But that looks riskier now than it did under less contentious transatlantic relations.

The Register spoke with Hubert about his concerns and the situation in which Europe now finds itself.

[…]

Source: Europe begins to worry about US-controlled clouds • The Register

It was truly unbelievable that the EU was using US clouds in the first place, for reasons ranging from technical to cost to privacy, but they just keep blundering on.

Google pulls plug on ad blockers such as uBlock Origin by killing Manifest v2

Google’s purge of Manifest v2-based extensions from its Chrome browser is underway, as many users over the past few days may have noticed.

The popular content-blocking add-on uBlock Origin, which is built on Manifest v2, is now automatically disabled for many users of the ubiquitous browser as the V3 rollout continues.

[…]

According to Google, the decision to shift to V3 is all in the name of improving its browser’s security, privacy, and performance. However, the transition to the new specification also means that some extensions will struggle due to limitations in the new API.

In September 2024, the team behind uBlock Origin noted that one of the most significant changes was around the webRequest API, used to intercept and modify network requests. Extensions such as uBlock Origin extensively use the API to block unwanted content before it loads.
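
To make the change concrete: under Manifest V2 an extension could register a blocking webRequest listener and decide in its own code whether each request goes through, while Manifest V3 removes that blocking mode and limits extensions to declarativeNetRequest rules that the browser itself evaluates. The sketch below contrasts the two on a made-up ad hostname; it is a toy illustration, not uBlock Origin’s actual code.

```ts
// Manifest V2 style: a blocking listener can inspect every request and
// decide, in extension code, whether to cancel it.
chrome.webRequest.onBeforeRequest.addListener(
  (details) => ({ cancel: details.url.includes("ads.example.com") }),
  { urls: ["<all_urls>"] },
  ["blocking"] // this blocking mode is what Manifest V3 removes
);

// Manifest V3 style: the extension can only register declarative rules;
// the browser applies them, and arbitrary per-request logic is gone.
chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [1],
  addRules: [
    {
      id: 1,
      priority: 1,
      action: { type: chrome.declarativeNetRequest.RuleActionType.BLOCK },
      condition: {
        urlFilter: "||ads.example.com^",
        resourceTypes: [chrome.declarativeNetRequest.ResourceType.SCRIPT],
      },
    },
  ],
});
```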

[…]

Ad-blockers and privacy tools are the worst hit by the changes, and affected users – because let’s face it, most Chrome users won’t be using an ad-blocker – can switch to an alternative browser for something like the original experience, or they can switch to a different extension which is unlikely to have the same capabilities.

In its post, the uBlock Origin team recommends a move to Firefox and continued use of uBlock Origin there – a switch to a browser that will keep supporting Manifest v2.

[…]

Source: Google continues pulling the plug on Manifest v2 • The Register