Google Pixel Bug Turns Microphone on for Incoming Callers Leaving Voicemail

[…] Called “Take a Message,” the buggy feature was released last year and is supposed to automatically transcribe voicemails as they’re coming in, as well as detect and mark spam calls. Unfortunately, according to reports from multiple users on Reddit (as initially spotted by 9to5Google), the feature has started turning on the microphone while taking voicemails, allowing whoever is leaving you a voicemail to hear you.

[…]

The issue has been reported to affect Pixel devices ranging from the Pixel 4 to the Pixel 10, and on a recent support page, Google is finally acknowledging it. However, the company’s action might not be enough, depending on how cautious you want to be.

According to Community Manager Siri Tejaswini, the company has “investigated this issue,” and has confirmed it “affects a very small subset of Pixel 4 and 5 devices under very specific and rare circumstances.” The post doesn’t go any further on the how and why of the diagnosis, but says that Google is now disabling Take a Message and “next-gen Call Screen features” on these devices.

[…]

While it’s encouraging that Google is taking action on the Take a Message bug, the company only seems to be acknowledging it for Pixel 4 and Pixel 5 models, at least for now. I’ve asked Google whether owners of other Pixel models should be worried, as user reports seem split on this. Still, because some have reported the issue even on the most up-to-date Pixel phones, if you want to err on the side of caution, it might be worth disabling Take a Message on your device, regardless of its model number.

To do this, open your Phone app, then tap the three-lined menu icon at the top-left of the page. Navigate to Settings > Call Assist > Take a Message, and toggle the feature off.

Source: This Pixel Bug Leaked Audio to Incoming Callers, and Google’s Fix Might Not Be Enough | Lifehacker

ICE takes aim at data held by advertising and tech firms

Let us not forget that the reason Nazi Germany was so efficient at deporting Jews from the Netherlands was in large part the detailed population registries the Netherlands kept at the time, which recorded religious and ethnic information on its population.

It’s not enough to have its agents in streets and schools; ICE now wants to see what data online ads already collect about you. The US Immigration and Customs Enforcement last week issued a Request for Information (RFI) asking data and ad tech brokers how they could help in its mission.

The RFI is not a solicitation for bids. Rather, it represents an attempt to conduct market research into the spectrum of data – personal, financial, location, health, and so on – that ICE investigators can source from technology and advertising companies.

“[T]he Government is seeking to understand the current state of Ad Tech compliant and location data services available to federal investigative and operational entities, considering regulatory constraints and privacy expectations of support investigations activities,” the RFI explains.

Issued on Friday, January 23, 2026, one day prior to the shooting of VA nurse Alex Pretti by a federal immigration agent, two weeks after the shooting of Renée Good, and three weeks after the shooting of Keith Porter Jr, the RFI lands amid growing disapproval of ICE tactics and mounting pressure to withhold funding for the agency.

ICE did not immediately respond to a request to elaborate on how it might use ad tech data and to share whether any companies have responded to its invitation.

The RFI follows a similar solicitation published last October for a contractor capable of providing ICE with open source intelligence and social media information to assist the ICE Enforcement and Removal Operations (ERO) directorate’s Targeting Operations Division – tasked with finding and removing “aliens that pose a threat to public safety or national security.”

[…]

Tom Bowman, policy counsel with the Center for Democracy & Technology’s (CDT) Security & Surveillance Project, told The Register in a phone interview that ICE is attempting to rebrand surveillance as a commercial transaction.

“But that doesn’t make the surveillance any less intrusive or any less constitutionally suspect,” said Bowman. “This inquiry specifically underscores what really is a long-standing problem – that government agencies have been able to sidestep Fourth Amendment protections by purchasing data that would otherwise need a warrant to collect.”

The data derived from ad tech and various technology businesses, said Bowman, can reveal intimate details about people’s lives, including visits to medical facilities and places of worship.

[…]

“Ad tech compliance regimes were never designed to protect people from government surveillance or coercive enforcement,” he said. “Ad tech data is often collected via consent that is meaningless. The data flows are opaque. And then these types of downstream uses are really difficult to control.”

Bowman argues that while there’s been a broad failure to meaningfully regulate data brokers, legislative solutions are possible.

[…]

Source: ICE takes aim at data held by advertising and tech firms • The Register

Following Apple, now Google to pay $68m to settle lawsuit claiming it recorded and sold private conversations

Google has agreed to pay $68m (£51m) to settle a lawsuit claiming it secretly listened to people’s private conversations through their phones.

Users accused Google Assistant – a virtual assistant present on many Android devices – of recording private conversations after it was inadvertently triggered on their devices.

They claimed the recordings were then shared with advertisers in order to send them targeted advertising.

The BBC has contacted Google for comment. But in a filing seeking to settle the case, it denied wrongdoing and said it was seeking to avoid litigation.

Google Assistant is designed to wait in standby mode until it hears a particular phrase – typically “Hey Google” – which activates it.

The phone then records what it hears and sends the recording to Google’s servers where it can be analysed.
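
To make the alleged failure mode concrete, here is a minimal sketch of the hotword-gated loop such assistants are typically built around. This is illustrative only, not Google’s code: `detect_hotword`, `record_clip`, and `upload` are hypothetical stand-ins. The privacy problem in the lawsuit amounts to a false accept at the gating step.

```python
# Illustrative sketch of a hotword-gated assistant loop; not Google's code.
# detect_hotword(), record_clip(), and upload() are hypothetical stand-ins.
def detect_hotword(frame: bytes) -> bool:
    """Stand-in for an on-device wake-word model."""
    return False

def record_clip() -> bytes:
    """Stand-in for capturing the utterance that follows the wake phrase."""
    return b""

def upload(clip: bytes) -> None:
    """Stand-in for sending audio to the server for analysis."""

def assistant_loop(mic_frames):
    for frame in mic_frames:
        # In standby, frames are supposed to be checked locally and discarded.
        if detect_hotword(frame):
            # A false accept here means a private conversation is recorded
            # and sent off-device: the behavior the plaintiffs alleged.
            upload(record_clip())

assistant_loop([b"\x00" * 320] * 5)  # five silent 10 ms frames of 16-bit/16 kHz audio
```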

[…]

The claim has been brought as a class action lawsuit rather than an individual case – meaning if it is approved, the money will be paid out across many different claimants.

Those eligible for a payout will have owned Google devices dating back to May 2016.

But lawyers for the plaintiffs may ask for up to one-third of the settlement – amounting to about $22m in legal fees.

It follows a similar case in January where Apple agreed to pay $95m to settle a case alleging some of its devices were listening to people through its voice-activated assistant Siri without their permission.

The tech firm also denied any wrongdoing, as well as claims that it “recorded, disclosed to third parties, or failed to delete, conversations recorded as the result of a Siri activation” without consent.

Source: Google to pay $68m to settle lawsuit claiming it recorded private conversations

Microsoft will give the FBI your BitLocker keys if asked. Can do so because of cloud accounts.

Great target for hackers then, the server with unencrypted BitLocker keys on it.

Microsoft has confirmed in a statement to Forbes that the company will provide the FBI access to BitLocker encryption keys when served with a valid legal order. These keys make it possible to decrypt and access the data on a computer running Windows, giving law enforcement the means to break into a device and read its data.
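
If you’d rather know what protectors your own volume has (and whether a cloud-escrowed recovery password is among them), Windows ships the built-in `manage-bde` tool. A minimal sketch of calling it from Python, to be run from an elevated prompt on Windows:

```python
# Minimal sketch: list the BitLocker key protectors on drive C: using the
# built-in manage-bde tool. Run from an elevated (administrator) prompt.
import subprocess

result = subprocess.run(
    ["manage-bde", "-protectors", "-get", "C:"],
    capture_output=True, text=True, check=False,
)
print(result.stdout or result.stderr)
```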

The news comes as Forbes reports that Microsoft gave the FBI the BitLocker encryption keys to access a device in Guam that law enforcement believed to have “evidence that would help prove individuals handling the island’s Covid unemployment assistance program were part of a plot to steal funds” in early 2025.

Source: Microsoft gave FBI BitLocker keys, raising privacy fears | Windows Central

What Happened After Security Researchers Found 60 Flock Cameras Livestreaming to the Internet

A couple months ago, YouTuber Benn Jordan “found vulnerabilities in some of Flock’s license plate reader cameras,” reports 404 Media’s Jason Koebler. “He reached out to me to tell me he had learned that some of Flock’s Condor cameras were left live-streaming to the open internet.”

This led to a remarkable article where Koebler confirmed the breach by visiting a Flock surveillance camera mounted on a California traffic signal. (“On my phone, I am watching myself in real time as the camera records and livestreams me — without any password or login — to the open internet… Hundreds of miles away, my colleagues are remotely watching me too through the exposed feed.”)

Flock left livestreams and administrator control panels for at least 60 of its AI-enabled Condor cameras around the country exposed to the open internet, where anyone could watch them, download 30 days’ worth of archived video, change settings, view log files, and run diagnostics. Unlike many of Flock’s cameras, which are designed to capture license plates as people drive by, Flock’s Condor cameras are pan-tilt-zoom (PTZ) cameras designed to record and track people, not vehicles. Condor cameras can be set to automatically zoom in on people’s faces… The exposure was initially discovered by YouTuber and technologist Benn Jordan and was shared with security researcher Jon “GainSec” Gaines, who recently found numerous vulnerabilities in several other models of Flock’s automated license plate reader (ALPR) cameras.
Jordan appeared this week as a guest on Koebler’s own YouTube channel, and also released a video of his own about the experience, titled “We Hacked Flock Safety Cameras in under 30 Seconds.” (Thanks to Slashdot reader beadon for sharing the link.) Together, Jordan and 404 Media also created another video three weeks ago titled “The Flock Camera Leak is Like Netflix for Stalkers,” which includes footage he says was “completely accessible at the time Flock Safety was telling cities that the devices are secure after they’re deployed.”

The video decries cities “too lazy to conduct their own security audit or research the efficacy versus risk,” but also calls weak security “an industry-wide problem.” Jordan explains in the video how he “very easily found the administration interfaces for dozens of Flock Safety cameras…” — and also what happened next:

None of the data or video footage was encrypted. There was no username or password required. These were all completely public-facing, for the world to see… Making any modification to the cameras is illegal, so I didn’t do this. But I had the ability to delete any of the video footage or evidence by simply pressing a button. I could see the paths where all of the evidence files were located on the file system…

During and after the process of conducting that research and making that video, I was visited by the police and had what I believed to be private investigators outside my home photographing me and my property and bothering my neighbors. Jon Gaines, aka GainSec, the brains behind most of this research, lost employment within 48 hours of the video being released. And the sad reality is that I don’t view these things as consequences or punishment for researching security vulnerabilities. I view these as consequences and punishment for doing it ethically and transparently.

I’ve been contacted by people on or communicating with civic councils who found my videos concerning, and they shared Flock Safety’s response with me. The company claimed that the devices in my video did not reflect the security standards of the ones being publicly deployed. The CEO even posted on LinkedIn and boasted about Flock Safety’s security policies. So, I formally and publicly offered to personally fund security research into Flock Safety’s deployed ecosystem. But the law prevents me from touching their live devices. So, all I needed was their permission so I wouldn’t get arrested. And I was even willing to let them supervise this research.

I got no response.

So instead, he read Flock’s official response to a security/surveillance industry research group — while standing in front of one of their security cameras, streaming his reading to the public internet.

“Might as well. It’s my tax dollars that paid for it.”

“‘Flock is committed to continuously improving security…’”

Source: What Happened After Security Researchers Found 60 Flock Cameras Livestreaming to the Internet | Slashdot

For more on why Flock cameras are problematic, read here

Signal Founder Creates Truly Private GPT: Confer

When you use an AI service, you’re handing over your thoughts in plaintext. The operator stores them, trains on them, and, inevitably, will monetize them. You get a response; they get everything.

Confer works differently. In the previous post, we described how Confer encrypts your chat history with keys that never leave your devices. The remaining piece to consider is inference—the moment your prompt reaches an LLM and a response comes back.

Traditionally, end-to-end encryption works when the endpoints are devices under the control of a conversation’s participants. However, AI inference requires a server with GPUs to be an endpoint in the conversation. Someone has to run that server, but we want to prevent the people who are running it (us) from seeing prompts or the responses.

Confidential computing

This is the domain of confidential computing. Confidential computing uses hardware-enforced isolation to run code in a Trusted Execution Environment (TEE). The host machine provides CPU, memory, and power, but cannot access the TEE’s memory or execution state.

LLMs are fundamentally stateless—input in, output out—which makes them ideal for this environment. For Confer, we run inference inside a confidential VM. Your prompts are encrypted from your device directly into the TEE using Noise Pipes, processed there, and responses are encrypted back. The host never sees plaintext.
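
As a rough illustration of that design (not Confer’s actual implementation, which uses the full Noise Pipes handshake), here is a sketch of encrypting a prompt to a key that never leaves the TEE, using an X25519 exchange and an AEAD from Python’s `cryptography` package. The key detail is that the host only ever forwards ciphertext.

```python
# Illustrative sketch, not Confer's code: a client encrypts a prompt to a key
# held only inside the TEE. The TEE's public key would be shipped to the client
# inside a verified attestation report (see the next paragraph).
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# TEE side: keypair generated inside the enclave; only the public half leaves.
tee_private = X25519PrivateKey.generate()
tee_public = tee_private.public_key()

# Client side: ephemeral key, shared secret, symmetric key derivation.
client_private = X25519PrivateKey.generate()
shared = client_private.exchange(tee_public)
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"client->tee prompt channel").derive(shared)

# Encrypt the prompt; the host machine forwarding this never sees plaintext.
nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(key).encrypt(nonce, b"my private prompt", None)

# Inside the TEE: the same exchange from the other side recovers the key.
tee_shared = tee_private.exchange(client_private.public_key())
tee_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"client->tee prompt channel").derive(tee_shared)
assert ChaCha20Poly1305(tee_key).decrypt(nonce, ciphertext, None) == b"my private prompt"
```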

But this raises an obvious concern: even if we have encrypted pipes in and out of an encrypted environment, it really matters what is running inside that environment. The client needs assurance that the code running is actually doing what it claims.

[…]

Source: Private inference | Confer Blog

Your smart TV is watching you and nobody’s stopping it

At the end of last year, Texas Attorney General Ken Paxton sued five of the largest TV companies, accusing them of excessive and deceptive surveillance of their customers.

Paxton reserved special venom for the two China-based members of the quintet. His argument: unlike Sony, Samsung, and LG, if Hisense and TCL conducted surveillance the way the lawsuits allege, they could be required to share all that data with the Chinese Communist Party.

It is a rare pleasure to state that legal action against tech companies is cogent, timely, focused, and – if the allegations are true – deserves to succeed. It is less pleasant to predict that even if one, several, or all of these manufacturers did what they’re accused of, and were sanctioned for it, it would not put the safeguards in place to stop such practices from recurring.

At the heart of the cases is the fact that most smart TVs use Automatic Content Recognition (ACR) to send rapid-fire screenshots back to company servers, where they are analyzed to build a fine-grained picture of your TV usage. This sometimes covers not just streaming video but whatever apps or external devices are displaying, and the allegations are that every other bit of personal data the set can scry is also pulled in: installed apps can carry trackers, and data from other devices can be swept up.
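
To see why these uploads can be both tiny and revealing, here is a toy sketch of the fingerprinting idea behind ACR. Real vendors use proprietary matching pipelines; the “average hash” below is only an illustrative stand-in for how a captured frame gets boiled down before phoning home.

```python
# Toy illustration of ACR-style fingerprinting: reduce a captured frame to a
# 64-bit "average hash" that can be matched server-side against known content.
# Real ACR pipelines are proprietary; this only shows why the upload can be
# tiny and still identify exactly what is on screen.
from PIL import Image

def average_hash(frame: Image.Image) -> int:
    small = frame.convert("L").resize((8, 8))   # grayscale thumbnail
    pixels = list(small.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > avg)          # one bit per thumbnail pixel
    return bits

frame = Image.new("L", (1920, 1080), 128)       # stand-in for a grabbed frame
print(hex(average_hash(frame)))                 # the TV would POST this home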

These lawsuits aside, smart TV companies more generally boast of their prying prowess to the ecosystem of data exploiters from which they make their money. The companies are much less open about the mechanisms and amount of data collection, and deploy a barrage of defenses to entice customers into turning the stuff on and stop them from turning it off. You may have already seen massive on-screen Ts&Cs with only ACCEPT as an option, ACR controls buried in labyrinthine menu jails, features that stop working even if you complete the obstacle course – all this is old news.

How old are these practices? TV maker Vizio got hit by multiple suits between 2015 and 2017, paying $2.2 million in fines to the Federal Trade Commission and the state of New Jersey, as well as settling related class actions to the tune of $17 million. The FTC said the fines settled claims the maker had used software installed on its TVs to collect viewing data from 11 million sets without their owners’ knowledge or consent. A court order said the manufacturer had to delete data collected before 2016 and promise to “prominently disclose and obtain affirmative express consent” for data collection and sharing from then on.

Yet ten years on, the problem has only got worse. There is no law against data collection, and companies often eat the fines, adjust their behavior to the barest minimum compliance, and set about finding new ways to entomb your digital twin in their datacenters.

It’s not even as if more regulation helps. The European GDPR data protection and privacy regs give consumers powerful rights and companies strict obligations, which smart TV makers do not rush to observe. Researchers claim the problem is growing no matter which side of the Atlantic your TV is watching you on.

[…]

Source: Your smart TV is watching you and nobody’s stopping it • The Register

How Cops Are Using Flock’s License Plate Camera Network To Surveil Protesters And Activists

It’s no secret that 2025 has given Americans plenty to protest about. But as news cameras showed protesters filling streets of cities across the country, law enforcement officers—including U.S. Border Patrol agents—were quietly watching those same streets through different lenses: Flock Safety automated license plate readers (ALPRs) that tracked every passing car.

Through an analysis of 10 months of nationwide searches on Flock Safety’s servers, we discovered that more than 50 federal, state, and local agencies ran hundreds of searches through Flock’s national network of surveillance data in connection with protest activity. In some cases, law enforcement specifically targeted known activist groups, demonstrating how mass surveillance technology increasingly threatens our freedom to demonstrate.

Flock Safety provides ALPR technology to thousands of law enforcement agencies. The company installs cameras throughout its customers’ jurisdictions, and these cameras photograph every car that passes, documenting the license plate, color, make, model, and other distinguishing characteristics. This data is paired with time and location, and uploaded to a massive searchable database. Flock Safety encourages agencies to share the data they collect broadly with other agencies across the country. It is common for an agency to search thousands of networks nationwide even when they don’t have reason to believe a targeted vehicle left the region.

Via public records requests, EFF obtained datasets representing more than 12 million searches logged by more than 3,900 agencies between December 2024 and October 2025. The data shows that agencies logged hundreds of searches related to the 50501 protests in February, the Hands Off protests in April, the No Kings protests in June and October, and other protests in between.

[…]

While EFF and other civil liberties groups argue the law should require a search warrant for such searches, police are simply prompted to enter text into a “reason” field in the Flock Safety system. Usually this is only a few words–or even just one.

In these cases, that word was often just “protest.”

Crime does sometimes occur at protests, whether that’s property damage, pick-pocketing, or clashes between groups on opposite sides of a protest. Some of these searches may have been tied to an actual crime that occurred, even though in most cases officers did not articulate a criminal offense when running the search. But the truth is, the only reason an officer is able to even search for a suspect at a protest is because ALPRs collected data on every single person who attended the protest.

[…]

In a few cases, police were using Flock’s ALPR network to investigate threats made against attendees or incidents where motorists opposed to the protests drove their vehicle into crowds. For example, throughout June 2025, an Arizona Department of Public Safety officer logged three searches for “no kings rock threat,” and a Wichita (Kan.) Police Department officer logged 22 searches for various license plates under the reason “Crime Stoppers Tip of causing harm during protests.”

Even when law enforcement is specifically looking for vehicles engaged in potentially criminal behavior such as threatening protesters, it cannot be ignored that mass surveillance systems work by collecting data on everyone driving to or near a protest—not just those under suspicion.

Border Patrol’s Expanding Reach

As U.S. Border Patrol (USBP), ICE, and other federal agencies tasked with immigration enforcement have massively expanded operations into major cities, advocates for immigrants have responded through organized rallies, rapid-response confrontations, and extended presences at federal facilities.

USBP has made extensive use of Flock Safety’s system for immigration enforcement, but also to target those who object to its tactics. In June, a few days after the No Kings Protest, USBP ran three searches for a vehicle using the descriptor “Portland Riots.”

[…]

Fighting Back Against ALPR

ALPR systems are designed to capture information on every vehicle that passes within view. That means they don’t just capture data on “criminals” but on everyone, all the time—and that includes people engaged in their First Amendment right to publicly dissent. Police are sitting on massive troves of data that can reveal who attended a protest, and this data shows they are not afraid to use it.

Our analysis only includes data where agencies explicitly mentioned protests or related terms in the “reason” field when documenting their search. It’s likely that scores more were conducted under less obvious pretexts and search reasons. According to our analysis, approximately 20 percent of all searches we reviewed listed vague language like “investigation,” “suspect,” and “query” in the reason field. Those terms could well be cover for spying on a protest, an abortion prosecution, or an officer stalking a spouse, and no one would be the wiser–including the agencies whose data was searched. Flock has said it will now require officers to select a specific crime under investigation, but that can and will also be used to obfuscate dubious searches.
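
The underlying methodology is easy to sketch. Assuming a CSV export of the audit logs with a free-text reason column (the file name and column names below are assumptions for illustration, not Flock’s actual schema), the protest and vague-reason filters look roughly like this:

```python
# Rough sketch of the audit-log analysis described above. The file name and
# column names are assumptions for illustration, not Flock's real schema.
import pandas as pd

logs = pd.read_csv("flock_search_audit.csv")    # hypothetical export
reason = logs["reason"].fillna("").str.lower().str.strip()

protest_terms = ["protest", "no kings", "hands off", "50501"]
vague_terms = {"investigation", "suspect", "query"}

protest_hits = logs[reason.str.contains("|".join(protest_terms))]
vague_hits = logs[reason.isin(vague_terms)]

print(f"{len(protest_hits)} searches mention protest-related terms")
print(f"{len(vague_hits) / len(logs):.0%} of searches logged only a vague reason")
```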

For protesters, this data should serve as confirmation that ALPR surveillance has been and will be used to target activities protected by the First Amendment. Depending on your threat model, this means you should think carefully about how you arrive at protests, and explore options such as biking, walking, carpooling, taking public transportation, or simply parking a little farther from the action. Our Surveillance Self-Defense project has more information on steps you could take to protect your privacy when traveling to and attending a protest.

[…]

Everyone should have the right to speak up against injustice without ending up in a database.

Source: How Cops Are Using Flock Safety’s ALPR Network To Surveil Protesters And Activists | Techdirt

New EU Jolla Phone Now Available for Pre-Order as an Independent No Spyware Linux Phone

Jolla kicked off a campaign for a new Jolla Phone, which they call the independent European Do It Together (DIT) Linux phone, shaped by the people who use it.

“The Jolla Phone is not based on Big Tech technology. It is governed by European privacy thinking and a community-led model.”

The new Jolla Phone is powered by a high-performing MediaTek 5G SoC, and features 12GB RAM, 256GB of storage expandable to 2TB with a microSDXC card, a 6.36-inch FullHD AMOLED display with ~390ppi, a 20:9 aspect ratio, and Gorilla Glass, plus a user-replaceable 5,500mAh battery.

The Linux phone also features 4G/5G support with dual nano-SIM and a global roaming modem configuration, Wi-Fi 6, Bluetooth 5.4, NFC, 50MP wide and 13MP ultrawide main cameras, a front-facing wide-lens selfie camera, a fingerprint reader on the power key, a user-changeable back cover, and an RGB indicator LED.

On top of that, the new Jolla Phone promises a user-configurable physical Privacy Switch that lets you turn off the microphone, Bluetooth, Android apps, or whatever you wish.

The device will be available in three colors: Snow White, Kaamos Black, and The Orange. All the specs of the new Jolla Phone were voted on by Sailfish OS community members over the past few months.

Honouring the original Jolla Phone form factor and design, the new model ships with Sailfish OS (with support for Android apps), a Linux-based European alternative to dominating mobile operating systems that promises a minimum of 5 years of support, no tracking, no calling home, and no hidden analytics.

“Mainstream phones send vast amounts of background data. A common Android phone sends megabytes of data per day to Google even if the device is not used at all. Sailfish OS stays silent unless you explicitly allow connections,” said Jolla.

The new Jolla Phone is now available for pre-order for 99 EUR and will only be produced if at least 2,000 pre-orders are placed within one month, by January 4th, 2026. The full price of the Linux phone will be 499 EUR (incl. local VAT), and the 99 EUR pre-order payment is fully refundable and will be deducted from the full price.

The device will be manufactured and sold in Europe, but Jolla says it will design the cellular band configuration to enable global travel as much as possible, including, for example, roaming on US carrier networks. The initial sales markets are the EU, the UK, Switzerland, and Norway.

Source: New Jolla Phone Now Available for Pre-Order as an Independent Linux Phone – 9to5Linux

New hotness in democracy: if the people say no to mass surveillance, do it again right after you have said you won’t do it. Not EU this time: it’s India

You know what they say: If at first you don’t succeed at mass government surveillance, try, try again. Only two days after India backpedaled on its plan to force smartphone makers to preinstall a state-run “cybersecurity” app, Reuters reports that the country is back at it. It’s said to be considering a telecom industry proposal with another draconian requirement. This one would require smartphone makers to enable always-on satellite-based location tracking (Assisted GPS).

The measure would require location services to remain on at all times, with no option to switch them off. The telecom industry also wants phone makers to disable notifications that alert users when their carriers have accessed their location.

[…]

Source: India is reportedly considering another draconian smartphone surveillance plan

Looks like the Indians took a page out of the Danish playbook for Chat Control and for turning the EU into a 1984-style Brave New World.

India demands smartphone makers install government app

India’s government has issued a directive that requires all smartphone manufacturers to install a government app on every handset in the country and has given them 90 days to get the job done – and to ensure users can’t remove the code.

The app is called “Sanchar Saathi” and is a product of India’s Department of Telecommunications (DoT).

On Google Play and Apple’s App Store, the Department describes the app as “a citizen centric initiative … to empower mobile subscribers, strengthen their security and increase awareness about citizen centric initiatives.”

The app does those jobs by allowing users to report incoming calls or messages – even on WhatsApp – they suspect are attempts at fraud. Users can also report incoming calls for which caller ID reveals the +91 country code, as India’s government thinks that’s an indicator of a possible illegal telecoms operator.

Users can also block their device if they lose it or suspect it was stolen, an act that will prevent it from working on any mobile network in India.

Another function allows lookup of IMEI numbers so users can verify if their handset is genuine.
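
The genuineness lookup itself runs against the government’s IMEI database, but part of what makes an IMEI checkable at all is its structure: the 15th digit is a Luhn check digit. A standalone sketch of that checksum (illustrative, not Sanchar Saathi’s code; it validates structure only, not whether a device is blacklisted):

```python
# Validate an IMEI's Luhn check digit. Illustrative only: a passing checksum
# means the number is well-formed, not that the handset is genuine.
def imei_checksum_ok(imei: str) -> bool:
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    for i, ch in enumerate(imei):
        d = int(ch)
        if i % 2 == 1:      # double every second digit from the left
            d *= 2
            if d > 9:
                d -= 9      # same as summing the two digits of the product
        total += d
    return total % 10 == 0

print(imei_checksum_ok("490154203237518"))  # a commonly cited valid example -> True
```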

Spam and scams delivered by calls or TXTs are pervasive around the world, and researchers last year found that most Indian netizens receive three or more dodgy communiqués every day. This app has obvious potential to help reduce such attacks.

An announcement from India’s government states that cybersecurity at telcos is another reason for the requirement to install the app.

“Spoofed/ Tampered IMEIs in telecom network leads to situation where same IMEI is working in different devices at different places simultaneously and pose challenges in action against such IMEIs,” according to the announcement. “India has [a] big second-hand mobile device market. Cases have also been observed where stolen or blacklisted devices are being re-sold. It makes the purchaser abettor in crime and causes financial loss to them. The blocked/blacklisted IMEIs can be checked using Sanchar Saathi App.”

That motive is likely the reason India has required handset-makers to install Sanchar Saathi on existing handsets with a software update.

The directive also requires the app to be pre-installed, “visible, functional, and enabled for users at first setup.” Manufacturers may not disable or restrict its features and “must ensure the App is easily accessible during device setup.”

Those functions mean India’s government will soon have a means of accessing personal info on hundreds of millions of devices.

Apar Gupta, founder and director of India’s Internet Freedom Foundation, has criticized the directive on grounds that Sanchar Saathi isn’t fit for purpose. “Rather than resorting to coercion and mandating it to be installed the focus should be on improving it,” he wrote.

[…]

Source: India demands smartphone makers install government app • The Register

Canadian data order risks blowing a hole in EU sovereignty

A Canadian court has ordered French cloud provider OVHcloud to hand over customer data stored in Europe, potentially undermining the provider’s claims about digital sovereignty protections.

According to documents seen by The Register, the Royal Canadian Mounted Police (RCMP) issued a Production Order in April 2024 demanding subscriber and account data linked to four IP addresses on OVH servers in France, the UK, and Australia as part of a criminal investigation.

OVH has a Canadian arm, which was the jumping-off point for the courts, but OVH Group is a French company, so the data in France should be protected from prying eyes. Or perhaps not.

Rather than using established Mutual Legal Assistance Treaties (MLAT) between Canada and France, the RCMP sought direct disclosure through OVH’s Canadian subsidiary.

This puts OVH in an impossible position. French law prohibits such data sharing outside official treaties, with penalties up to €90,000 and six months imprisonment. But refusing the Canadian order risks contempt of court charges.

[…]

Under Trump 2.0, economic and geopolitical relations between Europe and the US have become increasingly volatile, something Microsoft acknowledged in April.

Against this backdrop, concerns about the US CLOUD Act are growing. Through the legislation, US authorities can request – via warrant or subpoena – access to data hosted by US corporations regardless of where in the world that data is stored. Hyperscalers claim they have received no such requests with respect to European customers, but the risk remains and European cloud providers have used this as a sales tactic by insisting digital information they hold is protected.

In the OVH case, if Canadian authorities are able to force access to data held on European servers rather than navigate official channels (for example, international treaties), the implications could be severe.

[…]

Earlier this week, GrapheneOS announced it no longer had active servers in France and was in the process of leaving OVH.

The privacy-focused mobile outfit said, “France isn’t a safe country for open source privacy projects. They expect backdoors in encryption and for device access too. Secure devices and services are not going to be allowed. We don’t feel safe using OVH for even a static website with servers in Canada/US via their Canada/US subsidiaries.”

In August, an OVH legal representative crowed over the admission by Microsoft that it could not guarantee data sovereignty.

It would be deeply ironic if OVH were unable to guarantee the same thing because the company has a subsidiary in Canada.

[…]

Source: Canadian data order risks blowing a hole in EU sovereignty • The Register

That didn’t take long: A few days after Chat Control, European Parliament calls for Age Verification on Social Media, 16+

On Wednesday, MEPs adopted a non-legislative report by 483 votes in favour, 92 against and with 86 abstentions, expressing deep concern over the physical and mental health risks minors face online and calling for stronger protection against the manipulative strategies that can increase addiction and that are detrimental to children’s ability to concentrate and engage healthily with online content.


Minimum age for social media platforms

To help parents manage their children’s digital presence and ensure age-appropriate online engagement, Parliament proposes a harmonised EU digital minimum age of 16 for access to social media, video-sharing platforms and AI companions, while allowing 13- to 16-year-olds access with parental consent.

Expressing support for the Commission’s work to develop an EU age verification app and the European digital identity (eID) wallet, MEPs insist that age assurance systems must be accurate and preserve minors’ privacy. Such systems do not relieve platforms of their responsibility to ensure their products are safe and age-appropriate by design, they add.

To incentivise better compliance with the EU’s Digital Services Act (DSA) and other relevant laws, MEPs suggest senior managers could be made personally liable in cases of serious and persistent non-compliance, with particular respect to protection of minors and age verification.

[…]

According to the 2025 Eurobarometer, over 90% of Europeans believe action to protect children online is a matter of urgency, not least in relation to social media’s negative impact on mental health (93%), cyberbullying (92%) and the need for effective ways to restrict access to age-inappropriate content (92%).

Member states are starting to take action and responding with measures such as age limits and verification systems.

Source: Children should be at least 16 to access social media, say MEPs | News | European Parliament

Expect to see mandatory surveillance on social media (whatever they define that to be) soon, as it is clearly “risky”.

The problem is real, but age verification is not the way to solve the problem. Rather, it will make it much, much worse as well as adding new problems entirely.

See also: https://www.linkielist.com/?s=age+verification&submit=Search

See also: European Council decides to implement Mass Surveillance and Age Verification through law protecting children from online abuse

Welcome to a new fascist, thought-controlled Europe, heralded by Denmark.

Chat Control: EU lawmakers finally agree on the “voluntary” scanning of your private chats

[…] The EU Council has finally reached an agreement on the controversial Child Sexual Abuse Regulation (CSAR) after more than three years of failed attempts.

Nicknamed Chat Control by its critics, the agreement has kept cryptographers, technologists, encrypted service providers, and privacy experts alike in turmoil since its inception.

Presidency after presidency, the bill has taken many shapes. But its most controversial feature is an obligation for all messaging service providers operating in the EU – including those using end-to-end-encryption – to scan their users’ private chats on the lookout for child sexual abuse material (CSAM).

At the beginning of the month, the Danish Presidency decided to change its approach with a new compromise text that makes the chat scanning voluntary instead. That turned out to be a winning move, with the proposal managing to reach an agreement in the Council on Wednesday, November 26, 2025.

Privacy experts are unlikely to celebrate, though. The decision came a few days after a group of scientists wrote yet another open letter warning that the latest text still “brings high risks to society.” That’s after other privacy experts deemed the new proposal a “political deception” rather than an actual fix.

The EU Council is now preparing to start negotiations with the European Parliament, hoping to agree on the final terms of the regulation.

What we know about the Council agreement

As per the EU Council announcement, the new law imposes a series of obligations on digital companies. Under the new rules, online service providers will be required to assess how their platforms could be misused and, based on the results, may need to “implement mitigating measures to counter that risk,” the Council notes.

Source: Chat Control: EU lawmakers finally agree on the voluntary scanning of your private chats | TechRadar

A “risk mitigation obligation” can be stretched to justify anything, and to oblige spying through whatever services the EU says pose a “risk”.

Considering the whole proposal was shot down several times in past years and even last month, using a back-door rush to push this through is not how a democracy is supposed to function at all. And this is how fascism grips its iron claws. What is going on in Denmark?

European Council decides to implement Mass Surveillance and Age Verification through law protecting children from online abuse

[…]

Under the new rules, online service providers will be required to assess the risk that their services could be misused for the dissemination of child sexual abuse material or for the solicitation of children. On the basis of this assessment, they will have to implement mitigating measures to counter that risk. Such measures could include making available tools that enable users to report online child sexual abuse, to control what content about them is shared with others and to put in place default privacy settings for children.

Member states will designate national authorities (‘coordinating and other competent authorities’) responsible for assessing these risk assessments and mitigating measures, with the possibility of obliging providers to carry out mitigating measures.

[…]

The Council also wants to make permanent a currently temporary measure that allows companies to – voluntarily – scan their services for child sexual abuse. At present, providers of messaging services, for instance, may voluntarily check content shared on their platforms for online child sexual abuse material.

[Note here: if it is deemed “risky” then the voluntary part is scrubbed and it becomes mandatory. Anything can be called “risky” very easily (just look at the data slurping that goes on in Terms of Services through the text “improving our product”).]

The new law provides for the setting up of a new EU agency, the EU Centre on Child Sexual Abuse, to support the implementation of the regulation.

The EU Centre will assess and process the information supplied by the online providers about child sexual abuse material identified on services, and will create, maintain and operate a database for reports submitted to it by providers. It will further support the national authorities in assessing the risk that services could be used for spreading child sexual abuse material.

The Centre is also responsible for sharing companies’ information with Europol and national law enforcement bodies. Furthermore, it will establish a database of child sexual abuse indicators, which companies can use for their voluntary activities.

Source: Child sexual abuse: Council reaches position on law protecting children from online abuse – Consilium

The article does not mention how you can find out if someone is a child: that is age verification. Which comes with huge rafts of problems, such as censorship (there goes the LGBTQ crowd!), hacks stealing all the government IDs used for verification (see Discord), and of course the ways people find to circumvent age checks (VPNs, whose use spikes wherever age gates appear, or even meme pictures of Donald Trump fooling face scanners), which make people behave in less predictable ways, thus harming the very kids this is supposed to protect.

Of course, this law has been shot down several times in the past three years by the EU, but that didn’t stop Denmark from finding a way to implement it nonetheless, in a back-door, shotgun kind of way.

Age Verification, Estimation, Assurance, Oh My! A Guide To The Terminology

If you’ve been following the wave of age-gating laws sweeping across the country and the globe, you’ve probably noticed that lawmakers, tech companies, and advocates all seem to be using different terms for what sounds like the same thing. Age verification, age assurance, age estimation, age gating—they get thrown around interchangeably, but they technically mean different things. And those differences matter a lot when we’re talking about your rights, your privacy, your data, and who gets to access information online.

[click the source link below to read the different definitions – ed]

Why This Confusion Matters

Politicians and tech companies love using these terms interchangeably because it obscures what they’re actually proposing. A law that requires “age assurance” sounds reasonable and moderate. But if that law defines age assurance as requiring government ID verification, it’s not moderate at all—it’s mass surveillance. Similarly, when Instagram says it’s using “age estimation” to protect teens, that sounds privacy-friendly. But when their estimation fails and forces you to upload your driver’s license instead, the privacy promise evaporates.

Here’s the uncomfortable truth: most lawmakers writing these bills have no idea how any of this technology actually works. They don’t know that age estimation systems routinely fail for people of color, trans individuals, and people with disabilities. They don’t know that verification systems have error rates. They don’t even seem to understand that the terms they’re using mean different things. The fact that their terminology is all over the place—using “age assurance,” “age verification,” and “age estimation” interchangeably—makes this ignorance painfully clear, and leaves the onus on platforms to choose whichever option best insulates them from liability.

Language matters because it shapes how we think about these systems. “Assurance” sounds gentle. “Verification” sounds official. “Estimation” sounds technical and impersonal, and also admits its inherent imprecision. But they all involve collecting your data and create a metaphysical age gate to the internet. The terminology is deliberately confusing, but the stakes are clear: it’s your privacy, your data, and your ability to access the internet without constant identity checks. Don’t let fuzzy language disguise what these systems really do.

Republished from EFF’s Deeplinks blog.

Source: Age Verification, Estimation, Assurance, Oh My! A Guide To The Terminology | Techdirt

Danes manage to bypass democracy to implement mass EU surveillance, say it is “voluntary”

The EU states agree on a common position on chat control. Internet services will be allowed to scan communications voluntarily, but will not be obliged [*cough – see bold and end of document: Ed*] to do so. We are publishing the classified negotiation minutes and the bill. After the formal decision, the trilogue negotiations begin.

18.11.2025 at 14:03 – Andre Meister – in Surveillance

[Image: Danish Minister of Justice Hummelgaard at a lectern, flags behind him. – CC-BY-NC-ND 4.0 Danish Presidency]

The EU states have agreed on a common position on chat control. We are publishing the bill.

Last week, the Council working group discussed the law. We are once again publishing the classified minutes of the meeting.

Tomorrow, the Permanent Representatives intend to formally adopt the position.

Update 19.10.: A Council spokesperson tells us, “The agenda item has been postponed until next week.”

Three years of dispute

For three and a half years, the EU institutions have been arguing over chat control. The Commission wants to oblige internet services to search their users’ content, without cause, for evidence of criminal offences and to report suspected material to the authorities.

Parliament calls this mass surveillance and demands that only unencrypted content from suspects be scanned.

A majority of EU countries want mandatory chat control. However, a blocking minority rejects this. Now the Council has agreed on a compromise: internet services are not required to carry out chat control, but may do so voluntarily.

Absolute red lines

The Danish Presidency wants to bring the draft law through the Council “as soon as possible” so that the trilogue negotiations can be started in a timely manner. The feedback from the states should be limited to “absolute red lines”.

The majority of states “supported the compromise proposal.” At least 15 spoke out in favour, including Germany and France.

Germany “welcomed both the deletion of the mandatory measures and the permanent anchoring of voluntary measures.”

Italy is also skeptical of voluntary chat control. “We fear that the instrument could also be extended to other crimes, so we have difficulty supporting the proposal.” Politicians have already called for chat control to be extended to other content.

Absolute minimum consensus

Other states called the compromise “an absolute minimum consensus.” They “actually wanted more – especially in the sense of obligations.” Some states “were clearly disappointed by the deletions made.”

Spain, in particular, “still considered mandatory measures to be necessary; unfortunately, a comprehensive agreement on this was not possible.” Hungary, too, “saw voluntariness alone as too little.”

Spain, Hungary and Bulgaria proposed “an obligation for providers to detect at least in unencrypted areas.” The Danish Presidency “described the proposal as ambitious, but did not take it up, to avoid further discussion.”

Denmark explicitly pointed to the review clause, which “keeps open the possibility of detection orders at a later date.” Hungary stressed that “this possibility must also be used.”

No obligation

The Danish Presidency had publicly announced that the chat control should not be mandatory, but voluntary.

However, the formulated compromise proposal was contradictory. The Presidency had deleted the article on mandatory chat control, yet another article said services should also carry out voluntary measures.

Several states asked whether these formulations “could lead to a de facto obligation.” The Legal Service agreed: “The wording can be interpreted in both directions.” The Presidency of the Council “clarified that the text contains only a risk mitigation obligation, but no detection obligation.”

The day after the meeting, the Presidency of the Council sent out what is likely the Council’s final draft law. It states explicitly: “No provision of this Regulation shall be interpreted as imposing detection obligations on providers.”

Damage and abuse

Mandatory chat control is not the only issue in the planned law. Voluntary chat control is also controversial: the European Commission has been unable to prove its proportionality. Many oppose voluntary chat control, including the EU Commission, the European Data Protection Supervisor and the German Data Protection Supervisor.

A number of scientists are critical of the compromise proposal and do not consider even voluntary chat control appropriate: “Their benefit is not proven, while the potential for harm and abuse is enormous.”

The law also calls for mandatory age checks. The scientists criticize that age checks “bring with them an inherent and disproportionate risk of serious data breaches and discrimination without guaranteeing their effectiveness.” The Federal Data Protection Commissioner also fears a “large-scale abolition of anonymity on the Internet.”

Now comes the trilogue

The EU countries will not discuss these points further. The Danish Presidency “reaffirmed its commitment to the compromise proposal without the Spanish proposals.”

The Permanent Representatives of the EU states will meet next week. In December, the justice and interior ministers will meet. These two bodies are to adopt the bill as the official position of the Council.

This is followed by the trilogue, in which the Commission, Parliament, and the Council negotiate a compromise between their three separate bills.

[…]

A “risk mitigation obligation” can be stretched to justify anything, and to oblige spying through whatever services the EU says pose a “risk”.

Source: Translated from EU states agree on voluntary chat control

Considering the whole proposal was shot down several times in past years and even last month, using a back-door rush to push this through is not how a democracy is supposed to function at all. And this is how fascism grips its iron claws. What is going on in Denmark?

For more information on the history of Chat Control click here

EU proposes doing away with constant cookie requests by letting you set the “No” in your browser settings

People will no longer be bombarded by constant requests to accept or reject “cookies” when browsing the internet, under proposed changes to the European Union’s strict data privacy laws.

The pop-up prompts asking internet users to consent to cookies when they visit a website are widely seen as a nuisance, undermining the original privacy intentions of the digital rules.

[I don’t think this undermines anything – cookie consent got rid of a LOT of spying, and everyone now just automatically clicks NO or uses addons to do it (well, if you are using Firefox as a browser). The original purpose – stopping companies from spying – has been achieved.]

Brussels officials have now tabled changes that would allow people to accept or reject cookies for a six-month period, and potentially set their internet browser to automatically opt-in or out, to avoid being repeatedly asked whether they consent to websites remembering information about their past visits.

Cookies allow websites to keep track of a user’s previous activity, allowing sites to pull up items added to an online shopping cart that were not purchased, or remember whether someone had logged in to an account on the site before, as well as target advertisements.
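
The browser-level signal Brussels is contemplating already has a close cousin in the wild: Global Privacy Control, which participating browsers send as a `Sec-GPC: 1` request header. A server that honors it can skip the banner entirely. A minimal sketch (Flask here is an assumption; any framework that exposes request headers works the same way):

```python
# Minimal sketch of honoring a browser-level opt-out signal (Global Privacy
# Control's Sec-GPC header) instead of rendering a consent banner.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    if request.headers.get("Sec-GPC") == "1":
        # The browser already said no: skip tracking and the banner entirely.
        return "Welcome. Tracking disabled per your browser setting."
    return "Welcome. [consent banner would render here]"
```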

[…]

Source: EU proposes doing away with constant internet ‘cookies’ requests – The Irish Times

Switzerland plans surveillance worse than US

In Switzerland, a country known for its love for secrecy, particularly when it comes to banking, the tides have turned: An update to the VÜPF surveillance law directly targets privacy and anonymity services such as VPNs as well as encrypted chat apps and email providers. Right now the law is still under discussion in the Swiss Bundesrat.

[…]

While Swiss privacy has been overhyped, legislative rules in Switzerland are currently decent and comparable to German data protection laws. This update to the VÜPF, which could come into force by 2026, would change data protection legislation in Switzerland dramatically.

Why the update is dangerous

If the law passes in its current form,

  • Swiss email and VPN providers with just 5,000 users are forced to log IP addresses and retain the data for six months – while data retention in Germany is illegal for email providers.
  • An ID or driver’s license, and possibly a phone number, would be required for the registration process of various services – rendering anonymous usage impossible.
  • Data must be delivered upon request in plain text, meaning providers must be able to decrypt user data on their end (except for end-to-end encrypted messages exchanged between users).

What is more, the law is not being introduced by or via the Parliament; instead the Swiss government, the Federal Council, and the Federal Department of Justice and Police (FDJP) want to massively expand internet surveillance by updating the VÜPF – without Parliament having a say. This comes as a shock in a country proud of its direct democracy, with regular popular votes on all kinds of laws. However, in 2016 the Swiss actually voted for more surveillance, so direct democracy might not help here.

History of surveillance in Switzerland

In 2016, the Swiss Parliament updated its data retention law BÜPF to enforce data retention for all communication data (post, email, phone, text messages, IP addresses). In 2018, the revision of the VÜPF translated this into administrative obligations for ISPs, email providers, and others, with exceptions depending on the size of the provider and whether it was classified as a telecommunications service provider or a communications service.

As a result, services such as Threema and ProtonMail were exempt from some of the obligations that providers such as Swisscom, Salt, and Sunrise had to comply with – even though the Swiss government would have liked to classify them as quasi network operators and telecommunications providers as well. The currently discussed update of the VÜPF seems to directly target smaller providers as well as providers of anonymous services and VPNs.

The Swiss surveillance state has always sought a lot of power, and had to be called back by the Federal Supreme Court in the past to put surveillance on a sound legal basis.

But now, article 50a of the VÜPF reform mandates that providers must be able to remove “the encryption provided by them or on their behalf”, basically asking for backdoor access to encryption. However, end-to-end encrypted messages exchanged between users do not fall under this decryption obligation. Yet, even Swiss email provider Proton Mail says to Der Bund that “Swiss surveillance would be much stricter than in the USA and the EU, and Switzerland would lose its competitiveness as a business location.”

Because of this upcoming legal change in Switzerland, Proton has started to move its servers from Switzerland to the EU.

Source: Switzerland plans surveillance worse than US | Tuta

Roblox begins asking tens of millions of children to send it a selfie, for “age verification”.

Roblox is starting to roll out the mandatory age checks that will require all of its users to submit an ID or scan their face in order to access the platform’s chat features. The updated policy, which the company announced earlier this year, will be enforced first in Australia, New Zealand and the Netherlands and will expand to all other markets by early next year.

The company also detailed a new “age-based chat” system, which will limit users’ ability to interact with people outside of their age group. After verifying or estimating a user’s age, Roblox will assign them to an age group ranging from 9 years and younger to 21 years and older (there are six total age groups). Teens and children will then be limited from connecting with people that aren’t in or close to their estimated age group in in-game chats.
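
As a sketch of how such a banding policy works: the interior group boundaries below are assumptions for illustration, since Roblox has only publicly named the two endpoint bands (9 and younger, 21 and older) and the total count of six.

```python
# Sketch of the age-banding policy as described: six groups, chat allowed only
# within the same or an adjacent band. Interior boundaries are assumptions.
AGE_GROUPS = ["<=9", "10-12", "13-15", "16-17", "18-20", "21+"]  # boundaries assumed

def group_index(age: int) -> int:
    bounds = [9, 12, 15, 17, 20]    # upper bound of each band except the last
    for i, upper in enumerate(bounds):
        if age <= upper:
            return i
    return len(AGE_GROUPS) - 1      # 21 and older

def can_chat(age_a: int, age_b: int) -> bool:
    # "in or close to their estimated age group": same band or one band apart
    return abs(group_index(age_a) - group_index(age_b)) <= 1

print(can_chat(11, 13))  # True  (adjacent bands, under the assumed boundaries)
print(can_chat(9, 21))   # False (five bands apart)
```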

Unlike most social media apps which have a minimum age of 13, Roblox permits much younger children to use its platform. Since most children and many teens don’t have IDs, the company uses “age estimation” tech provided by identity company Persona. The checks, which use video selfies, are conducted within Roblox’s app and the company says that images of users’ faces are immediately deleted after completing the process.

[…]

Source: Roblox begins asking tens of millions of children to verify their age with a selfie

Deleted by Roblox itself, but also by Persona? Pretty scary: 1. having a database of all these kiddies’ faces and their online personas, ways of talking and typing, and 2. even if the data is deleted, it could be intercepted as it is sent to Roblox and on to the verifier.

Google is collecting troves of data from downgraded Nest thermostats

Google officially turned off remote control functionality for early Nest Learning Thermostats last month, but it hasn’t stopped collecting a stream of data from these downgraded devices. After digging into the backend, security researcher Cody Kociemba found that the first- and second-generation Nest Learning Thermostats are still sending Google information about manual temperature changes, whether a person is present in the room, if sunlight is hitting the device, and more.

[…]

After cloning Google’s API to create this custom software, he started receiving a trove of logs from customer devices – a stream he has since turned off. “On these devices, while they [Google] turned off access to remotely control them, they did leave in the ability for the devices to upload logs. And the logs are pretty extensive,” Kociemba tells The Verge.
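
What Kociemba describes is worth dwelling on: once a server answers at the endpoints the thermostats expect, the devices simply deliver their logs to it. Below is a hypothetical sketch of such a log sink; the path and payload shape are assumptions for illustration, not Google’s actual API.

    # Hypothetical log sink standing in for a cloned device API.
    # The path and payload fields are assumptions, not Google's real API.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    import json

    class LogSink(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            try:
                payload = json.loads(body)
            except json.JSONDecodeError:
                payload = {"raw": body[:200].decode("utf-8", "replace")}
            # A real clone would persist these; here we just print arrivals.
            print(f"log upload to {self.path}: {payload}")
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), LogSink).serve_forever()

The underlying point: a device hard-coded to upload logs will send them to whichever server answers, and nothing in that flow requires the receiver to be Google.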

[…]

Google is still getting all the information collected by Nest Learning Thermostats, including data measured by their sensors, such as temperature, humidity, ambient light, and motion. “I was under the impression that the Google connection would be severed along with the remote functionality, however that connection is not severed, and instead is a one-way street,” Kociemba says.

[…]

Source: Google is collecting troves of data from downgraded Nest thermostats | The Verge

Unremovable Spyware on Samsung Devices Comes Pre-installed on Galaxy Series Devices

The software in question, AppCloud, developed by the mobile analytics firm ironSource, has been embedded in devices sold primarily in the Middle East and North Africa (MENA) region.

Security researchers and privacy advocates warn that it quietly collects sensitive user data, fueling fears of surveillance in politically volatile areas.

AppCloud tracks users’ locations, app usage patterns, and device information without seeking ongoing consent after initial setup. Even more concerning, attempts to uninstall it often fail due to its deep integration into Samsung’s One UI operating system.

Reports indicate the app reactivates automatically following software updates or factory resets, making it virtually unremovable for average users. This has sparked outrage among consumers in countries such as Egypt, Saudi Arabia, and the UAE, where affordable Galaxy models are popular entry points into Android.
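There is a commonly reported partial workaround: Android’s package manager can deactivate a system app for the current user over ADB, even when normal uninstall fails. A sketch using Python’s subprocess module follows; the package name is the one widely reported for AppCloud and is an assumption here, so verify it on your own device first.

    # Deactivate a pre-installed package for user 0 via ADB.
    # "pm disable-user" is a standard Android command; the package name
    # below is the one widely reported for AppCloud (unverified here).
    import subprocess

    APPCLOUD_PKG = "com.aura.oobe.samsung"  # assumed package name

    def adb(*args: str) -> str:
        result = subprocess.run(["adb", *args], capture_output=True,
                                text=True, check=True)
        return result.stdout

    if __name__ == "__main__":
        # List candidate packages first, then disable for the current user.
        print(adb("shell", "pm", "list", "packages", "aura"))
        print(adb("shell", "pm", "disable-user", "--user", "0", APPCLOUD_PKG))

Consistent with the reports above, a software update or factory reset may re-enable the package, so this is mitigation rather than removal.
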

The issue came to light through investigations by SMEX, a Lebanon-based digital rights group focused on MENA privacy. In a recent report, SMEX highlighted how AppCloud’s persistence could enable unauthorized third-party data harvesting, posing significant risks in regions with histories of government overreach.

“This isn’t just bloatware, it’s a surveillance enabler baked into the hardware,” said a SMEX spokesperson. The group called on Samsung to issue a global patch and disclose the full scope of data shared with ironSource.

[…]

Source: Unremovable Spyware on Samsung Devices Comes Pre-installed on Galaxy Series Devices

Denmark rises again, finding another way to try to introduce a 100% surveillance state in the EU after public backlash stopped the last attempt at chat control. Send emails to your MEPs easily!

Thanks to public pressure, the Danish Presidency has been forced to revise its text, explicitly stating that any detection obligations are voluntary. While much better, the text continues to both (a) effectively outlaw anonymous communication through mandatory age verification; and (b) include planned voluntary mass scanning. The Council is expected to formally adopt its position on Chat Control on the 18th or 19th of November. Trilogue with the European Parliament will commence soon after.

The EU (still) wants to scan your private messages and photos

The “Chat Control” proposal would mandate scanning of all private digital communications, including encrypted messages and photos. This threatens fundamental privacy rights and digital security for all EU citizens.

You Will Be Impacted

Every photo, every message, every file you send will be automatically scanned—without your consent or suspicion. This is not about catching criminals. It is mass surveillance imposed on all 450 million citizens of the European Union.

Source: Fight Chat Control – Protect Digital Privacy in the EU

The linked site allows you to send an email to your representatives with just a few clicks. Take the time to ensure they understand that people have a voice!

“This is a political deception” − Denmark gives New Chat Control another shot. Mass surveillance for all from behind closed doors.

It’s official, a revised version of the CSAM scanning proposal is back on the EU lawmakers’ table − and is keeping privacy experts worried.

The Law Enforcement Working Party met again this morning (November 12) in the EU Council to discuss what critics have dubbed the Chat Control bill.

This follows a meeting the group held on November 5, and comes as the Danish Presidency put forward a new compromise after withdrawing mandatory chat scanning.

As reported by Netzpolitik, the latest Child Sexual Abuse Regulation (CSAR) proposal was received with broad support during the November 5 meeting, “without any dissenting votes” and with no further changes requested.

The new text, which removes all provisions on detection obligations from the bill and makes CSAM scanning voluntary, seems to be the winning path toward finally reaching an agreement after more than three years of trying.

Privacy experts and technologists aren’t quite on board, though, with long-standing Chat Control critic and digital rights jurist Patrick Breyer deeming the proposal “a political deception of the highest order.”

Chat Control − what’s changing and what are the risks

As per the latest version of the text, messaging service providers won’t be forced to scan all URLs, pictures, and videos shared by users, but may instead choose to perform voluntary CSAM scanning.

There’s a catch, though. Article 4 will include a possible “mitigation measure” that could be applied to high-risk services to require them to take “all appropriate risk mitigation measures.”

According to Breyer, such a loophole could make the removal of detection obligations “worthless” by negating their voluntary nature. He said: “Even client-side scanning (CSS) on our smartphones could soon become mandatory – the end of secure encryption.”
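
For readers unfamiliar with the term: client-side scanning means the matching happens on your own device, before encryption, typically by comparing a perceptual fingerprint of each image against a list of known-illegal hashes. Below is a toy sketch of that matching step, assuming the Pillow imaging library is installed; real deployments use far more robust hashes, so this illustrates the general technique, not any proposed EU system.

    # Toy client-side scanning: 8x8 average-hash an image on-device and
    # compare it to a blocklist before the image would be encrypted/sent.
    # Illustrative only; real systems use far more robust hash schemes.
    from PIL import Image

    def average_hash(path: str) -> int:
        img = Image.open(path).convert("L").resize((8, 8))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (p > avg)
        return bits

    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    BLOCKLIST: set[int] = set()  # hashes of known material (empty here)

    def flagged_before_send(path: str, threshold: int = 5) -> bool:
        h = average_hash(path)
        return any(hamming(h, bad) <= threshold for bad in BLOCKLIST)

Breyer’s objection follows directly from this design: because the check runs before encryption, end-to-end encryption no longer guarantees that only the recipient ever evaluates the content.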

The risk of breaking encryption (the technology that security software like the best VPNs, Signal, and WhatsApp use to secure our private communications) has been the strongest argument against the proposal so far.

Breyer also warns that the new compromise goes further than the discarded proposal, expanding from AI-powered monitoring of shared multimedia to the scanning of private chat texts and metadata as well.

“The public is being played for fools,” warns Breyer. “Following loud public protests, several member states, including Germany, the Netherlands, Poland, and Austria, said ‘No’ to indiscriminate Chat Control. Now it’s coming back through the back door.”

Breyer is far from being the only one expressing concerns. German-based encrypted email provider, Tuta, is also raising the alarm.

“Hummelgaard doesn’t understand that no means no,” the provider writes on X, referring to Danish Justice Minister Peter Hummelgaard.

As for the next steps, we’ll now have to wait and see what comes out of today’s meeting.

Source: “This is a political deception” − New Chat Control convinces lawmakers, but not privacy experts yet | TechRadar