Nearby Glasses Warns You When a Glasshole is Nearby

The app, called Nearby Glasses, has a single purpose: look for smart glasses nearby and warn you.

Get It On Google Play

This app notifies you when smart glasses are nearby. It relies on the company identifiers embedded in the Bluetooth data these devices broadcast, so false positives are likely (e.g. from VR headsets). Please proceed with caution when approaching a person nearby wearing glasses: they might just be regular glasses, despite this app’s warning.

The app’s author, Yves Jeanrenaud, takes no liability whatsoever for this app or its functionality. Use at your own risk. By technical design, detecting Bluetooth LE devices may simply not work as expected at times. I am not a trained developer; this was all written in my free time, with knowledge I taught myself.
False positives are likely. This means the app Nearby Glasses may notify you of smart glasses nearby when there is in fact a VR headset from the same manufacturer, or another product from that company’s lineup. It may also miss smart glasses that are nearby. Again: I am not a professional developer.
However, this app is free and its source is available (though it is not considered FOSS due to the non-commercial restriction); you may review the code, change it, and re-use it (under the license).
The app Nearby Glasses does not store or collect any information about you or your phone. There is no telemetry, no ads, and no other nuisance. If you install the app via the Play Store, Google may know something about you and collect some stats, but the app itself does not.
If you choose to store (export) the logfile, where that data goes is entirely up to you and your liability. The logs are recorded only locally and are not automatically shared with anyone. They contain little sensitive data; in fact, only the manufacturer ID codes of the BLE devices encountered.

Use with extreme caution! As stated before: there is no guarantee that detected smart glasses are really nearby. It might be another device that looks technically similar to smart glasses (at the BLE advertising level).
Please do not act rashly. Think before you act upon any messages (not only from this app).

Why?

  • Because I consider smart glasses an intolerable, consent-neglecting, horrible piece of tech that is already used to produce tons of equally, truly disgusting ‘content’. 1, 2
  • Some smart glasses feature a small LED signifying that a recording is in progress. But this is easily disabled, while manufacturers claim to prevent that and take no responsibility at all (as tech has tended to do for decades now). 3
  • Smart glasses have already been used for instant facial recognition 4, and reportedly will soon support it out of the box 5. This puts a lot of people in danger.
  • I hope this app is useful for someone.

How?

  • It’s a simple, rather heuristic approach. Because BLE uses randomised MAC addresses, and neither the OSSIDs nor the UUIDs of the service announcements are stable, you can’t just scan for the Bluetooth beacons directly. And to make things even more dire, some manufacturers, Meta for instance, use proprietary Bluetooth services with non-persistent UUIDs, so for now we can only rely on the communicated device names.
  • The currently most viable approach comes from the Bluetooth SIG assigned numbers repo. Following this, the manufacturer’s company name shows up as a numeric code in the advertising (ADV) packet header of BLE beacons.
  • This is what a BLE advertising frame looks like:
Frame 1: Advertising (ADV_IND)
Time:  0.591232 s
Address: C4:7C:8D:1E:2B:3F (Random Static)
RSSI: -58 dBm

Flags:
  02 01 06
    Flags: LE General Discoverable Mode, BR/EDR Not Supported

Manufacturer Specific Data:
  Length: 0x10
  Type:   Manufacturer Specific Data (0xFF)
  Company ID: 0x058E (Meta Platforms Technologies, LLC)
  Data: 4D 45 54 41 5F 52 42 5F 47 4C 41 53 53

Service UUIDs:
  Complete List of 16-bit Service UUIDs
  0xFEAA
  • According to the Bluetooth SIG assigned numbers repo, we may use these company IDs:
    • 0x01AB for Meta Platforms, Inc. (formerly Facebook)
    • 0x058E for Meta Platforms Technologies, LLC
    • 0x0D53 for Luxottica Group S.p.A (who manufactures the Meta Ray-Bans)
    • 0x03C2 for Snapchat, Inc., which makes the Snap Spectacles
  • These company IDs are immutable and mandatory. Of course, Meta and other manufacturers also have other products that come with Bluetooth and therefore carry their ID, e.g. VR headsets. Using these company ID codes for the app’s scanning process is therefore prone to false positives. But if you can’t see someone wearing an Oculus Rift around you, and there are no buildings where they could hide, chances are good that it’s smart glasses instead.
  • During pairing, smart glasses usually emit their product name, so we can scan for that, too. But it’s rare that we’ll see that in the field: people who intend to use smart glasses in bars, pubs, on the street, and elsewhere usually pair them beforehand.
  • When the app recognises a Bluetooth Low Energy (BLE) device with sufficient signal strength (see RSSI below), it pushes an alert message. This should help you to act accordingly.
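The scanning heuristic above can be sketched in a few lines. This is not the app’s actual code; it is a minimal Python illustration of walking the AD structures of a BLE advertising payload and pulling the little-endian company ID out of Manufacturer Specific Data (AD type 0xFF), assuming the standard AD structure layout where the length byte counts the type byte plus payload. The function name and the example payload (rebuilt from the frame shown above) are illustrative.

```python
import struct

# Company IDs from the Bluetooth SIG assigned-numbers list, as cited above.
SMART_GLASSES_COMPANY_IDS = {
    0x01AB: "Meta Platforms, Inc.",
    0x058E: "Meta Platforms Technologies, LLC",
    0x0D53: "Luxottica Group S.p.A.",
    0x03C2: "Snapchat, Inc.",
}

def find_company_ids(adv_payload: bytes) -> list[int]:
    """Walk the AD structures (length, type, data) of a BLE advertising
    payload and return the little-endian company IDs found in any
    Manufacturer Specific Data entries (AD type 0xFF)."""
    ids = []
    i = 0
    while i < len(adv_payload):
        length = adv_payload[i]
        if length == 0 or i + 1 + length > len(adv_payload):
            break  # zero padding or truncated structure: stop parsing
        ad_type = adv_payload[i + 1]
        data = adv_payload[i + 2 : i + 1 + length]
        if ad_type == 0xFF and len(data) >= 2:
            # First two bytes of the data are the company ID, little-endian.
            (company_id,) = struct.unpack_from("<H", data, 0)
            ids.append(company_id)
        i += 1 + length
    return ids

# The example frame above: a Flags structure (02 01 06), then Manufacturer
# Specific Data with company ID 0x058E (on the wire: 8E 05) and 13 data bytes.
payload = bytes.fromhex("020106") + bytes([0x10, 0xFF, 0x8E, 0x05]) + b"META_RB_GLASS"
for cid in find_company_ids(payload):
    if cid in SMART_GLASSES_COMPANY_IDS:
        print(f"possible smart glasses: 0x{cid:04X} ({SMART_GLASSES_COMPANY_IDS[cid]})")
# prints: possible smart glasses: 0x058E (Meta Platforms Technologies, LLC)
```

In the real app this check would sit inside an Android BLE scan callback, combined with an RSSI threshold so that only devices with sufficient signal strength trigger an alert.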

[…]

Source: GitHub repo

Age verification checks are now in force in the UK because of the Online Safety Act, but with the Discord fallout, it seems like one bad idea after another

Currently, I can’t check my Bluesky direct messages until I’ve allowed the Epic Games-owned KWS to look at either my bank card, my ID, or my wizened visage. As I’m based in the UK, it’s not just Bluesky I’ve got to worry about either, with similar verification processes now present on Reddit, Discord, and even my partner’s Xbox.

This is all due to the Online Safety Act, which came into effect in the UK last year. For many, these age checks are an annoyance at best—but they also represent something that will have ramifications far beyond the British Isles. The UK’s Act was designed in part to ensure children in the UK could not easily access “harmful content.” This is a broad term that includes but is not limited to pornography, content that promotes “self-harm, eating disorders, or suicide,” and “bullying”.

To comply with the act and differentiate children from the adults, many platforms have opted for age-gates like the one I’m encountering on Bluesky. Almost 70% of Brits surveyed shortly after the Online Safety Act came into effect said they supported it…though 64% didn’t think it would be all that effective. Indeed, I could log into a VPN to get past the UK-based Bluesky block—though unfortunately for me, I am stubborn, lazy, and cheap (apologies if you’ve been trying to get ahold of me).

Besides all that, I’m not especially keen to hand over my personal data to a third-party age verification vendor such as KWS for data privacy reasons. As recently as October, a Discord security breach may have leaked 70,000 age-verification ID photos. Discord’s primary age-verification partner, K-ID, was keen to clarify that it was not involved.

As Jacob has previously outlined, there are better ways to implement age checks. As it stands, though, I’m not naive enough to think the data I keep elsewhere is in hands that are any safer. However, not submitting to an age assurance check makes for one less point of failure from which my likeness or even my official documents can leak out.

Discord first announced it would be using Brits as age assurance guinea pigs back in April 2025, but it turns out that may have all been prologue. Just in case you’ve been napping under a cool mossy rock for the last while, the social platform caused quite a stir this month when it announced it would be rolling out age verifying facial scans and ID checks globally this March. The case can be made that it is ‘complying in advance,’ as the UK’s approach to online safety potentially serves as a preview for PC gamers further afield.

On the one hand, yeah, I’d rather children growing up today didn’t see all the things I saw thanks to having unfettered internet access throughout the early oughts.

Why not? I survived rotten.com and goatse – but then again, the internet didn’t have much in the way of fake news, hate speech or echo chambers…

I’d also rather young’uns now didn’t have to experience all the harassment I experienced at the hands of my own peers, newly empowered by that unfettered internet access.

On the other hand, the internet answered a lot of questions I was absolutely not going to ask my parents; when I see a vague term like “harmful content” I do have to wonder what genuinely educational resources on the wider internet—say, regarding art history or personal health—might end up age-gated because someone somewhere has decided they’re tantamount to ‘pornography.’

I’m only just the other side of 30, but Section 28 was still in effect for some of my school years. For those who don’t know, Section 28 was a law that prevented schools in England, Scotland, and Wales from doing anything that could be interpreted as “intentionally [promoting] homosexuality or [publishing] material with the intention of promoting homosexuality”. So, until the law was repealed in the early 2000s, a lot of schools simply pretended LGBTQIA+ folks didn’t exist. The internet, for all of its faults, helped to fill that deafening silence for me.

A screenshot of a 3D model being used to pass the Discord age verification system. (Image credit: PromptPirate on GitHub)

Even so, I remember there being content blocks back in my day, too, and I know I found more than a few ways around those. Indeed, if we take just Discord today, our James has found not one but two different ways to fool its face scans—though the platform may already be formulating a counter to these workarounds.

Shortly after issuing assurances that not all users will even have to undergo an age check, a since-edited support article revealed that some UK users “may be part of an experiment where your information will be processed by an age-assurance vendor, Persona.” Amid reports of folks easily fooling its primary third-party vendor’s age verification checks, Discord may have been seeking to diversify its defences.

Persona’s investors include Peter Thiel, co-founder of ICE’s premier surveillance provider, Palantir. Though Persona and Palantir are two totally separate companies that do not share either data or operations, that’s still a pretty grimy connection. Not least of all because earlier this week, the US Department of Homeland Security reportedly subpoenaed a number of major online platforms—including Discord, Reddit, Google, and Meta—in order to obtain the personal details of accountholders who had been critical of ICE or identified the locations of its agents. We don’t yet know if Discord complied, though we have reached out for comment.


There is an even worse wrinkle in the Discord-Persona ‘experiment’: while Discord had previously said that data like age verification face scans would only be stored and processed on users’ own devices, those who ended up part of the Persona experiment may have their information “temporarily stored for up to 7 days, then deleted.”

Indeed, some security researchers are already claiming to have “found a Persona frontend exposed to the open internet on a US government-authorized server.”

All of that said, Persona is not part of Discord’s long-term strategy, with the platform telling Kotaku earlier this week that its dealings with the vendor were part of a “limited test” that has since been concluded. That leaves K-id’s on-device processing in effect, but even that doesn’t necessarily end the privacy nightmare. Data breaches usually leave platforms scrambling for user good will, but Discord seems all too happy to keep walking into rakes.

One could jump ship and shop around for a free Discord alternative as I recently did, but all of the platforms I tested will likely have to implement some sort of age assurance check if they haven’t already in order to continue serving users based in the UK in the future. That doesn’t mean I’ll be letting them scan my face any time soon; I may have to deploy Norman Reedus and his funky foetus before long as third-party age verification vendors have done little to earn my trust or a gander at my actual face.

Source: Age verification checks are now in force in the UK because of the Online Safety Act, but with the Discord fallout, it seems like one bad idea after another | PC Gamer

Discord’s First Age-Verification ‘Experiment’ Alarms Hackers: supplier Persona not only leaky, but also uses IDs for purposes unrelated to age checks

Last week, Discord users reported seeing prompts to submit personal information to Persona, a third-party age-verification service. As Discord commits to universal age-verification, the new measures have come under intense scrutiny after previous security failures. Now a trio of hacktivists say they’ve successfully breached Persona, getting a closer look at how the company uses submitted biometrics. They say their findings raise alarms beyond the possibility of leaks.

According to The Rage, Persona’s front-end security left a lot to be desired. Worse, however, were investigative findings that suggested Persona’s surveillance of the users whose data it collected was way more sprawling than originally believed.

“It was initially meant to be a passive recon investigation,” writes vmfunc, a cybersecurity researcher and one of the hackers, “that quickly turned into a rabbit hole deep dive into how commercial AI and federal government operations work together to violate our privacy every waking second.”

On top of finding it surprisingly easy to access data gathered by Persona, the research showed that faces and biometrics were not just being scanned for age verification, but flagged for suspicious behavior and bounced off watchlists as well. To some, particularly those who don’t worry about their face being deemed “suspicious,” this may not sound like an Orwellian level of intrusion, until you remember Persona’s full network.

Persona received $150 million in 2021 from the Founders Fund, a long-running tech investor group headed by Peter Thiel. Thiel’s main business, on top of palling around in Jeffrey Epstein’s emails and waiting for the antichrist, is Palantir, an intentionally ominously-named data brokering service that is currently peddling user information to support ICE raids. The findings of vmfunc and co’s research don’t directly tether Persona and Discord’s operations to Palantir or Thiel, but it wouldn’t be conspiratorial to point out that all this data seems to be funnelling along similar slopes.

Trust but verify

Persona has confirmed the breach, with CEO Rick Song corresponding with and even thanking the hackers for flagging the security exploit. This has not, however, tempered the hacktivists’ concerns about how user information is ultimately being used.

“Transparently, we are actively working on a couple of potential contracts which would be publicly visible if we move forward,” writes Christie Kim, chief operating officer at Persona, in an email regarding the security breach and speculation around Discord. “However, these engagements are strictly for workforce account security of government employees and do not include ICE or any agency within the Department of Homeland Security.”

After the alarm was initially raised about Persona, Discord claimed its work with the Thiel-backed firm was only temporary, and that it has no new contracts with it moving forward. It also promised user info was being wiped from servers within seven days of being gathered.

Source: Discord’s First Age-Verification ‘Experiment’ Alarms Hackers

Discord will require a face scan or ID for full access next month

The creeps staring into your bedroom brigade is winning and age verification is being normalised by a group of goons who really really want to know every poop you take. It’s a dangerous and insanely bad idea, but fortunately people are starting to wise up.

Discord announced on Monday that it’s rolling out age verification on its platform globally starting next month, when it will automatically set all users’ accounts to a “teen-appropriate” experience unless they demonstrate that they’re adults.

“For most adults, age verification won’t be required, as Discord’s age inference model uses account information such as account tenure, device and activity data, and aggregated, high-level patterns across Discord communities. Discord does not use private messages or any message content in this process,” Savannah Badalich, Discord’s global head of product policy, tells The Verge.

Users who aren’t verified as adults will not be able to access age-restricted servers and channels, won’t be able to speak in Discord’s livestream-like “stage” channels, and will see content filters for any content Discord detects as graphic or sensitive. They will also get warning prompts for friend requests from potentially unfamiliar users, and DMs from unfamiliar users will be automatically filtered into a separate inbox.

Direct messages and servers that are not age-restricted will continue to function normally, but users won’t be able to send messages or view content in an age-restricted server until they complete the age check process, even if it’s a server they were part of before age verification rolled out. Badalich says those servers will be “obfuscated” with a black screen until the user verifies they’re an adult. Users also won’t be able to join any new age-restricted servers without verifying their age.

Unverified users won’t be able to enter age-restricted servers; sensitive content is blurred until verification. (Image: Discord)

Discord’s global age verification launch is part of a wave of similar moves at other online platforms, driven by an international legal push for age checks and stronger child safety measures. This is not the first time Discord has implemented some form of age verification, either. It initially rolled out age checks for users in the UK and Australia last year, which some users figured out how to circumvent using Death Stranding’s photo mode. Badalich says Discord “immediately fixed it after a week,” but expects users will continue finding creative ways to try getting around the age checks, adding that Discord will “try to bug bash as much as we possibly can.”

It’s not just teens trying to cheat the system who might attempt to dodge age checks. Adult users could avoid verifying, as well, due to concerns around data privacy, particularly if they don’t want to use an ID to verify their age. In October, one of Discord’s former third-party vendors suffered a data breach that exposed users’ age verification data, including images of government IDs.

If Discord’s age inference model can’t determine a user’s age, a government ID might still be required for age verification in its global rollout. According to Discord, to remove the new “teen-by-default” changes and limitations, “users can choose to use facial age estimation or submit a form of identification to [Discord’s] vendor partners, with more options coming in the future.”

The first option uses AI to analyze a user’s video selfie, which Discord says never leaves the user’s device. If the age group estimate (teen or adult) from the selfie is incorrect, users can appeal it or verify with a photo of an identity document instead. That document will be verified by a third party vendor, but Discord says the images of those documents “are deleted quickly — in most cases, immediately after age confirmation.”

Users can view and update their age group from their profile. (Image: Discord)

Badalich also says after the October data breach, Discord “immediately stopped doing any sort of age verification flows with that vendor” and is now using a different third-party vendor. She adds, “We’re not doing biometric scanning [or] facial recognition. We’re doing facial estimation. The ID is immediately deleted. We do not keep any information around like your name, the city that you live in, if you used a birth certificate or something else, any of that information.”

“A majority of people are not going to see a change in their experience.”

Badalich goes on to explain that the addition of age assurance will mainly impact adult content: “A majority of people on Discord are not necessarily looking at explicit or graphic content. When we say that, we’re really talking about things that are truly adult content [and] age inappropriate for a teen. So, the way that it will work is a majority of people are not going to see a change in their experience.”

Even so, there’s still a risk that some users will leave Discord as a result of the age verification rollout. “We do expect that there will be some sort of hit there, and we are incorporating that into what our planning looks like,” Badalich says. “We’ll find other ways to bring users back.”

Source: Discord will require a face scan or ID for full access next month | The Verge

If you want to look at more people blowing up about age verification you can try this Slashdot thread: Discord Will Require a Face Scan or ID for Full Access Next Month

How to Disable Ring’s Creepy ‘Search Party’ Feature – But if you bought a Ring, you probably don’t mind blanket corpo and govt surveillance anyway I guess.

If you tuned into Super Bowl LX on Sunday, you may have caught Ring’s big ad of the night: The company tried to tap into us dog owners’ collective fear of losing our pets, demonstrating how its new “Search Party” feature could reunite missing dogs with their owners. Ring probably thought audiences would love the feature, with existing users happy to know Search Party exists, and new customers looking to buy one of its doorbells to help find lost dogs in the neighborhood.

Of course, that’s not what happened at all. Rather than evoke heartwarming feelings, the ad scared the shit out of many of us who caught it. That’s due to how the feature itself works: Search Party uses AI to identify pets that run in its field of vision. But it’s not just your camera doing this: The feature pools together all of the Ring cameras that have Search Party enabled to look for your lost dog. In effect, it turns all these individual devices into a Ring network, or, perhaps in harsher terms, a surveillance state. It does so in pursuit of a noble goal, sure, but at what cost?

The reactions I saw online ranged from shock to anger. Some were surprised to learn that Ring cameras could even do this, seeing as you might assume your Ring doorbell is, well, yours. Others were furious, lashing out at anyone who thinks Search Party is a good idea, or that the feature isn’t the beginning of a very slippery slope. My favorite take was one comparing Search Party to Batman’s cellphone network surveillance system from The Dark Knight, which famously compromised morals and ethics in the name of catching the bad guy.

According to Ring, Search Party is a perfectly safe and wholesome way to look for lost dogs in the area. The company’s FAQs explain that users can opt out of the feature at any time, and only Ring doorbells in the area around the home that started the current Search Party will look for the dog. In addition, Ring says the feature works based on saved videos, so Ring doorbells without a subscription and a saved video history won’t be able to participate. (Though I’m not sure the fact that the feature works with saved videos assuages any fears on my end.)

I am not pro-missing dogs. But I am pro-privacy. At the risk of sounding alarmist, Search Party really does seem like a slippery slope. Today, the neighborhood is banding together to find Mrs. Smith’s missing goldendoodle; tomorrow, they’re looking for a “suspicious person.” Innocent until proven guilty, unless caught on your neighbor’s Ring camera.

Can law enforcement request Search Party data?

Here’s the big question regarding Search Party and its slippery slope: Can law enforcement—including local police, FBI, or ICE—request saved videos from Ring cameras participating in Search Party in order to track down people, not pets?

You won’t be surprised to learn that that wasn’t answered by Ring’s Super Bowl ad, nor is it part of the official Search Party FAQs. However, we do know that, as of October 2025, Ring partnered with both Flock Safety and Axon. Axon makes and sells equipment for law enforcement, like tasers and body cameras, while Flock Safety is a security company that offers services like license plate recognition and video surveillance. These partnerships allow law enforcement to post requests for Ring footage directly to the Ring app. Ring users in the vicinity of the request have the choice to either share that footage or ignore the petition. Flock Safety says the identities of users who do choose to share footage remain private.

Of course, law enforcement isn’t always going to ask for volunteers. According to Ring’s law enforcement guidelines, the company will comply with “valid and binding search warrants.” That’s not surprising, of course. But the company does note an important distinction in what it will share: Ring will share “non-content” data in response to both subpoenas and warrants, including a user’s name, home address, email address, billing info, the date they made the account, purchase history, and service usage data. The company says it will not share “content,” meaning the data you store in your account, like videos and recordings of service calls, in response to subpoenas; that requires a warrant.

Ring also says it will tell you if it shares your data with law enforcement, unless it is barred from doing so, or it’s clear your Ring data breaks the law. This applies for both standard data requests, as well as “emergency” requests.

Based on its current language, it seems that Ring would give up the footage used in Search Party to law enforcement, assuming they present a valid warrant. The thing is, it’s not clear whether Search Party has any actual impact on that data: For example, imagine a dog runs in front of your Ring doorbell, and the footage is saved to your history. Now, a valid warrant comes through requesting your footage. Whether you have Search Party enabled or disabled, Ring may share that footage with law enforcement—the feature itself had no impact on whether your doorbell saved the footage. The difference would be whether law enforcement has access to the identification data within the footage: Can they see that Ring thinks that dog is, in fact, Mrs. Smith’s goldendoodle, or do they simply see a video of a fluffy pup running past your house? If so, that would be your slippery slope indeed: If law enforcement could obtain your footage with facial recognition data of the suspect they’re looking for, we’d be in particularly dangerous territory.

I’ve reached out to Ring for comment on this side of Search Party, and I hope to hear back to provide a fuller answer to this question.

How to opt-out of Search Party on your Ring cameras

If you’d rather not bother with the feature at all, Ring says it’s easy enough to turn off. To start, open the Ring app, tap the hamburger menu, then choose “Control Center.” From there, choose “Search Party,” then tap the blue pet icon (“Search for Lost Pets”) next to each of your cameras to toggle the feature off.

To be honest, if I had a Ring camera, I’d go one step further and delete my saved videos. Law enforcement can’t obtain what I don’t save. If you want to delete these clips from your Ring account, head to the hamburger menu in the app, tap “History,” choose the “pencil icon,” then tap “Delete All” to wipe your entire history.

Source: How to Disable Ring’s ‘Search Party’ Feature | Lifehacker

Commission trials European open source communications software as a backup for Teams (not a replacement)

All this talk of digital self-sufficiency, data supremacy, etc., and the EU will continue to feed the hand that strangles it, whilst not paying a cent to EU companies that could build the same (and better) functionality.

The European Commission is trialling using European open source software to run its internal communications, a spokesperson confirmed to Euractiv.

The move comes at a time of growing concern within European administrations over their heavy dependency on US software for day-to-day work amid increasingly unreliable transatlantic relations.

“As part of our efforts to use more sovereign digital solutions, the European Commission is preparing an internal communication solution based on the Matrix protocol,” the spokesperson told Euractiv.

Matrix is an open source, community-developed messaging protocol shepherded by a non-profit that’s headquartered in London. It’s already widely used for public messengers across Europe, with the French government, German healthcare providers and European armed forces all using tools built on the protocol.

Sovereign backup 

The Commission is looking into using Matrix as a “complement and backup solution” to existing internal communications software, the spokesperson said.

That means there are no plans for a Matrix-based solution to replace Microsoft Teams, which is currently widely found on the Commission’s computers, according to remarks by an EU official at a conference in October.

A different open source tool – namely the Signal messaging app, which is also a favourite with journalists – is fulfilling the backup role at present but the software wasn’t flexible enough for a large organisation like the Commission, the official also said.

The Commission is also eyeing another use case for the Matrix-based comms tool: It could be used to connect to other Union bodies in the future, which are currently lacking a common tool to communicate securely.

[…]

Source: Commission trials European open source communications software | Euractiv

Google Pixel Bug Turns Microphone on for Incoming Callers Leaving Voicemail

[…] Called “Take a Message,” the buggy feature was released last year and is supposed to automatically transcribe voicemails as they’re coming in, as well as detect and mark spam calls. Unfortunately, according to reports from multiple users on Reddit (as initially spotted by 9to5Google), the feature has started turning on the microphone while taking voicemails, allowing whoever is leaving you a voicemail to hear you.

[…]

The issue has been reported as affecting Pixel devices ranging from the Pixel 4 to the Pixel 10, and on a recent support page, Google has finally acknowledged it. However, the company’s action might not be enough, depending on how cautious you want to be.

According to Community Manager Siri Tejaswini, the company has “investigated this issue,” and has confirmed it “affects a very small subset of Pixel 4 and 5 devices under very specific and rare circumstances.” The post doesn’t go any further on the how and why of the diagnosis, but says that Google is now disabling Take a Message and “next-gen Call Screen features” on these devices.

[…]

While it’s encouraging that Google is taking action on the Take a Message bug, the company only seems to be acknowledging it for Pixel 4 and Pixel 5 models, at least for now. I’ve asked Google whether owners of other Pixel models should be worried, as user reports seem split on this. Still, because some have mentioned an issue with even the most up-to-date Pixel phone, if you want to practice your own abundance of caution, it might be worth disabling Take a Message on your device, regardless of its model number.

To do this, open your Phone app, then tap the three-lined menu icon at the top-left of the page. Navigate to Settings > Call Assist > Take a Message, and toggle the feature off.

Source: This Pixel Bug Leaked Audio to Incoming Callers, and Google’s Fix Might Not Be Enough | Lifehacker

ICE takes aim at data held by advertising and tech firms

Let us not forget that the reason Nazi Germany was so efficient at deporting Jews from the Netherlands was in large part the detailed databases the Netherlands kept at the time, containing religious and ethnic information on its population.

It’s not enough to have its agents in streets and schools; ICE now wants to see what data online ads already collect about you. US Immigration and Customs Enforcement last week issued a Request for Information (RFI) asking data and ad tech brokers how they could help in its mission.

The RFI is not a solicitation for bids. Rather it represents an attempt to conduct market research into the spectrum of data – personal, financial, location, health, and so on – that ICE investigators can source from technology and advertising companies.

“[T]he Government is seeking to understand the current state of Ad Tech compliant and location data services available to federal investigative and operational entities, considering regulatory constraints and privacy expectations of support investigations activities,” the RFI explains.

Issued on Friday, January 23, 2026, one day prior to the shooting of VA nurse Alex Pretti by a federal immigration agent, two weeks after the shooting of Renée Good, and three weeks after the shooting of Keith Porter Jr, the RFI lands amid growing disapproval of ICE tactics and mounting pressure to withhold funding for the agency.

ICE did not immediately respond to a request to elaborate on how it might use ad tech data and to share whether any companies have responded to its invitation.

The RFI follows a similar solicitation published last October for a contractor capable of providing ICE with open source intelligence and social media information to assist the ICE Enforcement and Removal Operations (ERO) directorate’s Targeting Operations Division – tasked with finding and removing “aliens that pose a threat to public safety or national security.”

[…]

Tom Bowman, policy counsel with the Center for Democracy & Technology’s (CDT) Security & Surveillance Project, told The Register in a phone interview that ICE is attempting to rebrand surveillance as a commercial transaction.

“But that doesn’t make the surveillance any less intrusive or any less constitutionally suspect,” said Bowman. “This inquiry specifically underscores what really is a long-standing problem – that government agencies have been able to sidestep Fourth Amendment protections by purchasing data that would otherwise need a warrant to collect.”

The data derived from ad tech and various technology businesses, said Bowman, can reveal intimate details about people’s lives, including visits to medical facilities and places of worship.

[…]

“Ad tech compliance regimes were never designed to protect people from government surveillance or coercive enforcement,” he said. “Ad tech data is often collected via consent that is meaningless. The data flows are opaque. And then these types of downstream uses are really difficult to control.”

Bowman argues that while there’s been a broad failure to meaningfully regulate data brokers, legislative solutions are possible.

[…]

Source: ICE takes aim at data held by advertising and tech firms • The Register

Following Apple, now Google to pay $68m to settle lawsuit claiming it recorded and sold private conversations

Google has agreed to pay $68m (£51m) to settle a lawsuit claiming it secretly listened to people’s private conversations through their phones.

Users accused Google Assistant – a virtual assistant present on many Android devices – of recording private conversations after it was inadvertently triggered on their devices.

They claimed the recordings were then shared with advertisers in order to send them targeted advertising.

The BBC has contacted Google for comment. But in a filing seeking to settle the case, it denied wrongdoing and said it was seeking to avoid litigation.

Google Assistant is designed to wait in standby mode until it hears a particular phrase – typically “Hey Google” – which activates it.

The phone then records what it hears and sends the recording to Google’s servers where it can be analysed.

[…]

The claim has been brought as a class action lawsuit rather than an individual case – meaning if it is approved, the money will be paid out across many different claimants.

Those eligible for a payout will have owned Google devices dating back to May 2016.

But lawyers for the plaintiffs may ask for up to one-third of the settlement – about $22m – in legal fees.

It follows a similar case in January where Apple agreed to pay $95m to settle a case alleging some of its devices were listening to people through its voice-activated assistant Siri without their permission.

The tech firm also denied any wrongdoing, as well as claims that it “recorded, disclosed to third parties, or failed to delete, conversations recorded as the result of a Siri activation” without consent.

Source: Google to pay $68m to settle lawsuit claiming it recorded private conversations

Microsoft will give the FBI your BitLocker keys if asked. Can do so because of cloud accounts.

Great target for hackers then, the server with unencrypted BitLocker keys on it.

Microsoft has confirmed in a statement to Forbes that the company will provide the FBI access to BitLocker encryption keys if it receives a valid legal order. These keys enable the ability to decrypt and access the data on a computer running Windows, giving law enforcement the means to break into a device and access its data.

The news comes as Forbes reports that Microsoft gave the FBI the BitLocker encryption keys to access a device in Guam that law enforcement believed to have “evidence that would help prove individuals handling the island’s Covid unemployment assistance program were part of a plot to steal funds” in early 2025.

Source: Microsoft gave FBI BitLocker keys, raising privacy fears | Windows Central

What Happened After Security Researchers Found 60 Flock Cameras Livestreaming to the Internet

A couple months ago, YouTuber Benn Jordan “found vulnerabilities in some of Flock’s license plate reader cameras,” reports 404 Media’s Jason Koebler. “He reached out to me to tell me he had learned that some of Flock’s Condor cameras were left live-streaming to the open internet.”

This led to a remarkable article where Koebler confirmed the breach by visiting a Flock surveillance camera mounted on a California traffic signal. (“On my phone, I am watching myself in real time as the camera records and livestreams me — without any password or login — to the open internet… Hundreds of miles away, my colleagues are remotely watching me too through the exposed feed.”)

Flock left livestreams and administrator control panels for at least 60 of its AI-enabled Condor cameras around the country exposed to the open internet, where anyone could watch them, download 30 days’ worth of video archive, change settings, see log files, and run diagnostics. Unlike many of Flock’s cameras, which are designed to capture license plates as people drive by, Flock’s Condor cameras are pan-tilt-zoom (PTZ) cameras designed to record and track people, not vehicles. Condor cameras can be set to automatically zoom in on people’s faces… The exposure was initially discovered by YouTuber and technologist Benn Jordan and was shared with security researcher Jon “GainSec” Gaines, who recently found numerous vulnerabilities in several other models of Flock’s automated license plate reader (ALPR) cameras.
Jordan appeared this week as a guest on Koebler’s YouTube channel and released a video of his own about the experience, titled “We Hacked Flock Safety Cameras in under 30 Seconds.” (Thanks to Slashdot reader beadon for sharing the link.) But together Jordan and 404 Media also created another video three weeks ago titled “The Flock Camera Leak is Like Netflix for Stalkers” which includes footage he says was “completely accessible at the time Flock Safety was telling cities that the devices are secure after they’re deployed.”

The video decries cities “too lazy to conduct their own security audit or research the efficacy versus risk,” but also calls weak security “an industry-wide problem.” Jordan explains in the video how he “very easily found the administration interfaces for dozens of Flock Safety cameras…” — but also what happened next:

None of the data or video footage was encrypted. There was no username or password required. These were all completely public-facing, for the world to see… Making any modification to the cameras is illegal, so I didn’t do this. But I had the ability to delete any of the video footage or evidence by simply pressing a button. I could see the paths where all of the evidence files were located on the file system…

During and after the process of conducting that research and making that video, I was visited by the police and had what I believed to be private investigators outside my home photographing me and my property and bothering my neighbors. Jon Gaines, aka GainSec, the brains behind most of this research, lost employment within 48 hours of the video being released. And the sad reality is that I don’t view these things as consequences or punishment for researching security vulnerabilities. I view these as consequences and punishment for doing it ethically and transparently.

I’ve been contacted by people on or communicating with civic councils who found my videos concerning, and they shared Flock Safety’s response with me. The company claimed that the devices in my video did not reflect the security standards of the ones being publicly deployed. The CEO even posted on LinkedIn and boasted about Flock Safety’s security policies. So, I formally and publicly offered to personally fund security research into Flock Safety’s deployed ecosystem. But the law prevents me from touching their live devices. So, all I needed was their permission so I wouldn’t get arrested. And I was even willing to let them supervise this research.

I got no response.

So instead, he read Flock’s official response to a security/surveillance industry research group — while standing in front of one of their security cameras, streaming his reading to the public internet.

“Might as well. It’s my tax dollars that paid for it.”

” ‘Flock is committed to continuously improving security…'”

Source: What Happened After Security Researchers Found 60 Flock Cameras Livestreaming to the Internet | Slashdot

For more on why Flock cameras are problematic, read here

Signal Founder Creates Truly Private GPT: Confer

When you use an AI service, you’re handing over your thoughts in plaintext. The operator stores them, trains on them, and – inevitably – will monetize them. You get a response; they get everything.

Confer works differently. In the previous post, we described how Confer encrypts your chat history with keys that never leave your devices. The remaining piece to consider is inference—the moment your prompt reaches an LLM and a response comes back.

Traditionally, end-to-end encryption works when the endpoints are devices under the control of a conversation’s participants. However, AI inference requires a server with GPUs to be an endpoint in the conversation. Someone has to run that server, but we want to prevent the people who are running it (us) from seeing prompts or the responses.

Confidential computing

This is the domain of confidential computing. Confidential computing uses hardware-enforced isolation to run code in a Trusted Execution Environment (TEE). The host machine provides CPU, memory, and power, but cannot access the TEE’s memory or execution state.

LLMs are fundamentally stateless—input in, output out—which makes them ideal for this environment. For Confer, we run inference inside a confidential VM. Your prompts are encrypted from your device directly into the TEE using Noise Pipes, processed there, and responses are encrypted back. The host never sees plaintext.
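The pattern described here – a key agreement whose endpoints are the user’s device and the TEE, with the host acting only as a blind relay – can be illustrated with a minimal sketch. This is not Confer’s actual Noise Pipes implementation; it just shows the underlying idea (an ephemeral X25519 key exchange feeding an AEAD cipher), using Python’s third-party `cryptography` library:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each endpoint generates an ephemeral X25519 key pair (the DH step at the
# heart of a Noise handshake).
device_key = X25519PrivateKey.generate()
tee_key = X25519PrivateKey.generate()

def derive_session_key(own_private, peer_public):
    """ECDH shared secret -> 32-byte AEAD key via HKDF."""
    shared = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"device-to-tee demo").derive(shared)

# Both endpoints derive the same key; the host relaying the traffic sees only
# public keys and ciphertext, never the session key.
k_device = derive_session_key(device_key, tee_key.public_key())
k_tee = derive_session_key(tee_key, device_key.public_key())
assert k_device == k_tee

# The device encrypts a prompt; only code inside the TEE can decrypt it.
nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(k_device).encrypt(nonce, b"my private prompt", None)
plaintext = ChaCha20Poly1305(k_tee).decrypt(nonce, ciphertext, None)
assert plaintext == b"my private prompt"
```

A real Noise handshake adds transcript hashing and authentication of the server’s static key, and in a confidential-computing setup that key would be bound to the enclave’s hardware attestation – which is exactly the concern the next paragraph raises.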

But this raises an obvious concern: even if we have encrypted pipes in and out of an encrypted environment, it really matters what is running inside that environment. The client needs assurance that the code running is actually doing what it claims.

[…]

Source: Private inference | Confer Blog

Your smart TV is watching you and nobody’s stopping it

At the end of last year, Texas Attorney General Ken Paxton sued five of the largest TV companies, accusing them of excessive and deceptive surveillance of their customers.

Paxton reserved special venom for the two China-based members of the quintet. His argument is that unlike Sony, Samsung, and LG, if Hisense and TCL have conducted surveillance in the way the lawsuits accuse them of, they’d potentially be required to share all data with the Chinese Communist Party.

It is a rare pleasure to state that legal action against tech companies is cogent, timely, focused, and – if the allegations are true – deserves to succeed. It is less pleasant to predict that even if one, several, or all of these manufacturers did what they’re accused of, and were sanctioned for it, it would not put the safeguards in place to stop such practices from recurring.

At the heart of the cases is the fact that most smart TVs use Automatic Content Recognition (ACR) to send rapid-fire screenshots back to company servers, where they are analyzed to finely detail your TV usage. This sometimes covers not just streaming video, but whatever apps or external devices are displaying, and the allegations are that every other bit of personal data the set can scry is also pulled in. Installed apps can have trackers, data from other devices can be swept up.

These lawsuits aside, smart TV companies more generally boast of their prying prowess to the ecosystem of data exploiters from which they make their money. The companies are much less open about the mechanisms and amount of data collection, and deploy a barrage of defenses to entice customers into turning the stuff on and stop them from turning it off. You may have already seen massive on-screen Ts&Cs with only ACCEPT as an option, ACR controls buried in labyrinthine menu jails, features that stop working even if you complete the obstacle course – all this is old news.

How old are these practices? TV maker Vizio got hit by multiple suits between 2015 and 2017, paying $2.2 million in fines to the Federal Trade Commission and the state of New Jersey, as well as settling related class actions to the tune of $17 million. The FTC said the fines settled claims the maker had used software installed on its TVs to collect viewing data on 11 million TVs without their owners’ knowledge or consent. A court order said the manufacturer had to delete data collected before 2016 and promise to “prominently disclose and obtain affirmative express consent” for data collection and sharing from then on.

Yet ten years on, the problem has only got worse. There is no law against data collection, and companies often eat the fines, adjust their behavior to the barest minimum compliance, and set about finding new ways to entomb your digital twin in their datacenters.

It’s not even as if more regulation helps. The European GDPR data protection and privacy regs give consumers powerful rights and companies strict obligations, which smart TV makers do not rush to observe. Researchers claim the problem is growing no matter which side of the Atlantic your TV is watching you on.

[…]

Source: Your smart TV is watching you and nobody’s stopping it • The Register

How Cops Are Using Flock’s license plate camera Network To Surveil Protesters And Activists

It’s no secret that 2025 has given Americans plenty to protest about. But as news cameras showed protesters filling streets of cities across the country, law enforcement officers—including U.S. Border Patrol agents—were quietly watching those same streets through different lenses: Flock Safety automated license plate readers (ALPRs) that tracked every passing car.

Through an analysis of 10 months of nationwide searches on Flock Safety’s servers, we discovered that more than 50 federal, state, and local agencies ran hundreds of searches through Flock’s national network of surveillance data in connection with protest activity. In some cases, law enforcement specifically targeted known activist groups, demonstrating how mass surveillance technology increasingly threatens our freedom to demonstrate.

Flock Safety provides ALPR technology to thousands of law enforcement agencies. The company installs cameras throughout their jurisdictions, and these cameras photograph every car that passes, documenting the license plate, color, make, model and other distinguishing characteristics. This data is paired with time and location, and uploaded to a massive searchable database. Flock Safety encourages agencies to share the data they collect broadly with other agencies across the country. It is common for an agency to search thousands of networks nationwide even when they don’t have reason to believe a targeted vehicle left the region.

Via public records requests, EFF obtained datasets representing more than 12 million searches logged by more than 3,900 agencies between December 2024 and October 2025. The data shows that agencies logged hundreds of searches related to the 50501 protests in February, the Hands Off protests in April, the No Kings protests in June and October, and other protests in between.

[…]

While EFF and other civil liberties groups argue the law should require a search warrant for such searches, police are simply prompted to enter text into a “reason” field in the Flock Safety system. Usually this is only a few words – or even just one.

In these cases, that word was often just “protest.”

Crime does sometimes occur at protests, whether that’s property damage, pick-pocketing, or clashes between groups on opposite sides of a protest. Some of these searches may have been tied to an actual crime that occurred, even though in most cases officers did not articulate a criminal offense when running the search. But the truth is, the only reason an officer is able to even search for a suspect at a protest is because ALPRs collected data on every single person who attended the protest.

[…]

In a few cases, police were using Flock’s ALPR network to investigate threats made against attendees or incidents where motorists opposed to the protests drove their vehicle into crowds. For example, throughout June 2025, an Arizona Department of Public Safety officer logged three searches for “no kings rock threat,” and a Wichita (Kan.) Police Department officer logged 22 searches for various license plates under the reason “Crime Stoppers Tip of causing harm during protests.”

Even when law enforcement is specifically looking for vehicles engaged in potentially criminal behavior such as threatening protesters, it cannot be ignored that mass surveillance systems work by collecting data on everyone driving to or near a protest—not just those under suspicion.

Border Patrol’s Expanding Reach

As U.S. Border Patrol (USBP), ICE, and other federal agencies tasked with immigration enforcement have massively expanded operations into major cities, advocates for immigrants have responded through organized rallies, rapid-response confrontations, and extended presences at federal facilities.

USBP has made extensive use of Flock Safety’s system for immigration enforcement, but also to target those who object to its tactics. In June, a few days after the No Kings Protest, USBP ran three searches for a vehicle using the descriptor “Portland Riots.”

[…]

Fighting Back Against ALPR

ALPR systems are designed to capture information on every vehicle that passes within view. That means they don’t just capture data on “criminals” but on everyone, all the time—and that includes people engaged in their First Amendment right to publicly dissent. Police are sitting on massive troves of data that can reveal who attended a protest, and this data shows they are not afraid to use it.

Our analysis only includes data where agencies explicitly mentioned protests or related terms in the “reason” field when documenting their search. It’s likely that scores more were conducted under less obvious pretexts and search reasons. According to our analysis, approximately 20 percent of all searches we reviewed listed vague language like “investigation,” “suspect,” and “query” in the reason field. Those terms could well be cover for spying on a protest, an abortion prosecution, or an officer stalking a spouse, and no one would be the wiser – including the agencies whose data was searched. Flock has said it will now require officers to select a specific crime under investigation, but that can and will also be used to obfuscate dubious searches.
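EFF’s approach – grouping searches by what officers typed into the free-text “reason” field – can be sketched in a few lines of stdlib Python. The rows, term lists, and category names below are hypothetical stand-ins for illustration, not EFF’s actual dataset or methodology:

```python
from collections import Counter

# Hypothetical rows modeled on an ALPR audit log: (agency, reason).
searches = [
    ("Agency A", "protest"),
    ("Agency B", "no kings rock threat"),
    ("Agency C", "investigation"),
    ("Agency D", "stolen vehicle"),
    ("Agency E", "query"),
]

PROTEST_TERMS = ("protest", "no kings", "50501", "hands off")
VAGUE_TERMS = ("investigation", "suspect", "query")

def classify(reason: str) -> str:
    """Bucket a free-text reason as protest-related, vague, or other."""
    r = reason.lower()
    if any(term in r for term in PROTEST_TERMS):
        return "protest-related"
    if r.strip() in VAGUE_TERMS:
        return "vague"
    return "other"

tally = Counter(classify(reason) for _, reason in searches)
print(tally)  # Counter({'protest-related': 2, 'vague': 2, 'other': 1})
```

The limitation EFF notes falls straight out of this sketch: a search logged under a one-word reason like “query” lands in the vague bucket, and nothing in the log reveals whether it was actually a protest search.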

For protesters, this data should serve as confirmation that ALPR surveillance has been and will be used to target activities protected by the First Amendment. Depending on your threat model, this means you should think carefully about how you arrive at protests, and explore options such as biking, walking, carpooling, taking public transportation, or simply parking a little further away from the action. Our Surveillance Self-Defense project has more information on steps you could take to protect your privacy when traveling to and attending a protest.

[…]

Everyone should have the right to speak up against injustice without ending up in a database.

Source: How Cops Are Using Flock Safety’s ALPR Network To Surveil Protesters And Activists | Techdirt

New EU Jolla Phone Now Available for Pre-Order as an Independent No Spyware Linux Phone

Jolla kicked off a campaign for a new Jolla Phone, which they call the independent European Do It Together (DIT) Linux phone, shaped by the people who use it.

“The Jolla Phone is not based on Big Tech technology. It is governed by European privacy thinking and a community-led model.”

The new Jolla Phone is powered by a high-performing Mediatek 5G SoC, and features 12GB RAM, 256GB storage that can be expanded to up to 2TB with a microSDXC card, a 6.36-inch FullHD AMOLED display with ~390ppi, 20:9 aspect ratio, and Gorilla Glass, and a user-replaceable 5,500mAh battery.

The Linux phone also features 4G/5G support with dual nano-SIM and a global roaming modem configuration, Wi-Fi 6 wireless, Bluetooth 5.4, NFC, 50MP Wide and 13MP Ultrawide main cameras, a front-facing wide-lens selfie camera, a fingerprint reader on the power key, a user-changeable back cover, and an RGB indication LED.

On top of that, the new Jolla Phone promises a user-configurable physical Privacy Switch that lets you turn off the microphone, Bluetooth, Android apps, or whatever you wish.

The device will be available in three colors: Snow White, Kaamos Black, and The Orange. All the specs of the new Jolla Phone were voted on by Sailfish OS community members over the past few months.

Honouring the original Jolla Phone form factor and design, the new model ships with Sailfish OS (with support for Android apps), a Linux-based European alternative to dominating mobile operating systems that promises a minimum of 5 years of support, no tracking, no calling home, and no hidden analytics.

“Mainstream phones send vast amounts of background data. A common Android phone sends megabytes of data per day to Google even if the device is not used at all. Sailfish OS stays silent unless you explicitly allow connections,” said Jolla.

The new Jolla Phone is now available for pre-order for 99 EUR and will only be produced if at least 2,000 pre-orders are received within one month, by January 4th, 2026. The full price of the Linux phone will be 499 EUR (incl. local VAT), and the 99 EUR pre-order price will be fully refundable and deducted from the full price.

The device will be manufactured and sold in Europe, but Jolla says that it will design the cellular band configuration to enable global travelling as much as possible, including e.g. roaming in the U.S. carrier networks. The initial sales markets are the EU, the UK, Switzerland, and Norway.

Source: New Jolla Phone Now Available for Pre-Order as an Independent Linux Phone – 9to5Linux

New hotness in democracy: if the people say no to mass surveillance, do it again right after you have said you won’t do it. Not EU this time: it’s India

You know what they say: If at first you don’t succeed at mass government surveillance, try, try again. Only two days after India backpedaled on its plan to force smartphone makers to preinstall a state-run “cybersecurity” app, Reuters reports that the country is back at it. It’s said to be considering a telecom industry proposal with another draconian requirement. This one would require smartphone makers to enable always-on satellite-based location tracking (Assisted GPS).

The measure would require location services to remain on at all times, with no option to switch them off. The telecom industry also wants phone makers to disable notifications that alert users when their carriers have accessed their location.

[…]

Source: India is reportedly considering another draconian smartphone surveillance plan

Looks like the Indians took a page out of the Danish playbook for Chat Control and turning the EU into a 1984 Brave New World

India demands smartphone makers install government app

India’s government has issued a directive that requires all smartphone manufacturers to install a government app on every handset in the country and has given them 90 days to get the job done – and to ensure users can’t remove the code.

The app is called “Sanchar Saathi” and is a product of India’s Department of Telecommunications (DoT).

On Google Play and Apple’s App Store, the Department describes the app as “a citizen centric initiative … to empower mobile subscribers, strengthen their security and increase awareness about citizen centric initiatives.”

The app does those jobs by allowing users to report incoming calls or messages – even on WhatsApp – they suspect are attempts at fraud. Users can also report incoming calls for which caller ID reveals the +91 country code, as India’s government thinks that’s an indicator of a possible illegal telecoms operator.

Users can also block their device if they lose it or suspect it was stolen, an act that will prevent it from working on any mobile network in India.

Another function allows lookup of IMEI numbers so users can verify if their handset is genuine.
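The lookup itself is a query against a government registry, but there is also a quick local sanity check anyone can run: the final digit of a 15-digit IMEI is a Luhn check digit. A minimal sketch in Python (the example IMEI is the well-known documentation value, not a real device):

```python
def luhn_valid(number: str) -> bool:
    """Verify a numeric string's Luhn check digit (used by 15-digit IMEIs)."""
    digits = [int(d) for d in number]
    total = 0
    # Starting from the rightmost digit, double every second digit and
    # sum the results digit-wise (doubling 7 gives 14 -> 1 + 4 = 5).
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# 49015420323751 is the 14-digit body; check digit 8 makes it Luhn-valid.
assert luhn_valid("490154203237518")
assert not luhn_valid("490154203237519")
```

Note this only catches typos and casually invented numbers: a cloned or spoofed IMEI copied from a genuine device passes the checksum, which is why the registry lookup (and the blocklist described below) is the actual enforcement mechanism.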

Spam and scams delivered by calls or TXTs are pervasive around the world, and researchers last year found that most Indian netizens receive three or more dodgy communiqués every day. This app has obvious potential to help reduce such attacks.

An announcement from India’s government states that cybersecurity at telcos is another reason for the requirement to install the app.

“Spoofed/ Tampered IMEIs in telecom network leads to situation where same IMEI is working in different devices at different places simultaneously and pose challenges in action against such IMEIs,” according to the announcement. “India has [a] big second-hand mobile device market. Cases have also been observed where stolen or blacklisted devices are being re-sold. It makes the purchaser abettor in crime and causes financial loss to them. The blocked/blacklisted IMEIs can be checked using Sanchar Saathi App.”

That motive is likely the reason India has required handset-makers to install Sanchar Saathi on existing handsets with a software update.

The directive also requires the app to be pre-installed, “visible, functional, and enabled for users at first setup.” Manufacturers may not disable or restrict its features and “must ensure the App is easily accessible during device setup.”

Those functions mean India’s government will soon have a means of accessing personal info on hundreds of millions of devices.

Apar Gupta, founder and director of India’s Internet Freedom Foundation, has criticized the directive on grounds that Sanchar Saathi isn’t fit for purpose. “Rather than resorting to coercion and mandating it to be installed the focus should be on improving it,” he wrote.

[…]

Source: India demands smartphone makers install government app • The Register

Canadian data order risks blowing a hole in EU sovereignty

A Canadian court has ordered French cloud provider OVHcloud to hand over customer data stored in Europe, potentially undermining the provider’s claims about digital sovereignty protections.

According to documents seen by The Register, the Royal Canadian Mounted Police (RCMP) issued a Production Order in April 2024 demanding subscriber and account data linked to four IP addresses on OVH servers in France, the UK, and Australia as part of a criminal investigation.

OVH has a Canadian arm, which was the jumping-off point for the courts, but OVH Group is a French company, so the data in France should be protected from prying eyes. Or perhaps not.

Rather than using established Mutual Legal Assistance Treaties (MLAT) between Canada and France, the RCMP sought direct disclosure through OVH’s Canadian subsidiary.

This puts OVH in an impossible position. French law prohibits such data sharing outside official treaties, with penalties up to €90,000 and six months imprisonment. But refusing the Canadian order risks contempt of court charges.

[…]

Under Trump 2.0, economic and geopolitical relations between Europe and the US have become increasingly volatile, something Microsoft acknowledged in April.

Against this backdrop, concerns about the US CLOUD Act are growing. Through the legislation, US authorities can request – via warrant or subpoena – access to data hosted by US corporations regardless of where in the world that data is stored. Hyperscalers claim they have received no such requests with respect to European customers, but the risk remains and European cloud providers have used this as a sales tactic by insisting digital information they hold is protected.

In the OVH case, if Canadian authorities are able to force access to data held on European servers rather than navigate official channels (for example, international treaties), the implications could be severe.

[…]

Earlier this week, GrapheneOS announced it no longer had active servers in France and was in the process of leaving OVH.

The privacy-focused mobile outfit said, “France isn’t a safe country for open source privacy projects. They expect backdoors in encryption and for device access too. Secure devices and services are not going to be allowed. We don’t feel safe using OVH for even a static website with servers in Canada/US via their Canada/US subsidiaries.”

In August, an OVH legal representative crowed over the admission by Microsoft that it could not guarantee data sovereignty.

It would be deeply ironic if OVH were unable to guarantee the same thing because the company has a subsidiary in Canada.

[…]

Source: Canadian data order risks blowing a hole in EU sovereignty • The Register

That didn’t take long: A few days after Chat Control, European Parliament implements Age Verification on Social Media, 16+

On Wednesday, MEPs adopted a non-legislative report by 483 votes in favour, 92 against and with 86 abstentions, expressing deep concern over the physical and mental health risks minors face online and calling for stronger protection against the manipulative strategies that can increase addiction and that are detrimental to children’s ability to concentrate and engage healthily with online content.


Minimum age for social media platforms

To help parents manage their children’s digital presence and ensure age-appropriate online engagement, Parliament proposes a harmonised EU digital minimum age of 16 for access to social media, video-sharing platforms and AI companions, while allowing 13- to 16-year-olds access with parental consent.

Expressing support for the Commission’s work to develop an EU age verification app and the European digital identity (eID) wallet, MEPs insist that age assurance systems must be accurate and preserve minors’ privacy. Such systems do not relieve platforms of their responsibility to ensure their products are safe and age-appropriate by design, they add.

To incentivise better compliance with the EU’s Digital Services Act (DSA) and other relevant laws, MEPs suggest senior managers could be made personally liable in cases of serious and persistent non-compliance, with particular respect to protection of minors and age verification.

[…]

According to the 2025 Eurobarometer, over 90% of Europeans believe action to protect children online is a matter of urgency, not least in relation to social media’s negative impact on mental health (93%), cyberbullying (92%) and the need for effective ways to restrict access to age-inappropriate content (92%).

Member states are starting to take action and responding with measures such as age limits and verification systems.

Source: Children should be at least 16 to access social media, say MEPs | News | European Parliament

Expect to see mandatory surveillance on social media (whatever they define that to be) soon, as it is clearly “risky”.

The problem is real, but age verification is not the way to solve the problem. Rather, it will make it much, much worse as well as adding new problems entirely.

See also: https://www.linkielist.com/?s=age+verification&submit=Search

See also: European Council decides to implement Mass Surveillance and Age Verification through law protecting children from online abuse

Welcome to a new fascist, thought-controlled Europe, heralded by Denmark.

Chat Control: EU lawmakers finally agree on the “voluntary” scanning of your private chats

[…] The EU Council has finally reached an agreement on the controversial Child Sexual Abuse Regulation (CSAR) after more than three years of failed attempts.

Nicknamed Chat Control by its critics, the proposal has kept cryptographers, technologists, encrypted service providers, and privacy experts alike in turmoil since its inception.

Presidency after presidency, the bill has taken many shapes. But its most controversial feature is an obligation for all messaging service providers operating in the EU – including those using end-to-end-encryption – to scan their users’ private chats on the lookout for child sexual abuse material (CSAM).

At the beginning of the month, the Danish Presidency decided to change its approach with a new compromise text that makes the chat scanning voluntary, instead. That turned out to be a winning move, with the proposal managing to reach an agreement in the Council on Wednesday, November 26, 2025.

Privacy experts are unlikely to celebrate, though. The decision came a few days after a group of scientists wrote yet another open letter warning that the latest text still “brings high risks to society.” That’s after other privacy experts deemed the new proposal a “political deception” rather than an actual fix.

The EU Council is now preparing to start negotiations with the European Parliament, hoping to agree on the final terms of the regulation.

What we know about the Council agreement

As per the EU Council announcement, the new law imposes a series of obligations on digital companies. Under the new rules, online service providers will be required to assess how their platforms could be misused and, based on the results, may need to “implement mitigating measures to counter that risk,” the Council notes.

Source: Chat Control: EU lawmakers finally agree on the voluntary scanning of your private chats | TechRadar

A “risk mitigation obligation” can be used to justify anything and to mandate spying through whatever services the EU says pose a “risk”.

Considering the whole proposal was shot down several times in the past years, and even in the past month, pushing it through in a back-door rush is not how a democracy is supposed to function at all. And this is how fascism digs in its iron claws. What is going on in Denmark?

European Council decides to implement Mass Surveillance and Age Verification through law protecting children from online abuse

[…]

Under the new rules, online service providers will be required to assess the risk that their services could be misused for the dissemination of child sexual abuse material or for the solicitation of children. On the basis of this assessment, they will have to implement mitigating measures to counter that risk. Such measures could include making available tools that enable users to report online child sexual abuse, to control what content about them is shared with others and to put in place default privacy settings for children.

Member states will designate national authorities (‘coordinating and other competent authorities’) responsible for assessing these risk assessments and mitigating measures, with the possibility of obliging providers to carry out mitigating measures.

[…]

The Council also wants to make permanent a currently temporary measure that allows companies to – voluntarily – scan their services for child sexual abuse. At present, providers of messaging services, for instance, may voluntarily check content shared on their platforms for online child sexual abuse material,

[Note here: if it is deemed “risky” then the voluntary part is scrubbed and it becomes mandatory. Anything can be called “risky” very easily (just look at the data slurping that goes on in Terms of Services through the text “improving our product”).]

The new law provides for the setting up of a new EU agency, the EU Centre on Child Sexual Abuse, to support the implementation of the regulation.

The EU Centre will assess and process the information supplied by the online providers about child sexual abuse material identified on services, and will create, maintain and operate a database for reports submitted to it by providers. It will further support the national authorities in assessing the risk that services could be used for spreading child sexual abuse material.

The Centre is also responsible for sharing companies’ information with Europol and national law enforcement bodies. Furthermore, it will establish a database of child sexual abuse indicators, which companies can use for their voluntary activities.
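To make the “database of indicators” concrete: an indicator database is essentially a collection of content fingerprints that a service matches uploads against. The sketch below is my own deliberately simplified illustration, not the EU Centre’s actual system: it uses a plain SHA-256 digest as the fingerprint, whereas deployed scanning systems typically use perceptual hashes that survive re-encoding and cropping.

```python
import hashlib

# Hypothetical indicator database: a set of known-content fingerprints.
# In this simplified sketch a fingerprint is a SHA-256 digest; real
# scanning systems use perceptual hashes, which match inexactly.
INDICATOR_DB = {
    # sha256 of b"test", standing in for a flagged piece of content
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Compute the fingerprint of a piece of content."""
    return hashlib.sha256(data).hexdigest()

def matches_indicator(data: bytes) -> bool:
    """Return True if the content's fingerprint is in the indicator DB."""
    return fingerprint(data) in INDICATOR_DB

print(matches_indicator(b"test"))      # True: fingerprint is listed
print(matches_indicator(b"harmless"))  # False: no match
```

Note that with perceptual hashing the match is probabilistic rather than exact, and whoever controls the indicator list controls what gets flagged, which is exactly why the scope of such a database matters.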

Source: Child sexual abuse: Council reaches position on law protecting children from online abuse – Consilium

The article does not mention how you can find out whether someone is a child: that is age verification. Which comes with huge rafts of problems, such as censorship (there goes the LGBTQ crowd!), hacks (Discord) that steal all the government IDs used to verify ages, and of course the ways people find to circumvent age verification (VPNs, which increase internet traffic; meme pictures of Donald Trump), which make them behave more unpredictably, thus harming the very kids this is supposed to protect.

Of course, this law has been shot down several times in the past 3 years by the EU, but that didn’t stop Denmark from finding a way to implement it nonetheless in a back door shotgun kind of way.

Age Verification, Estimation, Assurance, Oh My! A Guide To The Terminology

If you’ve been following the wave of age-gating laws sweeping across the country and the globe, you’ve probably noticed that lawmakers, tech companies, and advocates all seem to be using different terms for what sounds like the same thing. Age verification, age assurance, age estimation, age gating—they get thrown around interchangeably, but they technically mean different things. And those differences matter a lot when we’re talking about your rights, your privacy, your data, and who gets to access information online.

[click the source link below to read the different definitions – ed]

Why This Confusion Matters

Politicians and tech companies love using these terms interchangeably because it obscures what they’re actually proposing. A law that requires “age assurance” sounds reasonable and moderate. But if that law defines age assurance as requiring government ID verification, it’s not moderate at all—it’s mass surveillance. Similarly, when Instagram says it’s using “age estimation” to protect teens, that sounds privacy-friendly. But when their estimation fails and forces you to upload your driver’s license instead, the privacy promise evaporates.

Here’s the uncomfortable truth: most lawmakers writing these bills have no idea how any of this technology actually works. They don’t know that age estimation systems routinely fail for people of color, trans individuals, and people with disabilities. They don’t know that verification systems have error rates. They don’t even seem to understand that the terms they’re using mean different things. The fact that their terminology is all over the place—using “age assurance,” “age verification,” and “age estimation” interchangeably—makes this ignorance painfully clear, and leaves the onus on platforms to choose whichever option best insulates them from liability.

Language matters because it shapes how we think about these systems. “Assurance” sounds gentle. “Verification” sounds official. “Estimation” sounds technical and impersonal, and also admits its inherent imprecision. But they all involve collecting your data and create a metaphysical age gate to the internet. The terminology is deliberately confusing, but the stakes are clear: it’s your privacy, your data, and your ability to access the internet without constant identity checks. Don’t let fuzzy language disguise what these systems really do.

Republished from EFF’s Deeplinks blog.

Source: Age Verification, Estimation, Assurance, Oh My! A Guide To The Terminology | Techdirt

Denmark manages to bypass democracy to implement mass EU surveillance, says it is “voluntary”

The EU states agree on a common position on chat control. Internet services should be allowed to read communication voluntarily, but will not be obliged [*cough – see bold and end of document: Ed*] to do so. We publish the classified negotiating protocol and bill. After the formal decision, the trilogue negotiations begin.

18.11.2025 at 14:03 – Andre Meister – in Surveillance

Presidency of the Council: Danish Minister of Justice Hummelgaard. – CC-BY-NC-ND 4.0 Danish Presidency

The EU states have agreed on a common position on chat control. We publish the bill.

Last week, the Council working group discussed the law. We are once again publishing the classified minutes of the meeting.

Tomorrow, the Permanent Representatives want to officially decide on the position.

Update 19.10.: A Council spokesperson tells us, “The agenda item has been postponed until next week.”

Three years of dispute

For three and a half years, the EU institutions have been arguing over chat control. The Commission wants to oblige Internet services to search their users’ content, without cause, for evidence of criminal offences and to report it to the authorities on suspicion.

Parliament calls this mass surveillance and demands that only unencrypted content from suspects be scanned.

A majority of EU countries want mandatory chat control. However, a blocking minority rejects this. Now the Council has agreed on a compromise: Internet services are not required to carry out chat control, but may do so voluntarily.

Absolute red lines

The Danish Presidency wants to bring the draft law through the Council “as soon as possible” so that the trilogue negotiations can be started in a timely manner. The feedback from the states should be limited to “absolute red lines”.

The majority of states “supported the compromise proposal.” At least 15 spoke out in favour, including Germany and France.

Germany “welcomed both the deletion of the mandatory measures and the permanent anchoring of voluntary measures.”

Italy is also skeptical of voluntary chat control: “We fear that the instrument could also be extended to other crimes, so we have difficulty supporting the proposal.” Politicians have already called for chat control to be extended to other content.

Absolute minimum consensus

Other states called the compromise “an absolute minimum consensus.” They “actually wanted more – especially in the sense of commitments.” Some states “showed themselves clearly disappointed by the cancellations made.”

Spain, in particular, “still considered mandatory measures to be necessary; unfortunately, a comprehensive agreement on this was not possible.” Hungary, too, “saw voluntariness alone as too little.”

Spain, Hungary and Bulgaria proposed “an obligation for providers to carry out detection at least in unencrypted areas.” The Danish Presidency “described the proposal as ambitious, but did not take it up, to avoid further discussion.”

Denmark explicitly pointed to the review clause. Thus, “the possibility of detection orders is kept open at a later date.” Hungary stressed that “this possibility must also be used.”

No obligation

The Danish Presidency had publicly announced that the chat control should not be mandatory, but voluntary.

However, the compromise proposal as formulated was contradictory. The Presidency had deleted the article on mandatory chat control, yet another article said services should also carry out voluntary measures.

Several states have asked whether these formulations “could lead to a de facto obligation.” The Legal Services agreed: “The wording can be interpreted in both directions.” The Presidency of the Council “clarified that the text only had a risk mitigation obligation, but not a commitment to detection.”

The day after the meeting, the presidency of the Council sent out the likely final draft law of the Council. It states explicitly: ‘No provision of this Regulation shall be interpreted as imposing detection obligations on providers’.

Damage and abuse

Mandatory chat control is not the only contentious issue in the planned law. Voluntary chat control is also contested: the European Commission has not been able to demonstrate its proportionality. Many oppose voluntary chat control, including the EU Commission, the European Data Protection Supervisor and the German Data Protection Supervisor.

A number of scientists are critical of the compromise proposal. They do not consider voluntary chat control to be proportionate: “Its benefit is not proven, while the potential for harm and abuse is enormous.”

The law also calls for mandatory age checks. The scientists criticize that age checks “bring with them an inherent and disproportionate risk of serious data breaches and discrimination, without guaranteeing their effectiveness.” The Federal Data Protection Officer also fears a “large-scale abolition of anonymity on the Internet.”

Now comes the trilogue

The EU countries will not discuss these points further. The Danish Presidency “reaffirmed its commitment to the compromise proposal without the Spanish proposals.”

The Permanent Representatives of the EU States will meet next week. In December, the justice and interior ministers meet. These two bodies are to adopt the bill as the official position of the Council.

This is followed by the trilogue. There, the Commission, Parliament and the Council negotiate a compromise between their three separate drafts.

[…]

A “risk mitigation obligation” can be used to justify anything and to mandate spying through whatever services the EU says pose a “risk”.

Source: Translated from EU states agree on voluntary chat control

Considering the whole proposal was shot down several times in the past years, and even in the past month, pushing it through in a back-door rush is not how a democracy is supposed to function at all. And this is how fascism digs in its iron claws. What is going on in Denmark?

For more information on the history of Chat Control click here

EU proposes doing away with constant cookie requests by letting you set the “No” in your browser settings

People will no longer be bombarded by constant requests to accept or reject “cookies” when browsing the internet, under proposed changes to the European Union’s strict data privacy laws.

The pop-up prompts asking internet users to consent to cookies when they visit a website are widely seen as a nuisance, undermining the original privacy intentions of the digital rules.

[I don’t think this undermines anything – cookie consent got rid of a LOT of spying, and everyone now just automatically clicks NO or uses add-ons to do it (well, if you are using Firefox as a browser). The original purpose, stopping companies from spying, has been achieved]

Brussels officials have now tabled changes that would allow people to accept or reject cookies for a six-month period, and potentially set their internet browser to automatically opt-in or out, to avoid being repeatedly asked whether they consent to websites remembering information about their past visits.

Cookies allow websites to keep track of a user’s previous activity, allowing sites to pull up items added to an online shopping cart that were not purchased, or remember whether someone had logged in to an account on the site before, as well as target advertisements.
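A browser-level signal of this kind already exists in the wild: Global Privacy Control, a real HTTP request header (`Sec-GPC: 1`) that browsers such as Firefox can send automatically. A minimal sketch of how a site could honour such a signal, assuming it chooses to (the function name is my own illustration, not part of any standard API):

```python
def tracking_cookies_allowed(request_headers: dict) -> bool:
    """Decide whether tracking cookies may be set, based on a browser-level
    opt-out signal instead of a per-site consent banner.

    "Sec-GPC: 1" is the Global Privacy Control header: the user has opted
    out once, in the browser itself. If the header is absent, the site
    falls back to its normal consent flow.
    """
    return request_headers.get("Sec-GPC") != "1"

print(tracking_cookies_allowed({"Sec-GPC": "1"}))  # False: browser says no
print(tracking_cookies_allowed({}))                # True: no opt-out signal received
```

The EU proposal would effectively give such a browser setting legal weight, so a single “No” set once could replace thousands of banners.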

[…]

Source: EU proposes doing away with constant internet ‘cookies’ requests – The Irish Times