The Linkielist

Linking ideas with the world

Uni of Melbourne used Wi-Fi location data to ID protestors

Australia’s University of Melbourne last year used Wi-Fi location data to identify student protestors.

The University used Wi-Fi to identify students who participated in a July 2024 sit-in protest. As described in a report [PDF] into the matter by the state of Victoria’s Office of the Information Commissioner, the University directed protestors to leave the building they occupied and warned that those who remained could be suspended, disciplined, or reported to police.

The report says 22 chose to remain, and that the University used CCTV and Wi-Fi location data to identify them.

The Information Commissioner found that the use of CCTV to identify protestors did not breach privacy, but found that using Wi-Fi location data did, because the University’s policies lacked detail.

“Given that individuals would not have been aware of why their Wi-Fi location data was collected and how it may be used, they could not exercise an informed choice as to whether to use the Wi-Fi network during the sit-in, and be aware of the possible consequences for doing so,” the report found.

As the investigation into use of location data unfolded, the University changed its policies regarding use of location data. The Office of the Information Commissioner therefore decided not to issue a formal compliance notice, and will monitor the University to ensure it complies with its undertakings.

Source: Australian uni used Wi-Fi location data to ID protestors • The Register

Privacy‑Preserving Age Verification Falls Apart On Contact With Reality

[…] Identity‑proofing creates a privacy bottleneck. Somewhere, an identity provider must verify you. Even if it later mints an unlinkable token, that provider is the weak link—and in regulated systems it will not be allowed to “just delete” your information. As Bellovin puts it:

Regulation implies the ability for governments to audit the regulated entities’ behavior. That in turn implies that logs must be kept. It is likely that such logs would include user names, addresses, ages, and forms of credentials presented.

Then there’s the issue of fraud and duplication of credentials. Accepting multiple credential types increases coverage but also increases abuse; people can and do hold multiple valid IDs:

The fact that multiple forms of ID are acceptable… exacerbates the fraud issue… This makes it impossible to prevent a single person from obtaining multiple primary credentials, including ones for use by underage individuals.

Cost and access will absolutely chill speech. Identity providers are expensive. If users pay, you’ve built a wealth test for lawful speech. If sites pay, the costs roll downhill (fees, ads, data‑for‑access) and coverage narrows to the cheapest providers who may also be more susceptible to breaches:

Operating an IDP is likely to be expensive… If web sites shoulder the cost, they will have to recover it from their users. That would imply higher access charges, more ads (with their own privacy challenges), or both.

Sharing credentials drives mission creep, which creates its own dangers. If a token proves only “over 18,” people will share it (parents to kids, friends to friends). To deter that, providers tie tokens to identities/devices or bundle more attributes—making them more linkable and more revocable:

If the only use of the primary credential is obtaining age-verifying subcredentials, this isn’t much of a deterrent—many people simply won’t care… That, however, creates pressure for mission creep…, including opening bank accounts, employment verification, and vaccination certificates; however, this is also a major point of social control, since it is possible to revoke a primary credential and with it all derived subcredentials.

The end result, then, is that you’re not just attacking privacy, but creating a tool for authoritarian pressure:

Those who are disfavored by authoritarian governments may lose access not just to pornography, but to social media and all of these other services.

He also grounds it in lived reality, with a case study that shows who gets locked out first:

Consider a hypothetical person “Chris”, a non-driving senior citizen living with an adult child in a rural area of the U.S… Apart from the expense—quite possibly non-trivial for a poor family—Chris must persuade their child to then drive them 80 kilometers or more to a motor vehicles office…

There is also the social aspect. Imagine the embarrassment to all of an older parent having to explain to their child that they wish to view pornography.

None of this is an attack on the math. It’s a reminder that deployment reality ruins the cryptographic ideal. There’s more in the paper, but you get the idea.

[…]

Source: Privacy‑Preserving Age Verification Falls Apart On Contact With Reality | Techdirt

Proton releases Lumo GPT 1.1: faster, more advanced, European and actually private

Today we’re releasing a powerful update to Lumo that gives you a more capable privacy-first AI assistant offering faster, more thorough answers with improved awareness of recent events.

Guided by feedback from our community, we’ve been busy upgrading our models and adding GPUs, which we’ll continue to do thanks to the support of our Lumo Plus subscribers. Lumo 1.1 performs significantly better across the board than the first version of Lumo, so you can now use it more effectively for a variety of use cases:

  • Get help planning projects that require multiple steps — it will break down larger goals into smaller tasks
  • Ask complex questions and get more nuanced answers
  • Generate better code — Lumo is better at understanding your requests
  • Research current events or niche topics with better accuracy and fewer hallucinations thanks to improved web search

New cat, new tricks, same privacy

The latest upgrade brings more accurate responses with significantly less need for corrections or follow-up questions. Lumo now handles complex requests much more reliably and delivers the precise results you’re looking for.

In testing, Lumo’s performance has increased across several metrics:

  • Context: 170% improvement in context understanding so it can accurately answer questions based on your documents and data
  • Coding: 40% better ability to understand requests and generate correct code
  • Reasoning: Over 200% improvement in planning tasks, choosing the right tools such as web search, and working through complex multi-step problems

Most importantly, Lumo does all of this while respecting the confidentiality of your chats. Unlike every major AI platform, Lumo is open source and built to be private by design. It doesn’t keep any record of your chats, your conversation history is secured with zero-access encryption so nobody else can see it, and your data is never used to train the models. Lumo is the only AI where your conversations are actually private.

Learn about Lumo privacy

Lumo mobile apps are now open source

Unlike Big Tech AIs that spy on you, Lumo is an open source application that exclusively runs open source models. Open source is especially important in AI because it confirms that the applications and models are not being used nefariously to manipulate responses to fit a political narrative or secretly leak data. While the Lumo web client is already open source, today we are also releasing the code for the mobile apps. In line with Lumo being the most transparent and private AI, we have also published the Lumo security model so you can see how Lumo’s zero-access encryption works and why nobody, not even Proton, can access your conversation history.

Source: Introducing Lumo 1.1 for faster, advanced reasoning | Proton

The EU could be scanning your chats by October 2025 with Chat Control

Denmark kicked off its EU Presidency on July 1, 2025, and, among its first actions, lawmakers swiftly reintroduced the controversial child sexual abuse material (CSAM) scanning bill to the top of the agenda.

Dubbed Chat Control by critics, the bill aims to introduce new obligations for all messaging services operating in Europe to scan users’ chats, even if they’re encrypted.

The proposal, however, has been failing to attract the needed majority since May 2022, with Poland’s Presidency being the last to give up on such a plan.

Denmark is a strong supporter of Chat Control. Now, the new rules could be adopted as early as October 14, 2025, if the Danish Presidency manages to find a middle ground among the member states.

Crucially, according to the latest leaked data published by Patrick Breyer, a former MEP for the German Pirate Party, many countries that said no to Chat Control in 2024 are now undecided, “even though the 2025 plan is even more extreme,” he added.

[…]

Under its first version, all messaging software providers would be required to perform indiscriminate scanning of private messages to look for CSAM – so-called ‘client-side scanning’. The proposal was met with a strong backlash, and the European Court of Human Rights ended up banning all legal efforts to weaken encryption of secure communications in Europe.

In June 2024, Belgium then proposed a new text to target only shared photos, videos, and URLs, with users’ permission. This version didn’t satisfy either the industry or voting EU members due to its coercive nature. Under the Belgian text, users would have to consent to shared material being scanned before encryption in order to keep using the functionality.

Source: The EU could be scanning your chats by October 2025 – here’s everything we know | TechRadar

Pluralistic: “Privacy preserving age verification” is bullshit

[…]

when politicians are demanding that technologists NERD HARDER! to realize their cherished impossibilities.

That’s just happened, and in relation to one of the scariest, most destructive NERD HARDER! tech policies ever to be assayed (a stiff competition). I’m talking about the UK Online Safety Act, which imposes a duty on websites to verify the age of people they communicate with before serving them anything that could be construed as child-inappropriate (a category that includes, e.g., much of Wikipedia):

https://wikimediafoundation.org/news/2025/08/11/wikimedia-foundation-challenges-uk-online-safety-act-regulations/

The Starmer government has, incredibly, developed a passion for internet regulations that are even stupider than Tony Blair’s and David Cameron’s. Requiring people to identify themselves (generally, via their credit cards) in order to look at porn will create a giant database of every kink and fetish of every person in the UK, which will inevitably leak and provide criminals and foreign spies with a kompromat system they can sort by net worth of the people contained within.

This hasn’t deterred Starmer, who insists that if we just NERD HARDER!, we can use things like “zero-knowledge proofs” to create a “privacy-preserving” age verification system, whereby a service can assure itself that it is communicating with an adult without ever being able to determine who it is communicating with.

In support of this idea, Starmer and co like to cite some genuinely exciting and cool cryptographic work on privacy-preserving credential schemes. Now, one of the principal authors of the key papers on these credential schemes, Steve Bellovin, has published a paper that is pithily summed up via its title, “Privacy-Preserving Age Verification—and Its Limitations”:

https://www.cs.columbia.edu/~smb/papers/age-verify.pdf

The tldr of this paper is that Starmer’s idea will not work and cannot work. The research he relies on to defend the technological feasibility of his cherished plan does not support his conclusion.

Bellovin starts off by looking at the different approaches various players have mooted for verifying their users’ age. For example, Google says it can deploy a “behavioral” system that relies on Google surveillance dossiers to make guesses about your age. Google refuses to explain how this would work, but Bellovin sums up several of the well-understood behavioral age estimation techniques and explains why they won’t work. It’s one thing to screw up age estimation when deciding which ad to show you; it’s another thing altogether to do this when deciding whether you can access the internet.

Others say they can estimate your age by using AI to analyze a picture of your face. This is a stupid idea for many reasons, not least of which is that biometric age estimation is notoriously unreliable when it comes to distinguishing, say, 16- or 17-year-olds from 18-year-olds. Nevertheless, there are sitting US Congressmen who not only think this would work – they labor under the misapprehension that this is already going on:

https://pluralistic.net/2023/04/09/how-to-make-a-child-safe-tiktok/

So that just leaves the privacy-preserving credential schemes, especially the Camenisch-Lysyanskaya protocol. This involves an Identity Provider (IDP) that establishes a user’s identity and characteristics using careful document checks and other procedures. The IDP then hands the user a “primary credential” that can attest to everything the IDP knows about the user, and any number of “subcredentials” that only attest to specific facts about that user (such as their age).

These are used in zero-knowledge proofs (ZKP) – a way for two parties to validate that one of them asserts a fact without learning what that fact is in the process (this is super cool stuff). Users can send their subcredentials to a third party, who can use a ZKP to validate them without learning anything else about the user – so you could prove your age (or even just prove that you are over 18 without disclosing your age at all) without disclosing your identity.
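
To make this concrete, here is a deliberately toy Python sketch of a Schnorr-style zero-knowledge proof, made non-interactive with the Fiat-Shamir heuristic. It only demonstrates the prove/verify pattern described above (proving knowledge of a secret without revealing it); a real CL-style credential proves a predicate such as “over 18” over attributes signed by an IDP, which is far more involved. The parameters and session label are illustrative assumptions, not anything from Bellovin’s paper.

```python
import hashlib
import secrets

# Toy Schnorr-style zero-knowledge proof (Fiat-Shamir). Illustrative only:
# the parameters are tiny and NOT secure, and the statement proved is
# "I know the secret behind this public value", not an age predicate.
p = 2**127 - 1   # a Mersenne prime, fine for a demonstration
g = 3            # fixed public base

def prove(secret: int, session: bytes) -> tuple[int, int]:
    """Prover: show knowledge of `secret`, where public = g**secret % p."""
    r = secrets.randbelow(p - 1)
    commitment = pow(g, r, p)
    challenge = int.from_bytes(
        hashlib.sha256(commitment.to_bytes(16, "big") + session).digest(), "big"
    )
    response = (r + challenge * secret) % (p - 1)
    return commitment, response

def verify(public: int, session: bytes, commitment: int, response: int) -> bool:
    """Verifier: check the proof without ever learning the secret."""
    challenge = int.from_bytes(
        hashlib.sha256(commitment.to_bytes(16, "big") + session).digest(), "big"
    )
    return pow(g, response, p) == (commitment * pow(public, challenge, p)) % p

secret = secrets.randbelow(p - 1)
public = pow(g, secret, p)
proof = prove(secret, b"age-check-session-001")
assert verify(public, b"age-check-session-001", *proof)
```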

There’s some good news for implementing CL on the web: rather than developing a transcendentally expensive and complex new system for these credential exchanges and checks, CL can piggyback on the existing Public Key Infrastructure (PKI) that powers your browser’s ability to have secure sessions when you visit a website with https:// in front of the address (instead of just http://).

However, doing so poses several difficulties, which Bellovin enumerates under a usefully frank section header: “INSURMOUNTABLE OBSTACLES.”

The most insurmountable of these obstacles is getting set up with an IDP in the first place – that is, proving who you are to some agency, but only one such agency (so you can’t create two primary credentials and share one of them with someone underage). Bellovin cites Supreme Court cases about voter ID laws and the burdens they impose on people who are poor, old, young, disabled, rural, etc.

Fundamentally, it can be insurmountably hard for a lot of people to get, say, a driver’s license, or any other singular piece of ID that they can provide to an IDP in order to get set up on the system.

The usual answer for this is for IDPs to allow multiple kinds of ID. This does ease the burden on users, but at the expense of creating fatal weaknesses in the system: if you can set up an identity with multiple kinds of ID, you can visit different IDPs and set up an ID with each (just as many Americans today have driver’s licenses from more than one state).

The next obstacle is “user challenges,” like the problem of households with shared computers, or computers in libraries, hotels, community centers and other public places. The only effective way to handle these is to create (expensive) online credential stores, which are likely to be out of reach of the poor and disadvantaged people who disproportionately rely on public or shared computers.

Next are the “economic issues”: this stuff is expensive to set up and maintain, and someone’s gotta pay for it. We could ask websites that offer kid-inappropriate content to pay for it, but that sets up an irreconcilable conflict of interest. These websites are going to want to minimize their costs, and everything they can do to reduce costs will make the system unacceptably worse. For example, they could choose only to set up accounts with IDPs that are local to the company that operates the server, meaning that anyone who lives somewhere else and wants to access that website is going to have to somehow get certified copies of e.g. their birth certificate and driver’s license to IDPs on the other side of the planet. The alternative to having websites foot the bill for this is asking users to pay for it – meaning that, once again, we exclude poor people from the internet.

Finally, there’s “governance”: who runs this thing? In practice, the security and privacy guarantees of the CL protocol require two different kinds of wholly independent institutions: identity providers (who verify your documents), and certificate authorities (who issue cryptographic certificates based on those documents). If these two functions take place under one roof, the privacy guarantees of the system immediately evaporate.

An IDP’s most important role is verifying documents and associating them with a specific person. But not all IDPs will be created equal, and people who wish to cheat the system will gravitate to the worst IDPs. However, lots of people who have no nefarious intent will also use these IDPs, merely because they are close by, or popular, or were selected at random. A decision to strike off an IDP and rescind its verifications will force lots of people – potentially millions of people – to start over with the whole business of identifying themselves, during which time they will be unable to access much of the web. There’s no practical way for the average person to judge whether an IDP they choose is likely to be found wanting in the future.

So we can regulate IDPs, but who will do the regulation? Age verification laws affect people outside of a government’s national territory – anyone seeking to access content on a webserver falls under age verification’s remit. Remember, IDPs handle all kinds of sensitive data: do you want Russia, say, to have a say in deciding who can be an IDP and what disclosure rules you will have to follow?

To regulate IDPs (and certificate authorities), these entities will have to keep logs, which further compromises the privacy guarantees of the CL protocol.

Looming over all of this is a problem with building the CL protocol on regulated entities: CL is envisioned as a way to do all kinds of business, from opening a bank account to proving your vaccination status or your right to work or receive welfare. Authoritarian governments who order primary credential revocations of their political opponents could thoroughly and terrifyingly “unperson” them at the stroke of a pen.

The paper’s conclusions provide a highly readable summary of these issues, which constitute a stinging rebuke to anyone contemplating age-verification schemes. These go well beyond the UK, and are in the works in Canada, Australia, the EU, Texas and Louisiana.

Age verification is an impossibility, and an impossibly terrible idea with impossibly vast consequences for privacy and the open web, as my EFF colleague Jason Kelley explained on the Malwarebytes podcast:

https://www.malwarebytes.com/blog/podcast/2025/08/the-worst-thing-for-online-rights-an-age-restricted-grey-web-lock-and-code-s06e16

Politicians – even nontechnical ones – can make good tech policy, provided they take expert feedback seriously (and distinguish it from self-interested industry lobbying).

When it comes to tech policy, wanting it badly is not enough. The fact that it would be really cool if we could get technology to do something has no bearing on whether we can actually get technology to do that thing. NERD HARDER! isn’t a policy, it’s a wish.

Wish in one hand and shit in the other and see which one will be full first:

https://www.reddit.com/r/etymology/comments/oqiic7/studying_the_origins_of_the_phrase_wish_in_one/

Source: Pluralistic: “Privacy preserving age verification” is bullshit (14 Aug 2025) – Pluralistic: Daily links from Cory Doctorow

UK passport database images used in facial recognition scans

Privacy groups report a surge in UK police facial recognition scans of databases secretly stocked with passport photos, without parliamentary oversight.

Big Brother Watch says the UK government has allowed images from the country’s passport and immigration databases to be made available to facial recognition systems, without informing the public or parliament.

The group claims the passport database contains around 58 million headshots of Brits, plus a further 92 million made available from sources such as the immigration database, visa applications, and more.

By way of comparison, the Police National Database contains circa 20 million photos of those who have been arrested by, or are at least of interest to, the police.

In a joint statement, Big Brother Watch, its director Silkie Carlo, Privacy International, and its senior technologist Nuno Guerreiro de Sousa, described the databases and lack of transparency as “Orwellian.” They have also written to both the Home Office and the Metropolitan Police, calling for a ban on the practice.

The comments come after Big Brother Watch submitted Freedom of Information requests, which revealed a significant uptick in police scanning the databases in question as part of the force’s increasing facial recognition use.

The number of searches by 31 police forces against the passport databases rose from two in 2020 to 417 by 2023, and scans using the immigration database photos rose from 16 in 2023 to 102 the following year.

Carlo said: “This astonishing revelation shows both our privacy and democracy are at risk from secretive AI policing, and that members of the public are now subject to the inevitable risk of misidentifications and injustice. Police officers can secretly take photos from protests, social media, or indeed anywhere and seek to identify members of the public without suspecting us of having committed any crime.

“This is a historic breach of the right to privacy in Britain that must end. We’ve taken this legal action to defend the rights of tens of millions of innocent people in Britain.”

[…]

Recent data from the Met attempted to imbue a sense of confidence in facial recognition, as the number of arrests the technology facilitated passed the 1,000 mark, the force said in July.

However, privacy campaigners were quick to point out that this accounted for just 0.15 percent of the total arrests in London since 2020. They suggested that despite the shiny 1,000 number, this did not represent a valuable return on investment in the tech.

Alas, the UK has not given up on its pursuit of greater surveillance powers. Prime Minister Keir Starmer, a former human rights lawyer, is a big fan of FR, having said last year that it was the answer to preventing riots like the ones that broke out across the UK following the Southport murders.

Source: UK passport database images used in facial recognition scans • The Register

EU Chat Control Plan Gains Support Again, Threatens Encryption, mass surveillance, age verification

A controversial European Union proposal dubbed “Chat Control” is regaining momentum, with 19 out of 27 EU member states reportedly backing the measure.

The plan would mandate that messaging platforms, including WhatsApp, Signal and Telegram, must scan every message, photo and video sent by users starting in October, even if end-to-end encryption is in place, popular French tech blogger Korben wrote on Monday.

Denmark reintroduced the proposal on July 1, the first day of its EU Council presidency. France, once opposed, is now in favor, Korben said, citing Patrick Breyer, a former member of the European Parliament for Germany and the European Pirate Party.

Belgium, Hungary, Sweden, Italy and Spain are also in favor, while Germany remains undecided. However, if Berlin joins the majority, a qualified council vote could push the plan through by mid-October, Korben said.

A qualified majority in the EU Council is achieved when two conditions are met. First, at least 55% of member states, meaning 15 out of 27, must vote in favor. Second, those countries must represent at least 65% of the EU’s total population.
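
As a quick illustration of that double-majority arithmetic, here is a small Python sketch; the function name is hypothetical and callers would supply official population figures, none of which are hard-coded here.

```python
def qualified_majority(in_favour: set[str], populations: dict[str, int]) -> bool:
    """EU Council double-majority check: at least 55% of the member states in
    `populations` (15 of 27) must back the measure, and those states must also
    hold at least 65% of the total EU population. `populations` maps every
    member state to its population (official Eurostat figures in real use)."""
    states_ok = len(in_favour) >= 0.55 * len(populations)
    backing_population = sum(populations[state] for state in in_favour)
    population_ok = backing_population >= 0.65 * sum(populations.values())
    return states_ok and population_ok
```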

EU Chat Control bill finds support. Source: Pavol Luptak

Pre-encryption scanning on devices

Instead of weakening encryption, the plan seeks to implement client-side scanning, meaning software embedded in users’ devices that inspects content before it is encrypted. “A bit like if the Post Office came to read all your letters in your living room before you put them in the envelope,” Korben said.

He added that the real target isn’t criminals, who use encrypted or decentralized channels, but ordinary users whose private conversations would now be open to algorithmic scrutiny.

The proposal cites the prevention of child sexual abuse material (CSAM) as its justification. However, it would result in “mass surveillance by means of fully automated real-time surveillance of messaging and chats and the end of privacy of digital correspondence,” Breyer wrote.

Beyond scanning, the package includes mandatory age verification, effectively removing anonymity from messaging platforms. Digital freedom groups are asking citizens to contact their MEPs, sign petitions and push back before the law becomes irreversible.

[…]

Source: EU Chat Control Plan Gains Support, Threatens Encryption

Age verification is going horribly wrong in the UK and mass surveillance threatens freedom of thought, something we fortunately still have in the EU. This must be stopped.

Meta eavesdropped on period-tracker app’s users, SF jury rules

Meta lost a major privacy trial on Friday, with a jury in San Francisco ruling that the Menlo Park giant had eavesdropped on the users of the popular period-tracking app Flo. The plaintiff’s lawyers who sued Meta are calling this a “landmark” victory — the tech company contends that the jury got it all wrong.

The case goes back to 2021, when eight women sued Flo and a group of other tech companies, including Google and Facebook, now known as Meta. The stakes were extremely personal. Flo asked users about their sex lives, mental health and diets, and guided them through menstruation and pregnancy. Then, the women alleged, Flo shared pieces of that data with other companies. The claims were largely based on a 2019 Wall Street Journal story and a 2021 Federal Trade Commission investigation.

Google, Flo and the analytics company Flurry, which was also part of the lawsuit, reached settlements with the plaintiffs, as is common in class action lawsuits about tech privacy. But Meta stuck it out through the entire trial and lost.

[…]

Their complaint also pointed to Facebook’s terms for its business tools, which said the company used so-called “event data” to personalize ads and content.

In a 2022 filing, the tech giant admitted that Flo used Facebook’s kit during this period and that the app sent data connected to “App Events.” But Meta denied receiving intimate information about users’ health.

Nonetheless, the jury ruled against Meta. Along with the eavesdropping decision, the jury determined that Flo’s users had a reasonable expectation they weren’t being overheard or recorded, and ruled that Meta didn’t have consent to eavesdrop or record. The unanimous verdict was that the massive company violated the California Invasion of Privacy Act.

The jury’s ruling could have far-reaching effects. Per a June filing about the case’s class action status, more than 3.7 million people in the United States registered for Flo between November 2016 and February 2019. Those potential claimants are expected to be updated via email and on a case website; it’s not yet clear what any payout from the trial or settlements might be.

[…]

Source: Meta eavesdropped on period-tracker app’s users, SF jury rules

Didn’t Take Long To Reveal The UK’s Online Safety Act Is Exactly The Privacy-Crushing Failure Everyone Warned About

[…] the real kicker is what content is now being gatekept behind invasive age verification systems. Users in the UK now need to submit a selfie or government ID to access:

[…]

Yes, you read that right. A law supposedly designed to protect children now requires victims of sexual assault to submit government IDs to access support communities. People struggling with addiction must undergo facial recognition scans to find help quitting drinking or smoking. The UK government has somehow concluded that access to basic health information and peer support networks poses such a grave threat to minors that it justifies creating a comprehensive surveillance infrastructure around it.

[…]

And this is all after a bunch of other smaller websites and forums shut down earlier this year when other parts of the law went into effect.

This is exactly what happens when you regulate the internet as if it’s all just Facebook and Google. The tech giants can absorb the compliance costs, but everyone else gets crushed.

The only websites with the financial capacity to work around the government’s new regulations are the ones causing the problems in the first place. And now Meta, which already has a monopoly on a number of near-essential online activities (from local sales to university group chats), is reaping the benefits.

[…]

The age verification process itself is a privacy nightmare wrapped in security theater. Users are being asked to upload selfies that get run through facial recognition algorithms, or hand over copies of their government-issued IDs to third-party companies. The facial recognition systems are so poorly implemented that people are easily fooling them with screenshots from video games—literally using images from the video game Death Stranding. This isn’t just embarrassing; it reveals the fundamental security flaw at the heart of the entire system. If these verification methods can’t distinguish between a real person and a video game character, what confidence should we have in their ability to protect the sensitive biometric data they’re collecting?

But here’s the thing: even when these systems “work,” they’re creating massive honeypots of personal data. As we’ve seen repeatedly, companies collecting biometric data and ID verification inevitably get breached, and suddenly intimate details about people’s online activity become public. Just ask the users of Tea, a women’s dating safety app that recently exposed thousands of users’ verification selfies after requiring facial recognition for “safety.”

The UK government’s response to widespread VPN usage has been predictably authoritarian. First, they insisted nothing would change:

“The Government has no plans to repeal the Online Safety Act, and is working closely with Ofcom to implement the Act as quickly and effectively as possible to enable UK users to benefit from its protections.”

But then, Tech Secretary Peter Kyle deployed the classic authoritarian playbook: dismissing all criticism as support for child predators. This isn’t just intellectually dishonest—it’s a deliberate attempt to shut down legitimate policy debate by smearing critics as complicit in child abuse. It’s particularly galling given that the law Kyle is defending will do absolutely nothing to stop actual predators, who will simply migrate to unregulated platforms or use the same VPNs that law-abiding citizens are now flocking to.

[…]

Meanwhile, the actual harms it purports to address? Those remain entirely unaddressed. Predators will simply move to unregulated platforms, encrypted messaging, or services that don’t comply. Or they’ll just use VPNs. The law creates the illusion of safety while actually making everyone less secure.

This is what happens when politicians decide to regulate technology they don’t understand, targeting problems they can’t define, with solutions that don’t work. The UK has managed to create a law so poorly designed that it simultaneously violates privacy, restricts freedom, harms small businesses, and completely fails at its stated goal of protecting children.

And all of this was predictable. Hell, it was predicted. Civil society groups, activists, legal experts, all warned of these results and were dismissed by the likes of Peter Kyle as supporting child predators.

[…]

A petition set up on the UK government’s website demanding a repeal of the entire OSA received many hundreds of thousands of signatures within days. The government has already brushed it off with more nonsense, promising that the enforcer of the law, Ofcom, “will take a sensible approach to enforcement with smaller services that present low risk to UK users, only taking action where it is proportionate and appropriate, and will focus on cases where the risk and impact of harm is highest.”

But that’s a bunch of vague nonsense that doesn’t take into account that no platform wants to be on the receiving end of such an investigation, and thus will take these overly aggressive steps to avoid scrutiny.

[…]

What makes this particularly tragic is that there were genuine alternatives. Real child safety measures—better funding for mental health support, improved education programs, stronger privacy protections that don’t require mass surveillance—were all on the table. Instead, the UK chose the path that maximizes government control while minimizing actual safety.

The rest of the world should take note.

Source: Didn’t Take Long To Reveal The UK’s Online Safety Act Is Exactly The Privacy-Crushing Failure Everyone Warned About

Public ChatGPT Queries Are Getting Indexed By Google and Other Search Engines (update: fixed!)

An anonymous reader quotes a report from TechCrunch: It’s a strange glimpse into the human mind: If you filter search results on Google, Bing, and other search engines to only include URLs from the domain “https://chatgpt.com/share,” you can find strangers’ conversations with ChatGPT. Sometimes, these shared conversation links are pretty dull — people ask for help renovating their bathroom, understanding astrophysics, and finding recipe ideas. In another case, one user asks ChatGPT to rewrite their resume for a particular job application (judging by this person’s LinkedIn, which was easy to find based on the details in the chat log, they did not get the job). Someone else is asking questions that sound like they came out of an incel forum. Another person asks the snarky, hostile AI assistant if they can microwave a metal fork (for the record: no), but they continue to ask the AI increasingly absurd and trollish questions, eventually leading it to create a guide called “How to Use a Microwave Without Summoning Satan: A Beginner’s Guide.”

ChatGPT does not make these conversations public by default. A conversation would be appended with a “/share” URL only if the user deliberately clicks the “share” button on their own chat and then clicks a second “create link” button. The service also declares that “your name, custom instructions, and any messages you add after sharing stay private.” After clicking through to create a link, users can toggle whether or not they want that link to be discoverable. However, users may not anticipate that other search engines will index their shared ChatGPT links, potentially betraying personal information (my apologies to the person whose LinkedIn I discovered).

According to OpenAI, these chats were indexed as part of an experiment. “ChatGPT chats are not public unless you choose to share them,” an OpenAI spokesperson told TechCrunch. “We’ve been testing ways to make it easier to share helpful conversations, while keeping users in control, and we recently ended an experiment to have chats appear in search engine results if you explicitly opted in when sharing.”

A Google spokesperson also weighed in, telling TechCrunch that the company has no control over what gets indexed. “Neither Google nor any other search engine controls what pages are made public on the web. Publishers of these pages have full control over whether they are indexed by search engines.”

Source: Public ChatGPT Queries Are Getting Indexed By Google and Other Search Engines

UK’s most tattooed man blocked from accessing porn online by new rules

Britain’s most tattooed man has a lot more time on his hands and not a lot else thanks to new porn laws.

The King of Ink says facial recognition tech has made it harder to chat to webcam girls, after sites started mistaking his tattooed face for a mask.

The new rules came into force last week, introducing stricter checks under Ofcom’s children’s codes.

The King of Ink, as he’s legally known, said: ‘Some of the websites are asking for picture verification, like selfies, and it’s not recognising my face.

‘It’s saying “remove your mask” because the technology is made so you can’t hold up a picture to the camera or wear a mask.

‘Would this also be the case for someone who is disfigured? They should have thought of this from day one.’

The businessman and entrepreneur, from Stechford, Birmingham, feels discriminated against on the basis of his permanent identity.

The tattoo enthusiast says his heavily tattooed face is a permanent part of his identity (Picture: @kingofinklandkingbodyart)

‘It’s as important as the name really and I changed my name legally,’ he said.

‘Without a name you haven’t got an identity, and it’s the same with a face.

[…]

Source: UK’s most tattooed man blocked from accessing porn online by new rules | News UK | Metro News

So many ways to circumvent it, so many ways it breaks, and really, age verification’s only winners are the tech companies that people are forced to pay money to.

Google AI is watching — how to turn off Gemini on Android

[…]

Why you shouldn’t trust Gemini with your data

Gemini promises to simplify how you interact with your Android — fetching emails, summarizing meetings, pulling up files. But behind that helpful facade is an unprecedented level of centralized data collection, powered by a company known for privacy washing and misleading users about how their data is used, and that was hit with $2.9 billion in fines in 2024 alone, mostly for privacy violations and antitrust breaches.

Other people may see your sensitive information

Even more concerning, human reviewers may process your conversations. While Google claims these chats are disconnected from your Google account before review, that doesn’t mean much when a simple prompt like “Show me the email I sent yesterday” might return personal data like your name and phone number.

Your data may be shared beyond Google

Gemini may also share your data with third-party services. When Gemini interacts with other services, your data gets passed along and processed under their privacy policies, not just Google’s. Right now, Gemini mostly connects with Google services, but integrations with apps like WhatsApp and Spotify are already showing up. Once your data leaves Google, you cannot control where it goes or how long it’s kept.

The July 2025 update keeps Gemini connected without your consent

Before July, turning off Gemini Apps Activity automatically disabled all connected apps, so you couldn’t use Gemini to interact with other services unless you allowed data collection for AI training and human review. But Google’s July 7 update changed this behavior and now keeps Gemini connected to certain services — such as Phone, Messages, WhatsApp, and Utilities — even if activity tracking is off.

While this might sound like a privacy-conscious change — letting you use Gemini without contributing to AI training — it still raises serious concerns. Google has effectively preserved full functionality and ongoing access to your data, even after you’ve opted out.

Can you fully disable Gemini on Android?

No, and that’s by design.

[…]

How to turn off Gemini AI on Android

  1. Open the Gemini app on your Android.
  2. Tap your profile icon in the top-right corner.
  3. Go to Gemini Apps Activity*.
  4. Tap Turn off or Turn off and delete activity, and follow the prompts.
  5. Select your profile icon again and go to Apps**.
  6. Tap the toggle switch to prevent Gemini from interacting with Google apps and third-party services.

*Gemini Apps Activity is a setting that controls whether your interactions with Gemini are saved to your Google account and used to improve Google’s AI systems. When it’s on, your conversations may be reviewed by humans, stored for up to 3 years, and used for AI training. When it’s off, your data isn’t used for AI training, but it’s still stored for up to 72 hours so Google can process your requests and feedback.

**Apps are the Google apps and third-party services that Gemini can access to perform tasks on your behalf — like reading your Gmail, checking your Google Calendar schedule, retrieving documents from Google Drive, playing music via Spotify, or sending messages on your behalf via WhatsApp. When Gemini is connected to these apps, it can access your personal content to fulfill prompts, and that data may be processed by Google or shared with the third-party app according to their own privacy policies.

Source: Google AI is watching — how to turn off Gemini on Android | Proton

Google Is Rolling Out Its AI Age Verification to More Services, and I’m Skeptical

Yesterday, I wrote about how YouTube is now using AI to guess your age. The idea is this: Rather than rely on the age attached to your account, YouTube analyzes your activity on its platform, and makes a determination based on how your activity corresponds to that of other users. If the AI thinks you’re an adult, you can continue on; if it thinks your behavior aligns with that of a teenage user, it’ll put restrictions and protections on your account.

Now, Google is expanding its AI age verification tools beyond just its video streaming platform, to other Google products as well. As with YouTube, Google is trialing this initial rollout with a small pool of users, and based on its results, will expand the test to more users down the line. But over the next few weeks, your Google Account may be subject to this new AI, whose only goal is to estimate how old you are.

That AI is trained to look for patterns of behavior across Google products associated with users under the age of 18. That includes the categories of information you might be searching for, or the types of videos you watch on YouTube. Google’s a little cagey on the details, but suffice it to say that the AI is likely snooping through most, if not all, of what you use Google and its products for.

Restrictions and protections on teen Google accounts

We do know some of the restrictions and protections Google plans to implement when it detects a user is under 18 years old. As I reported yesterday, that involves turning on YouTube’s Digital Wellbeing tools, such as reminders to stop watching videos, and, if it’s late, encouragements to go to bed. YouTube will also limit repetitive views of certain types of content.

In addition to these changes to YouTube, you’ll also find you can no longer access Timeline in Maps. Timeline saves your Google Maps history, so you can effectively travel back through time and see where you’ve been. It’s a cool feature, but Google restricts access to users 18 years of age or older. So, if the AI detects you’re underage, no Timeline for you.

[…]

Source: Google Is Rolling Out Its AI Age Verification to More Services, and I’m Skeptical

Of course there is no mention of how to ask for recourse if the AI gets it wrong.

After the UK, online age verification is landing in the EU

Denmark, Greece, Spain, France, and Italy are the first to test the technical solution unveiled by the European Commission on July 14, 2025.

The announcement came less than two weeks before the UK enforced mandatory age verification checks on July 25. These have so far sparked concerns about the privacy and security of British users, fueling a spike in VPN usage.

[…]

The introduction of this technical solution is a key step in implementing children’s online safety rules under the Digital Services Act (DSA).

Lawmakers say this solution seeks to set “a new benchmark for privacy protection” in age verification.

That’s because online services will only receive proof that the user is 18+, without any personal details attached.

Further work on the integration of zero-knowledge proofs is also ongoing, with the full implementation of mandatory checks in the EU expected to be enforced in 2026.

[…]

Starting from Friday, July 25, millions of Britons will need to be ready to prove their age before accessing certain websites or content.

Under the Online Safety Act, sites displaying adult-only content must prevent minors from accessing their services via robust age checks.

Social media, dating apps, and gaming platforms are also expected to verify their users’ age before showing them so-called harmful content.

[…]

The vagueness of what constitutes harmful content, as well as the privacy and security risks linked with some of these age verification methods, has attracted criticism among experts, politicians, and privacy-conscious citizens who fear a negative impact on people’s digital rights.

While the EU approach seems better on paper, it remains to be seen how the age verification scheme will ultimately be enforced.

[…]

Source: After the UK, online age verification is landing in the EU | TechRadar

And so comes the EU spying on our browsing habits, telling us what is and isn’t good for us to see. I can make my own mind up, thank you. How annoying that I will be rate-limited on whatever VPN I end up using.

Echelon Exercise Bikes Lose Features, must phone home to work at all after Firmware Update

[…] It seems like a simple concept that everyone should be able to agree to: if I buy a product from you that does x, y, and z, you don’t get to remove x, y, or z remotely after I’ve made that purchase. How we’ve gotten to a place where companies can simply remove, or paywall, product features without recourse for the customer they essentially bait and switched is beyond me.

But it keeps happening. The most recent example of this is with Echelon exercise bikes. Those bikes previously shipped to paying customers with all kinds of features for ride metrics and connections to third-party apps and services without anything further needed from the user. That all changed recently when a firmware update suddenly forced an internet connection and a subscription to a paid app to make any of that work.

As explained in a Tuesday blog post by Roberto Viola, who develops the “QZ (qdomyos-zwift)” app that connects Echelon machines to third-party fitness platforms, like Peloton, Strava, and Apple HealthKit, the firmware update forces Echelon machines to connect to Echelon’s servers in order to work properly. A user online reported that as a result of updating his machine, it is no longer syncing with apps like QZ, and he is unable to view his machine’s exercise metrics in the Echelon app without an Internet connection.

Affected Echelon machines reportedly only have full functionality, including the ability to share real-time metrics, if a user has the Echelon app active and if the machine is able to reach Echelon’s servers.

Want to know how fast you’re going on the bike you’re sitting upon? That requires an internet connection. Want to get a sense of how you performed on your ride on the bike? That requires an internet connection. And if Echelon were to go out of business? Then your bike just no longer works beyond the basic function of pedaling it.

And the ability to use third-party apps is reportedly just, well, gone.

For some owners of Echelon equipment, QZ, which is currently rated as the No. 9 sports app on Apple’s App Store, has been central to their workouts. QZ connects the equipment to platforms like Zwift, which shows people virtual, scenic worlds while they’re exercising. It has also enabled new features for some machines, like automatic resistance adjustments. Because of this, Viola argued in his blog that QZ has “helped companies grow.”

“A large reason I got the [E]chelon was because of your app and I have put thousands of miles on the bike since 2021,” a Reddit user told the developer on the social media platform on Wednesday.

Instead of happily accepting that someone out there is making its product more attractive and valuable, Echelon is going for some combination of overt control and the desire for customer data. Data which will be used, of course, for marketing purposes.

There’s also value in customer data. Getting more customers to exercise with its app means Echelon may gather more data for things like feature development and marketing.

What you won’t hear anywhere, at least that I can find, is any discussion of the ability to return or get refunds for customers who bought these bikes when they did things that they no longer will do after the fact. That’s about as clear a bait and switch type of a scenario as you’re likely to find.

Unfortunately, with the FTC’s Bureau of Consumer Protection being run by just another Federalist Society imp, it’s unlikely that anything material will be done to stop this sort of thing.

Source: Exercise Bike Company Yanks Features Away From Purchased Bikes Via Firmware Update | Techdirt

WhoFi: Unique ‘fingerprint’ based on Wi-Fi interactions allows reidentification of people being observed

Researchers in Italy have developed a way to create a biometric identifier for people based on the way the human body interferes with Wi-Fi signal propagation.

The scientists claim this identifier, a pattern derived from Wi-Fi Channel State Information, can re-identify a person in other locations most of the time when a Wi-Fi signal can be measured. Observers could therefore track a person as they pass through signals sent by different Wi-Fi networks – even if they’re not carrying a phone.

In the past decade or so, scientists have found that Wi-Fi signals can be used for various sensing applications, such as seeing through walls, detecting falls, sensing the presence of humans, and recognizing gestures including sign language.

Following the approval of the IEEE 802.11bf specification in 2020, the Wi-Fi Alliance began promoting Wi-Fi Sensing, positioning Wi-Fi as something more than a data transit mechanism.

The researchers – Danilo Avola, Daniele Pannone, Dario Montagnini, and Emad Emam, from La Sapienza University of Rome – call their approach “WhoFi”, as described in a preprint paper titled, “WhoFi: Deep Person Re-Identification via Wi-Fi Channel Signal Encoding.”

(The authors presumably didn’t bother checking whether the WhoFi name was taken. But an Oklahoma-based provider of online community spaces shares the same name.)

Who are you, really?

Re-identification, the researchers explain, is a common challenge in video surveillance. It’s not always clear when a subject captured on video is the same person recorded at another time and/or place.

Re-identification doesn’t necessarily reveal a person’s identity. Instead, it is just an assertion that the same surveilled subject appears in different settings. In video surveillance, this might be done by matching the subject’s clothes or other distinct features in different recordings. But that’s not always possible.

The Sapienza computer scientists say Wi-Fi signals offer superior surveillance potential compared to cameras because they’re not affected by light conditions, can penetrate walls and other obstacles, and they’re more privacy-preserving than visual images.

“The core insight is that as a Wi-Fi signal propagates through an environment, its waveform is altered by the presence and physical characteristics of objects and people along its path,” the authors state in their paper. “These alterations, captured in the form of Channel State Information (CSI), contain rich biometric information.”

CSI in the context of Wi-Fi devices refers to information about the amplitude and phase of electromagnetic transmissions. These measurements, the researchers say, interact with the human body in a way that results in person-specific distortions. When processed by a deep neural network, the result is a unique data signature.

Researchers proposed a similar technique, dubbed EyeFi, in 2020, and asserted it was accurate about 75 percent of the time.

The Rome-based researchers who proposed WhoFi claim their technique makes accurate matches on the public NTU-Fi dataset up to 95.5 percent of the time when the deep neural network uses the transformer encoding architecture.
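
For a sense of what such a pipeline might look like, here is a hypothetical PyTorch sketch (not the authors’ code): a transformer encoder turns a sequence of CSI amplitude and phase readings into a fixed-length signature, and two captures are judged to be the same person when their signatures are similar enough. The input shapes, model sizes, and the 0.8 similarity threshold are all assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of the WhoFi idea: encode a sequence of Wi-Fi Channel
# State Information (CSI) frames into a fixed-length "signature" embedding,
# then compare signatures by cosine similarity for re-identification.
class CSIEncoder(nn.Module):
    def __init__(self, n_subcarriers=114, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        # Each time step carries amplitude + phase for every subcarrier.
        self.input_proj = nn.Linear(2 * n_subcarriers, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, csi):  # csi: (batch, time, 2 * n_subcarriers)
        h = self.encoder(self.input_proj(csi))
        # Mean-pool over time and L2-normalise to get a signature vector.
        return F.normalize(h.mean(dim=1), dim=-1)

model = CSIEncoder()
capture_a = torch.randn(1, 50, 228)  # 50 CSI frames captured at location A
capture_b = torch.randn(1, 50, 228)  # 50 CSI frames captured at location B
sig_a, sig_b = model(capture_a), model(capture_b)
same_person = F.cosine_similarity(sig_a, sig_b).item() > 0.8  # threshold assumed
```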

“The encouraging results achieved confirm the viability of Wi-Fi signals as a robust and privacy-preserving biometric modality, and position this study as a meaningful step forward in the development of signal-based Re-ID systems,” the authors say.

Source: WhoFi: Unique ‘fingerprint’ based on Wi-Fi interactions • The Register

Google to Gemini Users: We’re Going to Look at Your Texts Whether You Like It or Not

[…] As highlighted in a Reddit post, Google recently sent out an email to some Android users informing them that Gemini will now be able to “help you use Phone, Messages, WhatsApp, and Utilities on your phone whether your Gemini Apps Activity is on or off.” That change, according to the email, will take place on July 7. In short, that sounds—at least on the surface—like whether you have opted in or out, Gemini has access to all of those very critical apps on your device.

Google email about Gemini privacy.
© Reddit / Screenshot by Gizmodo

Google continues in the email, which was screenshotted by Android Police, by stating that “if you don’t want to use these features, you can turn them off in Apps settings page,” but doesn’t elaborate on where to find that page or what exactly will be disabled if you avail yourself of that setting option. Notably, when App Activity is enabled, Google stores information on your Gemini usage (inputs and responses, for example) for up to 72 hours, and some of that data may actually be reviewed by a human. That’s all to say that enabling Gemini access to those critical apps by default may be a bridge too far for some who are worried about protecting their privacy or wary of AI in general.

[…]

The worst part is, if we’re not careful, all of that information might end up being collected without our consent, or at least without our knowledge. I don’t know about you, but as much as I want AI to order me a cab, I think keeping my text messages private is a higher priority.

Source: Google to Gemini Users: We’re Going to Look at Your Texts Whether You Like It or Not

Judge Denies Creating ‘Mass Surveillance Program’ Harming All ChatGPT Users after ordering all chats (including “deleted” ones) be kept indefinitely

An anonymous reader quotes a report from Ars Technica: After a court ordered OpenAI to “indefinitely” retain all ChatGPT logs, including deleted chats, of millions of users, two panicked users tried and failed to intervene. The order sought to preserve potential evidence in a copyright infringement lawsuit raised by news organizations. In May, Judge Ona Wang, who drafted the order, rejected the first user’s request (PDF) on behalf of his company simply because the company should have hired a lawyer to draft the filing. But more recently, Wang rejected (PDF) a second claim from another ChatGPT user, and that order went into greater detail, revealing how the judge is considering opposition to the order ahead of oral arguments this week, which were urgently requested by OpenAI.

The second request (PDF) to intervene came from a ChatGPT user named Aidan Hunt, who said that he uses ChatGPT “from time to time,” occasionally sending OpenAI “highly sensitive personal and commercial information in the course of using the service.” In his filing, Hunt alleged that Wang’s preservation order created a “nationwide mass surveillance program” affecting and potentially harming “all ChatGPT users,” who received no warning that their deleted and anonymous chats were suddenly being retained. He warned that the order limiting retention to just ChatGPT outputs carried the same risks as including user inputs, since outputs “inherently reveal, and often explicitly restate, the input questions or topics input.”

Hunt claimed that he only learned that ChatGPT was retaining this information — despite policies specifying they would not — by stumbling upon the news in an online forum. Feeling that his Fourth Amendment and due process rights were being infringed, Hunt sought to influence the court’s decision and proposed a motion to vacate the order that said Wang’s “order effectively requires Defendants to implement a mass surveillance program affecting all ChatGPT users.” […] OpenAI will have a chance to defend panicked users on June 26, when Wang hears oral arguments over the ChatGPT maker’s concerns about the preservation order. In his filing, Hunt explained that among his worst fears is that the order will not be blocked and that chat data will be disclosed to news plaintiffs who may be motivated to publicly disseminate the deleted chats. That could happen if news organizations find evidence of deleted chats they say are likely to contain user attempts to generate full news articles.

Wang suggested that there is no risk at this time since no chat data has yet been disclosed to the news organizations. That could mean that ChatGPT users may have better luck intervening after chat data is shared, should OpenAI’s fight to block the order this week fail. But that’s likely no comfort to users like Hunt, who worry that OpenAI merely retaining the data — even if it’s never shared with news organizations — could cause severe and irreparable harms. Some users appear to be questioning how hard OpenAI will fight. In particular, Hunt is worried that OpenAI may not prioritize defending users’ privacy if other concerns — like “financial costs of the case, desire for a quick resolution, and avoiding reputational damage” — are deemed more important, his filing said.

Source: Judge Denies Creating ‘Mass Surveillance Program’ Harming All ChatGPT Users

NB you would be pretty dense to think that anything you put into an externally hosted GPT would not be kept and used by that company for AI training and other analysis, so it’s not surprising that this data could be (and will be) requisitioned by other corporations and of course governments.

Makers of air fryers and smart speakers told to respect users’ right to privacy in UK

Makers of air fryers, smart speakers, fertility trackers and smart TVs have been told to respect people’s rights to privacy by the UK Information Commissioner’s Office (ICO).

People have reported feeling powerless to control how data is gathered, used and shared in their own homes and on their bodies.

After reports of air fryers designed to listen in to their surroundings and public concerns that digitised devices collect an excessive amount of personal information, the data protection regulator has issued its first guidance on how people’s personal information should be handled.


It is demanding that manufacturers and data handlers ensure data security, be transparent with consumers, and regularly delete the information they collect.

Stephen Almond, the executive director for regulatory risk at the ICO, said: “Smart products know a lot about us: who we live with, what music we like, what medication we are taking and much more.

“They are designed to make our lives easier, but that doesn’t mean they should be collecting an excessive amount of information … we shouldn’t have to choose between enjoying the benefits of smart products and our own privacy.

“We all rightly have a greater expectation of privacy in our own homes, so we must be able to trust smart products are respecting our privacy, using our personal information responsibly and only in ways we would expect.”

The new guidance cites a wide range of devices that are broadly known as part of the “internet of things”, which collect data that needs to be carefully handled. These include smart fertility trackers that record the dates of their users’ periods and body temperature, send it back to the manufacturer’s servers and make an inference about fertile days based on this information.

Smart speakers that listen in not only to their owner but also to other members of their family and visitors to their home should be designed so users can configure product settings to minimise the personal information they collect.
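Purely as an illustration of that design principle, and not anything taken from the ICO guidance or a real product, a “minimise by default” configuration might look like the sketch below; the setting names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical illustration of "data minimisation by default" for a smart speaker:
# optional data flows are opt-in, and anything collected carries a retention limit.

@dataclass
class PrivacySettings:
    wake_word_only: bool = True          # process audio locally until the wake word is heard
    store_recordings: bool = False       # off by default; the user must opt in
    share_household_profiles: bool = False
    retention_days: int = 7              # even opted-in data is deleted regularly

def collect(audio_clip: bytes, settings: PrivacySettings) -> Optional[bytes]:
    """Return data for upload only if the user has explicitly opted in."""
    if not settings.store_recordings:
        return None                      # default path: nothing leaves the device
    return audio_clip                    # opted-in path: still subject to retention_days
```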

[…]

Source: Makers of air fryers and smart speakers told to respect users’ right to privacy | Technology | The Guardian

Wouldn’t it be nice if they benefited from the same privacy laws as exist in the EU?

Pornhub Back Online in France After Court Ruling About Age Verification

Many porn sites, including Pornhub, YouPorn, and RedTube, went dark earlier this month in France to protest a new age verification law that would have required the websites to collect ID from users. But those sites went back online Friday after a new ruling from a French court suspended enforcement of the law until it can be determined whether it conflicts with existing European Union rules, according to France24.

Aylo, the company that owns Pornhub, has previously said that requiring age verification “creates an unacceptable security risk” and warned that setting up that kind of process makes people vulnerable to hacks and leaks of sensitive information. The French law would’ve required Aylo to verify user ages with a government-issued ID or a credit card.

[…]

Age verification laws for porn websites have been a controversial issue globally, with the U.S. seeing a dramatic uptick in states passing such laws in recent years. Nineteen states now have laws that require age verification for porn sites, meaning that anyone who wants to access Pornhub in places like Florida and Texas needs to use a VPN.

Australia recently passed a law banning social media use for anyone under the age of 16, regardless of explicit content, and it is currently working its way through the expected challenges. The law had a 12-month buffer built in to allow the country’s internet safety regulator to figure out how to implement it. Tech giants like Meta and TikTok were dealt a blow on Friday after the commission issued a report stating that age verification “can be private, robust and effective,” though trials are ongoing on how best to make the law work, according to ABC News in Australia.

Source: Pornhub Back Online in France After Court Ruling About Age Verification

Nope. Age verification is easily broken and is a huge security / privacy risk.

Nintendo will record your GameChat audio and video

Last month, ahead of the launch of the Switch 2 and its GameChat communication features, Nintendo updated its privacy policy to note that the company “may also monitor and record your video and audio interactions with other users.” Now that the Switch 2 has officially launched, we have a clearer understanding of how the console handles audio and video recorded during GameChat sessions, as well as when that footage may be sent to Nintendo or shared with partners, including law enforcement. Before using GameChat on Switch 2 for the first time, you must consent to a set of GameChat Terms displayed on the system itself. These terms warn that chat content is “recorded and stored temporarily” both on your system and the system of those you chat with. But those stored recordings are only shared with Nintendo if a user reports a violation of Nintendo’s Community Guidelines, the company writes.

That reporting feature lets a user “review a recording of the last three minutes of the latest three GameChat sessions” to highlight a particular section for review, suggesting that chat sessions are not being captured and stored in full. The terms also lay out that “these recordings are available only if the report is submitted within 24 hours,” suggesting that recordings are deleted from local storage after a full day. If a report is submitted to Nintendo, the company warns that it “may disclose certain information to third parties, such as authorities, courts, lawyers, or subcontractors reviewing the reported chats.” If you don’t consent to the potential for such recording and sharing, you’re prevented from using GameChat altogether.
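Going by the terms as described (rolling clips covering roughly the last three minutes of the latest three sessions, deleted locally after 24 hours, and uploaded only if someone files a report), the retention scheme can be sketched roughly as follows; the class and constant names here are hypothetical, not Nintendo’s actual implementation.

```python
import time
from collections import deque

CLIP_SECONDS = 3 * 60          # keep only the last three minutes of each session
MAX_SESSIONS = 3               # keep only the latest three sessions
RETENTION_SECONDS = 24 * 3600  # clips expire 24 hours after a session ends

class ChatRetention:
    def __init__(self):
        # A bounded deque drops the oldest clip automatically once a fourth session ends.
        self.sessions = deque(maxlen=MAX_SESSIONS)

    def end_session(self, frames):
        """Store only the tail of a finished session (assuming one frame per second)."""
        clip = frames[-CLIP_SECONDS:]
        self.sessions.append({"ended_at": time.time(), "clip": clip})

    def purge_expired(self):
        """Delete any clip older than the 24-hour reporting window."""
        cutoff = time.time() - RETENTION_SECONDS
        self.sessions = deque(
            (s for s in self.sessions if s["ended_at"] >= cutoff),
            maxlen=MAX_SESSIONS,
        )

    def report(self):
        """On a user report, hand over whatever unexpired clips remain for review."""
        self.purge_expired()
        return [s["clip"] for s in self.sessions]
```

Nothing is uploaded anywhere in this sketch unless report() is called, which matches the report-gated sharing the terms describe.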

Nintendo is extremely clear that the purpose of its recording and review system is “to protect GameChat users, especially minors” and “to support our ability to uphold our Community Guidelines.” This kind of human moderator review of chats is pretty common in the gaming world and can even apply to voice recordings made by various smart home assistants. […] Overall, the time-limited, local-unless-reported recordings Nintendo makes here seem like a minimal intrusion on the average GameChat user’s privacy. Still, if you’re paranoid about Nintendo potentially seeing and hearing what’s going on in your living room, it’s good to at least be aware of it.

Source: Nintendo Warns Switch 2 GameChat Users: ‘Your Chat Is Recorded’ (arstechnica.com)

The US Is Storing Migrant Children’s DNA in a Criminal Database

The United States government has collected DNA samples from upwards of 133,000 migrant children and teenagers—including at least one 4-year-old—and uploaded their genetic data into a national criminal database used by local, state, and federal law enforcement, according to documents reviewed by WIRED. The records, quietly released by the US Customs and Border Protection earlier this year, offer the most detailed look to date at the scale of CBP’s controversial DNA collection program. They reveal for the first time just how deeply the government’s biometric surveillance reaches into the lives of migrant children, some of whom may still be learning to read or tie their shoes—yet whose DNA is now stored in a system originally built for convicted sex offenders and violent criminals.

[…]

The records, which span October 2020 through the end of 2024, show that CBP swabbed the cheeks of between 829,000 and 2.8 million people, with experts estimating that the true figure, excluding duplicates, is likely well over 1.5 million. That number includes as many as 133,539 children and teenagers. These figures mark a sweeping expansion of biometric surveillance—one that explicitly targets migrant populations, including children.

[…]

Under current rules, DNA is generally collected from anyone who is also fingerprinted. According to DHS policy, 14 is the minimum age at which fingerprinting becomes routine.

[…]

“Taking DNA from a 4-year-old and adding it into CODIS flies in the face of any immigration purpose,” she says, adding, “That’s not immigration enforcement. That’s genetic surveillance.”

Multiple studies show no link between immigration and increased crime.

In 2024, Glaberson coauthored a report called “Raiding the Genome” that was the first to try to quantify DHS’s 2020 expansion of DNA collection. It found that if DHS continues to collect DNA at the rate the agency itself projects, one-third of the DNA profiles in CODIS by 2034 will have been taken by DHS, and seemingly without any real due process—the protections that are supposed to be in place before law enforcement compels a person to hand over their most sensitive information.

Source: The US Is Storing Migrant Children’s DNA in a Criminal Database | WIRED

Regeneron to Acquire all 23andMe genetic data for $256m

23andMe Holding Co. (“23andMe” or the “Company”) (OTC: MEHCQ), a leading human genetics and biotechnology company, today announced that it has entered into a definitive agreement for the sale of 23andMe to Regeneron Pharmaceuticals, Inc. (“Regeneron”) (NASDAQ: REGN), a leading U.S.-based, NASDAQ-listed biotechnology company that invents, develops and commercializes life-transforming medicines for people with serious diseases. The agreement includes Regeneron’s commitment to comply with the Company’s privacy policies and applicable law, process all customer personal data in accordance with the consents, privacy policies and statements, terms of service, and notices currently in effect and have security controls in place designed to protect such data.

[…]

Under the terms of the agreement, Regeneron will acquire substantially all of the assets of the Company, including the Personal Genome Service (PGS), Total Health and Research Services business lines, for a purchase price of $256 million. The agreement does not include the purchase of the Company’s Lemonaid Health subsidiary, which the Company plans to wind down in an orderly manner, subject to and in accordance with the agreement.

[…]


Source: Regeneron, A Leading U.S. Biotechnology Company, to Acquire

New Orleans police secretly used facial recognition on over 200 live camera feeds

New Orleans’ police force secretly used constant facial recognition to seek out suspects for two years. An investigation by The Washington Post discovered that the city’s police department was using facial recognition technology on a privately owned camera network to continually look for suspects. This appears to violate a city ordinance passed in 2022 that required the NOLA police to use facial recognition only to search for specific suspects of violent crimes and to then provide details about the scans’ use to the city council. However, WaPo found that officers did not reveal their reliance on the technology in the paperwork for several arrests where facial recognition was used, and none of those cases were included in mandatory city council reports.

“This is the facial recognition technology nightmare scenario that we have been worried about,” said Nathan Freed Wessler, an ACLU deputy director. “This is the government giving itself the power to track anyone — for that matter, everyone — as we go about our lives walking around in public.” Wessler added that this is the first known case in a major US city where police used AI-powered automated facial recognition to identify people in live camera feeds for the purpose of making immediate arrests.

Police use and misuse of surveillance technology has been thoroughly documented over the years. Although several US cities and states have placed restrictions on how law enforcement can use facial recognition, those limits won’t do anything to protect privacy if they’re routinely ignored by officers.

Read the full story on the New Orleans PD’s surveillance program at The Washington Post.

Source: New Orleans police secretly used facial recognition on over 200 live camera feeds

FBI Director Kash Patel Abruptly Closes Internal Watchdog Office Overseeing Surveillance Compliance

If there’s one thing the Federal Bureau of Investigation does well, it’s mass surveillance. Several years ago, then attorney general William Barr established an internal office to curb the FBI’s abuse of one controversial surveillance law. But recently, the FBI’s long-time hater (and, ironically, current director) Kash Patel shut down the watchdog group with no explanation.

On Tuesday, the New York Times reported that Patel suddenly closed the Office of Internal Auditing that Barr created in 2020. The office’s leader, Cindy Hall, abruptly retired. People familiar with the matter told the outlet that the closure of the watchdog office, along with that of the Office of Integrity and Compliance, is part of an internal reorganization. Sources also reportedly said that Hall had been trying to expand the office’s work, but her attempts to onboard new employees were stopped by the Trump administration’s hiring freezes.

The Office of Internal Auditing was a response to controversy surrounding the FBI’s use of Section 702 of the Foreign Intelligence Surveillance Act. The 2008 law primarily addresses surveillance of non-Americans abroad. However, Jeramie Scott, senior counselor at the Electronic Privacy Information Center, told Gizmodo via email that the FBI “has repeatedly abused its ability to search Americans’ communications ‘incidentally’ collected under Section 702” to conduct warrantless spying.

Patel has not released any official comment regarding his decision to close the office. But Elizabeth Goitein, senior director at the Brennan Center for Justice, told Gizmodo via email, “It is hard to square this move with Mr. Patel’s own stated concerns about the FBI’s use of Section 702.”

Last year, Congress reauthorized Section 702 despite mounting concerns over its misuses. Although Congress introduced some reforms, the updated legislation actually expanded the government’s surveillance capabilities. At the time, Patel slammed the law’s passage, stating that former FBI director Christopher Wray, who Patel once tried to sue, “was caught last year illegally using 702 collection methods against Americans 274,000 times.” (Per the New York Times, Patel is likely referencing a declassified 2023 opinion by the FISA court that used the Office of Internal Auditing’s findings to determine the FBI made 278,000 bad queries over several years.)

According to Goitein, the office has “played a key role in exposing FBI abuses of Section 702, including warrantless searches for the communication of members of Congress, judges, and protesters.” And ironically, Patel inadvertently drove its creation after attacking the FBI’s FISA applications to wiretap a former Trump campaign advisor in 2018 while investigating potential Russian election interference. Trump and his supporters used Patel’s attacks to push their own narrative dismissing any concerns. Last year, former representative Devin Nunes, who is now CEO of Truth Social, said Patel was “instrumental” to uncovering the “hoax and finding evidence of government malfeasance.”

Although Patel mostly peddled conspiracies, the Justice Department conducted a probe into the FBI’s investigation that raised concerns over “basic and fundamental errors” it committed. In response, Barr created the Office of Internal Auditing, stating, “What happened to the Trump presidential campaign and his subsequent Administration after the President was duly elected by the American people must never happen again.”

But since taking office, Patel has changed his tune about FISA. During his confirmation hearing, Patel referred to Section 702 as a “critical tool” and said, “I’m proud of the reforms that have been implemented and I’m proud to work with Congress moving forward to implement more.” However, reforms don’t mean much by themselves. As Goitein noted, “Without a separate office dedicated to surveillance compliance, [the FBI’s] abuses could go unreported and unchecked.”

[…]

Source: FBI Director Kash Patel Abruptly Closes Internal Watchdog Office Overseeing Surveillance Compliance