The Linkielist

Linking ideas with the world

Europol wants to keep all data forever for law enforcement, says unnamed(!) official. The E.U. Court of Human Rights backed encryption as basic to privacy rights in 2024, and now Big Brother Chat Control is on the agenda again (EU consultation feedback link at the end)

While some American officials continue to attack strong encryption as an enabler of child abuse and other crimes, a key European court has upheld it as fundamental to the basic right to privacy.

[…]


In the Russian case, the users relied on Telegram’s optional “secret chat” functions, which are also end-to-end encrypted. Telegram had refused to break into chats of a handful of users, telling a Moscow court that it would have to install a back door that would work against everyone. It lost in Russian courts but did not comply, leaving it subject to a ban that has yet to be enforced.
The European court backed the Russian users, finding that law enforcement having such blanket access “impairs the very essence of the right to respect for private life” and therefore would violate Article 8 of the European Convention, which enshrines the right to privacy except when it conflicts with laws established “in the interests of national security, public safety or the economic well-being of the country.”
The court praised end-to-end encryption generally, noting that it “appears to help citizens and businesses to defend themselves against abuses of information technologies, such as hacking, identity and personal data theft, fraud and the improper disclosure of confidential information.”
In addition to prior cases, the judges cited work by the U.N. human rights commissioner, who came out strongly against encryption bans in 2022, saying that “the impact of most encryption restrictions on the right to privacy and associated rights are disproportionate, often affecting not only the targeted individuals but the general population.”
High Commissioner Volker Türk said he welcomed the ruling, which he promoted during a recent visit to tech companies in Silicon Valley. Türk told The Washington Post that “encryption is a key enabler of privacy and security online and is essential for safeguarding rights, including the rights to freedom of opinion and expression, freedom of association and peaceful assembly, security, health and nondiscrimination.”
[…]
Even as the fight over encryption continues in Europe, police officials there have talked about overriding end-to-end encryption to collect evidence of crimes other than child sexual abuse — or any crime at all, according to an investigative report by the Balkan Investigative Reporting Network, a consortium of journalists in Southern and Eastern Europe.
“All data is useful and should be passed on to law enforcement, there should be no filtering … because even an innocent image might contain information that could at some point be useful to law enforcement,” an unnamed Europol police official said in 2022 meeting minutes released under a freedom of information request by the consortium.

Source: E.U. Court of Human Rights backs encryption as basic to privacy rights – The Washington Post

An ‘unnamed’ Europol police official is peak irony in this context.

Remember to leave your feedback where you can, in this case: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/14680-Impact-assessment-on-retention-of-data-by-service-providers-for-criminal-proceedings-/public-consultation_en

The EU wants to know what you think about it keeping all your data for *cough* crime stuff.

The EU wants to save all your data, or as much as possible for as long as possible. As an insult to the victims of crime, they say they want to do this to fight crime. How do you feel about the EU being turned into a surveillance society? Make your voice heard via the link below.

Source: Data retention by service providers for criminal proceedings – impact assessment

Croatians suddenly realise that EU CSAM rules include hidden pervasive chat control surveillance, turning the EU into Big Brother – disapprove massively.

“The Prime Minister of the Republic of Croatia, Andrej Plenković, at yesterday’s press conference accused the opposition of supporting the proposal for a regulation of the European Parliament and the Council establishing rules to prevent and combat the sexual abuse of children, COM(2022) 209, unpopularly referred to as ‘chat control’ because, if adopted in its original form, it would allow criminal prosecution bodies to lawfully monitor the private communications of all citizens.

[…]

On June 17, MEP Bosanac, along with colleagues from the SDP, HDZ and the vast majority of other MEPs, supported the proposal for amendments to the 2011 Directive on combating the sexual abuse and sexual exploitation of children and child pornography. Although both legislative documents were adopted within the same package of EU strategies for a more effective fight against child abuse and have similar names, the two documents are fundamentally different: one is a regulation, the other a directive; they have different rapporteurs and entered the procedure two years apart.”

‘We’ve already spoken about it’

“The basic difference, however, is that the proposal to amend the Directive contains no mention of ‘chat control’, i.e. the mass surveillance of citizens. MEP Bosanac and his colleagues from the We Can! party strongly oppose the proposed regulation, which endorses monitoring the content of all citizens’ private conversations and has yet to be voted on in the European Parliament. Such a proposal directly violates Article 7 of the Charter of Fundamental Rights of the European Union, as confirmed by the Court of Justice of the European Union in its ‘Schrems I’ ruling (paragraph 94), a position also confirmed by the Legal Service of the Council of the EU.

In the previous European Parliament, the Greens resisted mass surveillance, focusing instead on monitoring suspicious users: the security services must first identify suspicious users and then monitor them, not the other way around. People who abuse the internet to commit criminal acts must be identified and isolated by the numerous services whose job that is, through focused surveillance of individuals rather than mass surveillance.

We all have the right to privacy, because privacy must remain a secure space for our human identity. Finally, MEP Bosanac calls on Prime Minister Plenković to oppose this harmful proposal in the European Council and protect Croatian citizens’ right to privacy,” Gordan Bosanac’s office said in a statement.

Source: Bosnian accuses Plenkovic of lying: ‘I urge him to counter that proposal’

Parliamentary questions are being asked as well

A review conducted under the Danish Presidency examining the proposal for a regulation on combatting online child sexual abuse material – dubbed the ‘Chat Control’ or CSAM regulation – has raised new, grave concerns about respect for fundamental rights in the EU.

As it stands, the proposal envisages mass scanning of private communications, including encrypted conversations, raising serious issues of compliance with Article 7 of the Charter of Fundamental Rights by threatening to undermine the data security of citizens, businesses and institutions. A mandatory weakening of end-to-end encryption would create security gaps open to exploitation by cybercriminals, rival states and terrorist organisations, and would also harm the competitiveness of our digital economy.

At the same time, the proposed technical approach is based on automated content analysis tools which produce high rates of false positives, creating the risk that innocent users could be wrongly incriminated, while the effectiveness of this approach in protecting children has not been proven. Parliament and the Council have repeatedly rejected mass surveillance.

  1. Considering the mandatory scanning of all private communications, is the proposed regulation compatible with Article 7 of the Charter of Fundamental Rights?

  2. How will it ensure that child protection is achieved through targeted measures that are proven to be effective, without violating the fundamental rights of all citizens?

  3. How does it intend to prevent the negative impact on cybersecurity and economic competitiveness caused by weakening encryption?

Source: Proposed Chat Control law presents new blow for privacy

The Threat Of Extreme Statutory Damages For Copyright Almost Certainly Made Anthropic Settle With Authors: not the use of books for training, but the fact that the idiots used pirated books for training

In what may be the least surprising news in the world of copyright and the internet, Anthropic just agreed to settle the copyright lawsuit that everyone’s been watching, but not for the reasons most people think. This isn’t about AI training being found to infringe copyright—in fact, Anthropic won on that issue. Instead, it’s about how copyright’s broken statutory damages system can turn a narrow legal loss into a company-ending threat, forcing settlements even when the core dispute goes your way.

Anthropic had done something remarkably stupid beyond just training: they downloaded unauthorized copies of works and stored them in an internal “pirate library” for future reference. Judge Alsup was crystal clear that while the training itself was fair use, building and maintaining this library of unauthorized copies was straightforward infringement. This wasn’t some edge case—it was basic copyright violation that Anthropic should have known better than to engage in.

And while there were some defenses to this, it would likely be tough to succeed at trial with the position Judge Alsup had put them in.

The question then was what the damages would be. Because of copyright’s absolutely ridiculous statutory damages (up to $150k per work if the infringement was found to be “willful”), which need not bear any relationship to the actual damages, Anthropic could have been on the hook for trillions of dollars in damages just in this one case. That’s not something any company is going to roll the dice on, and I’m sure that the conversation was more or less: if you win and we get hit with statutory damages, the company will shut down and you will get nothing. Instead, let’s come to some sort of deal and get the lawyers (and the named author plaintiffs) paid.

While the amount of the settlement hasn’t been revealed yet, the amount authors get paid is going to come out eventually, and… I guarantee that it will not be much.

[…]

Instead what will happen—what always happens with these collective licensing deals—is that a few of the bigger names will get wealthy, but mainly the middleman will get wealthy. These kinds of schemes only tend to enrich the middlemen (often leading to corruption).

So this result is hardly surprising. Anthropic had to settle rather than face shutting down. But my guess is that authors are going to be incredibly disappointed by how much they end up getting from the settlement. Judge Alsup still has to approve the settlement, and some people may protest it, but it would be a much bigger surprise if he somehow rejects it.

Source: The Threat Of Extreme Statutory Damages For Copyright Almost Certainly Made Anthropic Settle With Authors | Techdirt

Developer Unlocks Suddenly Paywalled Echelon Exercise Bikes but Thinks the DMCA Says He Can’t Legally Release His Software

An app developer has jailbroken Echelon exercise bikes to restore functionality that the company put behind a paywall last month, but copyright law prevents him from legally releasing it.

Last month, Peloton competitor Echelon pushed a firmware update to its exercise equipment that forces its machines to connect to the company’s servers in order to work properly. Echelon was popular in part because it was possible to connect its bikes, treadmills, and rowing machines to free or cheap third-party apps and collect data like pedaling power, distance traveled, and other basics one might want from a piece of exercise equipment. With the new firmware update, the machines only work with constant internet access, and getting anything beyond extremely basic functionality requires an Echelon subscription, which can cost hundreds of dollars a year.

[…]

App engineer Ricky Witherspoon, who makes an app called SyncSpin that used to work with Echelon bikes, told 404 Media that he successfully restored offline functionality to Echelon equipment and won the Fulu Foundation bounty. But he and the foundation said that he cannot open source or release it, because doing so would run afoul of Section 1201 of the Digital Millennium Copyright Act, the wide-ranging copyright law that in part governs reverse engineering. There are various exemptions to Section 1201, but most of them allow jailbreaks like the one Witherspoon developed to be used only for personal use.

“It’s like picking a lock, and it’s a lock that I own in my own house. I bought this bike, it was unlocked when I bought it, why can’t I distribute this to people who don’t have the technical expertise I do?” Witherspoon told 404 Media. “It would be one thing if they sold the bike with this limitation up front, but that’s not the case. They reached into my house and forced this update on me without users knowing. It’s just really unfortunate.”

[…]

“A lot of people chose Echelon’s ecosystem because they didn’t want to be locked into using Echelon’s app. There was this third-party ecosystem. That was their draw to the bike in the first place,” O’Reilly said. “But now, if the manufacturer can come in and push a firmware update that requires you to pay for subscription features that you used to have on a device you bought in the first place, well, you don’t really own it.”

“I think this is part of the broader trend of enshittification, right?,” O’Reilly added. “Consumers are feeling this across the board, whether it’s devices we bought or apps we use—it’s clear that what we thought we were getting is not continuing to be provided to us.”

Witherspoon says that, basically, Echelon added an authentication layer to its products, where the piece of exercise equipment checks to make sure that it is online and connected to Echelon’s servers before it begins to send information from the equipment to an app over Bluetooth. “There’s this precondition where the bike offers an authentication challenge before it will stream those values. It is like a true digital lock,” he said. “Once you give the bike the key, it works like it used to. I had to insert this [authentication layer] into the code of my app, and now it works.”
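
Based on that description, the gate sounds like a classic challenge-response scheme. Echelon's actual handshake has not been published, so the sketch below is purely illustrative: the shared key, the function names, and the use of HMAC-SHA256 are all assumptions, not Echelon's real protocol.

```python
# Hypothetical challenge-response gate of the kind Witherspoon describes.
# The key, the HMAC construction, and the message flow are assumptions;
# Echelon's real handshake is not public.
import hashlib
import hmac
import os

SHARED_KEY = b"example-key-baked-into-official-app"  # assumed secret

def bike_issue_challenge() -> bytes:
    """Bike sends a random nonce before it will stream any metrics."""
    return os.urandom(16)

def app_respond(challenge: bytes) -> bytes:
    """App proves it holds the key by MACing the nonce."""
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def bike_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = bike_issue_challenge()
if bike_verify(nonce, app_respond(nonce)):
    print("unlocked: bike streams power/cadence over Bluetooth as before")
```

Once an app can answer the challenge, the bike behaves exactly as it did before the update, which is presumably what Witherspoon means by giving the bike back "the key."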

[…]

Witherspoon has now essentially restored the functionality his own bike used to have; he said he bought it in the first place because of its ability to work offline and to connect to third-party apps. But others will only be able to do the same if they design similar software, or if they never update the bike’s firmware. Witherspoon said that he made the old version of his SyncSpin app free and has plastered it with a warning urging people not to open the official Echelon app, because it will update the firmware on their equipment and break functionality. Roberto Viola, the developer of a popular third-party exercise app called QZ, wrote extensively about how Echelon has broken his popular app: “Without warning, Echelon pushed a firmware update. It didn’t just upgrade features—it locked down the entire device. From now on, bikes, treadmills, and rowers must connect to Echelon’s servers just to boot,” he wrote. “No internet? No workout. Even basic offline usage is impossible.”

[…]

Witherspoon told me that he is willing to talk to other developers about how he did this, but that he is not willing to release the jailbreak on his own: “I don’t feel like going down a legal rabbit hole, so for now it’s just about spreading awareness that this is possible, and that there’s another example of egregious behavior from a company like this […] if one day releasing this was made legal, I would absolutely open source this. I can legally talk about how I did this to a certain degree, and if someone else wants to do this, they can open source it if they want to.”

Source: Developer Unlocks Newly Enshittified Echelon Exercise Bikes But Can’t Legally Release His Software

I do not think that this is the way the DMCA works, but if it is, it needs some serious revision.

Google wants to verify all developers’ identities, including those not on the Play Store, in massive data grab

  • Google will soon verify the identities of developers who distribute Android apps outside the Play Store.
  • Developers must submit their information to a new Android Developer Console, increasing their accountability for their apps.
  • Rolling out in phases from September 2026, these new verification requirements are aimed at protecting users from malware by making it harder for malicious developers to remain anonymous.


Most Android users acquire apps from the Google Play Store, but a small number of users download apps from outside of it, a process known as sideloading. There are some nifty tools that aren’t available on the Play Store because their developers don’t want to deal with Google’s approval or verification requirements. This is understandable for hobbyist developers who simply want to share something cool or useful without the burden of shedding their anonymity or committing to user support.

[…]

Today, Google announced it is introducing a new “developer verification requirement” for all apps installed on Android devices, regardless of source. The company wants to verify the identity of all developers who distribute apps on Android, even if those apps aren’t on the Play Store. According to Google, this adds a “crucial layer of accountability to the ecosystem” and is designed to “protect users from malware and financial fraud.” Only “certified” Android devices — meaning those that ship with the Play Store, Play Services, and other Google Mobile Services (GMS) apps — will block the installation of apps from unverified developers.

Google says it will only verify the identity of developers, not check the contents of their apps or their origin. However, it’s worth noting that Google Play Protect, the malware scanning service integrated into the Play Store, already scans all installed apps regardless of where they came from. Thus, the new requirement doesn’t prevent malicious apps from reaching users, but it does make it harder for their developers to remain anonymous. Google likens this new requirement to ID checks at the airport, which verify the identity of travelers but not whether they’re carrying anything dangerous.

[…]

Source: Google wants to make sideloading Android apps safer by verifying developers’ identities – Android Authority

So the new requirement doesn’t make things any safer, but it does give Google a whole load of new personal data for no reason other than that it wants it. I guess it’s increasingly time to de-Google.

4chan will refuse to pay daily UK fines, its lawyer tells BBC

A lawyer representing the online message board 4chan says it won’t pay a proposed fine by the UK’s media regulator as it enforces the Online Safety Act.

According to Preston Byrne, managing partner of law firm Byrne & Storm, Ofcom has provisionally decided to impose a £20,000 fine “with daily penalties thereafter” for as long as the site fails to comply with its request.

“Ofcom’s notices create no legal obligations in the United States,” he told the BBC, adding he believed the regulator’s investigation was part of an “illegal campaign of harassment” against US tech firms.

Ofcom has declined to comment while its investigation continues.

“4chan has broken no laws in the United States – my client will not pay any penalty,” Mr Byrne said.

[…]

In a statement posted on X, law firms Byrne & Storm and Coleman Law said 4chan was a company incorporated in the US, and therefore protected from the UK law.

“American businesses do not surrender their First Amendment rights because a foreign bureaucrat sends them an email,” they wrote.

“Under settled principles of US law, American courts will not enforce foreign penal fines or censorship codes.

“If necessary, we will seek appropriate relief in US federal court to confirm these principles.”

[…]

Ofcom has previously said the Online Safety Act only requires services to take action to protect users based in the UK.

[…]

If 4chan does successfully fight the fine in the US courts, Ofcom may have other options.

“Enforcing against an offshore provider is tricky,” Emma Drake, a partner specialising in online safety and privacy at law firm Bird & Bird, told the BBC.

“Ofcom can instead ask a court to order other services to disrupt a provider’s UK business, such as requiring a service’s removal from search results or blocking of UK payments.

“If Ofcom doesn’t think this will be enough to prevent significant harm, it can even ask that ISPs be ordered to block UK access.”

Source: 4chan will refuse to pay daily UK fines, its lawyer tells BBC

Welcome to the world of censorship.

Uni of Melbourne used Wi-Fi location data to ID protestors

Australia’s University of Melbourne last year used Wi-Fi location data to identify student protestors.

The University used Wi-Fi to identify students who participated in a July 2024 sit-in protest. As described in a report [PDF] into the matter by the state of Victoria’s Office of the Information Commissioner, the University directed protestors to leave the building they occupied and warned that those who remained could be suspended, disciplined, or reported to police.

The report says 22 chose to remain, and that the University used CCTV and Wi-Fi location data to identify them.

The Information Commissioner found that the use of CCTV to identify protestors did not breach privacy, but that the use of Wi-Fi location data did, because the University’s policies lacked detail.

“Given that individuals would not have been aware of why their Wi-Fi location data was collected and how it may be used, they could not exercise an informed choice as to whether to use the Wi-Fi network during the sit-in, and be aware of the possible consequences for doing so,” the report found.

As the investigation into use of location data unfolded, the University changed its policies regarding use of location data. The Office of the Information Commissioner therefore decided not to issue a formal compliance notice, and will monitor the University to ensure it complies with its undertakings.

Source: Australian uni used Wi-Fi location data to ID protestors • The Register

Privacy‑Preserving Age Verification Falls Apart On Contact With Reality

[…] Identity‑proofing creates a privacy bottleneck. Somewhere, an identity provider must verify you. Even if it later mints an unlinkable token, that provider is the weak link—and in regulated systems it will not be allowed to “just delete” your information. As Bellovin puts it:

Regulation implies the ability for governments to audit the regulated entities’ behavior. That in turn implies that logs must be kept. It is likely that such logs would include user names, addresses, ages, and forms of credentials presented.

Then there’s the issue of fraud and duplication of credentials. Accepting multiple credential types increases coverage and increases abuse; people can and do hold multiple valid IDs:

The fact that multiple forms of ID are acceptable… exacerbates the fraud issue… This makes it impossible to prevent a single person from obtaining multiple primary credentials, including ones for use by underage individuals.

Cost and access will absolutely chill speech. Identity providers are expensive. If users pay, you’ve built a wealth test for lawful speech. If sites pay, the costs roll downhill (fees, ads, data‑for‑access) and coverage narrows to the cheapest providers who may also be more susceptible to breaches:

Operating an IDP is likely to be expensive… If web sites shoulder the cost, they will have to recover it from their users. That would imply higher access charges, more ads (with their own privacy challenges), or both.

Sharing credentials drives mission creep, which creates its own dangers. If a token proves only “over 18,” people will share it (parents to kids, friends to friends). To deter that, providers tie tokens to identities/devices or bundle more attributes—making them more linkable and more revocable:

If the only use of the primary credential is obtaining age-verifying subcredentials, this isn’t much of a deterrent—many people simply won’t care… That, however, creates pressure for mission creep…, including opening bank accounts, employment verification, and vaccination certificates; however, this is also a major point of social control, since it is possible to revoke a primary credential and with it all derived subcredentials.

The end result, then, is that you’re not just attacking privacy again; you’re creating a tool for authoritarian pressure:

Those who are disfavored by authoritarian governments may lose access not just to pornography, but to social media and all of these other services.

He also grounds it in lived reality, with a case study that shows who gets locked out first:

Consider a hypothetical person “Chris”, a non-driving senior citizen living with an adult child in a rural area of the U.S… Apart from the expense— quite possibly non-trivial for a poor family—Chris must persuade their child to then drive them 80 kilometers or more to a motor vehicles office…

There is also the social aspect. Imagine the embarrassment to all of an older parent having to explain to their child that they wish to view pornography.

None of this is an attack on the math. It’s a reminder that deployment reality ruins the cryptographic ideal. There’s more in the paper, but you get the idea.

[…]

Source: Privacy‑Preserving Age Verification Falls Apart On Contact With Reality | Techdirt

Proton releases Lumo GPT 1.1: faster, more advanced, European and actually private

Today we’re releasing a powerful update to Lumo that gives you a more capable privacy-first AI assistant offering faster, more thorough answers with improved awareness of recent events.

Guided by feedback from our community, we’ve been busy upgrading our models and adding GPUs, which we’ll continue to do thanks to the support of our Lumo Plus subscribers. Lumo 1.1 performs significantly better across the board than the first version of Lumo, so you can now use it more effectively for a variety of use cases:

  • Get help planning projects that require multiple steps — it will break down larger goals into smaller tasks
  • Ask complex questions and get more nuanced answers
  • Generate better code — Lumo is better at understanding your requests
  • Research current events or niche topics with better accuracy and fewer hallucinations thanks to improved web search

New cat, new tricks, same privacy

The latest upgrade brings more accurate responses with significantly less need for corrections or follow-up questions. Lumo now handles complex requests much more reliably and delivers the precise results you’re looking for.

In testing, Lumo’s performance has increased across several metrics:

  • Context: 170% improvement in context understanding so it can accurately answer questions based on your documents and data
  • Coding: 40% better ability to understand requests and generate correct code
  • Reasoning: Over 200% improvement in planning tasks, choosing the right tools such as web search, and working through complex multi-step problems

Most importantly, Lumo does all of this while respecting the confidentiality of your chats. Unlike every major AI platform, Lumo is open source and built to be private by design. It doesn’t keep any record of your chats, and your conversation history is secured with zero-access encryption so nobody else can see it and your data is never used to train the models. Lumo is the only AI where your conversations are actually private.

Learn about Lumo privacy

Lumo mobile apps are now open source

Unlike Big Tech AIs that spy on you, Lumo is an open source application that exclusively runs open source models. Open source is especially important in AI because it confirms that the applications and models are not being used nefariously to manipulate responses to fit a political narrative or secretly leak data. While the Lumo web client is already open source, today we are also releasing the code for the mobile apps. In line with Lumo being the most transparent and private AI, we have also published the Lumo security model so you can see how Lumo’s zero-access encryption works and why nobody, not even Proton, can access your conversation history.

Source: Introducing Lumo 1.1 for faster, advanced reasoning | Proton

The EU could be scanning your chats by October 2025 with Chat Control

Denmark kicked off its EU Presidency on July 1, 2025, and, among its first actions, lawmakers swiftly reintroduced the controversial child sexual abuse material (CSAM) scanning bill to the top of the agenda.

Dubbed Chat Control by its critics, the bill aims to introduce new obligations for all messaging services operating in Europe to scan users’ chats, even if they’re encrypted.

The proposal, however, has been failing to attract the needed majority since May 2022, with Poland’s Presidency being the last to give up on such a plan.

Denmark is a strong supporter of Chat Control. Now, the new rules could be adopted as early as October 14, 2025, if the Danish Presidency manages to find a middle ground among member states.

Crucially, according to the latest data leaked by Patrick Breyer, a former MEP for the German Pirate Party, many countries that said no to Chat Control in 2024 are now undecided, “even though the 2025 plan is even more extreme,” he added.

[…]

As per its first version, all messaging software providers would be required to perform indiscriminate scanning of private messages to look for CSAM – so-called ‘client-side scanning‘. The proposal was met with a strong backlash, and the European Court of Human Rights ended up banning all legal efforts to weaken encryption of secure communications in Europe.

In June 2024, Belgium then proposed a new text targeting only shared photos, videos, and URLs, with users’ permission. This version satisfied neither the industry nor voting EU members, due to its coercive nature: under the Belgian text, users would have to consent to shared material being scanned before encryption in order to keep using that functionality.

Source: The EU could be scanning your chats by October 2025 – here’s everything we know | TechRadar

German court revives case that could threaten ad blockers

A recent ruling by the German Federal Court of Justice (BGH) has reopened the possibility that using ad blocking software could violate copyright law in Germany.

In a decision last month, the BGH – the final court of appeals on civil and criminal matters – partially overturned an appeals court decision in an 11-year copyright dispute brought by publisher Axel Springer against Adblock Plus maker Eyeo GmbH.

The ruling says that the appeals court erred when it determined that the use of ad blocking software does not infringe on a copyright holder’s exclusive right to modify a computer program.

Springer has argued – unsuccessfully so far – that its website code is protected under the German Copyright Act, so modifying the web page’s Document Object Model (DOM) or Cascading Style Sheets – a common way to alter or remove web page elements – represents copyright infringement under the company’s interpretation of the law.
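
To make concrete what is being litigated: ad blockers typically apply "cosmetic filters" that hide page elements matching a CSS-style selector; nothing in the site's stored code is altered, only the page as rendered in the user's browser. Here is a minimal stdlib Python sketch of the matching idea. Real blockers do this inside the browser via injected CSS/JS, and the "ad" class name below is made up for illustration:

```python
# Minimal sketch of a cosmetic filter: find elements that an "##.ad"-style
# rule would hide. Real ad blockers run in the browser and hide nodes in
# the live DOM; this only illustrates the selector-matching step.
from html.parser import HTMLParser

FILTER_CLASS = "ad"  # hypothetical filter target

class CosmeticFilter(HTMLParser):
    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if FILTER_CLASS in classes:
            print(f"would hide: <{tag} class={' '.join(classes)!r}>")

page = '<body><div class="ad banner">BUY NOW</div><p>article text</p></body>'
CosmeticFilter().feed(page)
# -> would hide: <div class='ad banner'>
```

Springer's theory is that even this kind of client-side hiding of elements counts as an unlawful modification of its "program."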

The appellate court that initially heard and rejected that argument will now have to revisit the matter, a process likely to add several years to a case that Eyeo believed was settled seven years ago.

Eyeo did not immediately respond to a request for comment. While it offers ad blocking software, the company generates revenue from ads through its Acceptable Ads program – advertisers pay to have ads that are “respectful, nonintrusive and relevant” exempted from filtering. Non-commercial open source projects like uBlock Origin rely on community support.

Philipp-Christian Thomale, senior legal counsel for Axel Springer, celebrated the ruling in a post to LinkedIn, calling it “a true milestone in the copyright protection of software – especially with regard to cloud-based applications (SaaS).”

Among the implications, he argues, is that “software providers will be better equipped to defend against manipulation by third-party software.”

While the outcome remains undecided, Mozilla senior IP & product counsel Daniel Nazer worries that if the German courts ultimately uphold the copyright claim, that will hinder user choice on the internet.

“We sincerely hope that Germany does not become the second jurisdiction (after China) to ban ad blockers,” he wrote in a blog post on Thursday.

“This will significantly limit users’ ability to control their online environment and potentially open the door to similar restrictions elsewhere. Such a precedent could embolden legal challenges against other extensions that protect privacy, enhance accessibility, or improve security.”

Ad blocking, or more broadly content blocking, can save battery life on mobile devices, improve page load times, reduce bandwidth consumption, and protect against malicious ads and nation-states that use ads for offensive cyber operations. The US Federal Bureau of Investigation in 2022 advised, “Use an ad blocking extension when performing internet searches,” as a defense against malicious search ads.

And as Nazer observes, there are many reasons other than ad blocking that one might wish to alter a webpage, such as improving accessibility, evaluating accessibility, or protecting privacy.

[…]

“If the German Supreme Court rules that this is a copyright violation then they would be in direct breach of TFEU [Treaty on the Functioning of the European Union] as such a judgment would not comply with EU law,” he told The Register in an email, pointing to Recital 66 of 2009/136/EC.

Hanff said he was told in writing around 2016 by the EU Commission’s Legal Services that “ad blockers and other such tools absolutely fall into the category of ‘appropriate settings of a browser or other application’ as a means of providing or refusing consent for such technologies (adtech).”

[…]

Source: German court revives case that could threaten ad blockers • The Register

Philipp-Christian Thomale, you are an evil man. The internet without an ad blocker is a horrible, horrible thing that you should not force on anyone.

Pluralistic: “Privacy preserving age verification” is bullshit

[…]

when politicians are demanding that technologists NERD HARDER! to realize their cherished impossibilities.

That’s just happened, and in relation to one of the scariest, most destructive NERD HARDER! tech policies ever to be assayed (a stiff competition). I’m talking about the UK Online Safety Act, which imposes a duty on websites to verify the age of people they communicate with before serving them anything that could be construed as child-inappropriate (a category that includes, e.g., much of Wikipedia):

https://wikimediafoundation.org/news/2025/08/11/wikimedia-foundation-challenges-uk-online-safety-act-regulations/

The Starmer government has, incredibly, developed a passion for internet regulations that are even stupider than Tony Blair’s and David Cameron’s. Requiring people to identify themselves (generally, via their credit cards) in order to look at porn will create a giant database of every kink and fetish of every person in the UK, which will inevitably leak and provide criminals and foreign spies with a kompromat system they can sort by net worth of the people contained within.

This hasn’t deterred Starmer, who insists that if we just NERD HARDER!, we can use things like “zero-knowledge proofs” to create a “privacy-preserving” age verification system, whereby a service can assure itself that it is communicating with an adult without ever being able to determine who it is communicating with.

In support of this idea, Starmer and co like to cite some genuinely exciting and cool cryptographic work on privacy-preserving credential schemes. Now, one of the principal authors of the key papers on these credential schemes, Steve Bellovin, has published a paper that is pithily summed up via its title, “Privacy-Preserving Age Verification—and Its Limitations”:

https://www.cs.columbia.edu/~smb/papers/age-verify.pdf

The tldr of this paper is that Starmer’s idea will not work and cannot work. The research he relies on to defend the technological feasibility of his cherished plan does not support his conclusion.

Bellovin starts off by looking at the different approaches various players have mooted for verifying their users’ age. For example, Google says it can deploy a “behavioral” system that relies on Google surveillance dossiers to make guesses about your age. Google refuses to explain how this would work, but Bellovin sums up several of the well-understood behavioral age estimation techniques and explains why they won’t work. It’s one thing to screw up age estimation when deciding which ad to show you; it’s another thing altogether to do this when deciding whether you can access the internet.

Others say they can estimate your age by using AI to analyze a picture of your face. This is a stupid idea for many reasons, not least of which is that biometric age estimation is notoriously unreliable when it comes to distinguishing, say, 16- or 17-year-olds from 18-year-olds. Nevertheless, there are sitting US Congressmen who not only think this would work – they labor under the misapprehension that this is already going on:

https://pluralistic.net/2023/04/09/how-to-make-a-child-safe-tiktok/

So that just leaves the privacy-preserving credential schemes, especially the Camenisch-Lysyanskaya protocol. This involves an Identity Provider (IDP) that establishes a user’s identity and characteristics using careful document checks and other procedures. The IDP then hands the user a “primary credential” that can attest to everything the IDP knows about the user, and any number of “subcredentials” that only attest to specific facts about that user (such as their age).

These are used in zero-knowledge proofs (ZKP) – a way for two parties to validate that one of them asserts a fact without learning what that fact is in the process (this is super cool stuff). Users can send their subcredentials to a third party, who can use a ZKP to validate them without learning anything else about the user – so you could prove your age (or even just prove that you are over 18 without disclosing your age at all) without disclosing your identity.
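
To see why this is "super cool," here is the shape of a zero-knowledge proof in miniature. This is a toy Schnorr-style proof of knowledge with insecurely small numbers, not the Camenisch-Lysyanskaya protocol itself, which builds far more machinery on the same foundation:

```python
# Toy Schnorr-style zero-knowledge proof (parameters far too small to be
# secure). The prover convinces the verifier it knows x with y = g^x mod p
# without ever revealing x -- the same principle CL credentials build on.
import secrets

p, q, g = 23, 11, 4      # toy group: g generates an order-q subgroup of Z_p*
x = 7                    # prover's secret (think: credential key)
y = pow(g, x, p)         # public value the verifier already trusts

r = secrets.randbelow(q) # 1. prover commits: t = g^r
t = pow(g, r, p)
c = secrets.randbelow(q) # 2. verifier sends a random challenge
s = (r + c * x) % q      # 3. prover responds; s alone leaks nothing about x

# 4. check: g^s == t * y^c (mod p) holds exactly when the prover knows x
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted; the verifier never saw x")
```

The check works because g^s = g^(r + cx) = g^r · (g^x)^c = t · y^c, so a valid response is only possible with knowledge of x, yet x never crosses the wire.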

There’s some good news for implementing CL on the web: rather than developing a transcendentally expensive and complex new system for these credential exchanges and checks, CL can piggyback on the existing Public Key Infrastructure (PKI) that powers your browser’s ability to have secure sessions when you visit a website with https:// in front of the address (instead of just http://).

However, doing so poses several difficulties, which Bellovin enumerates under a usefully frank section header: “INSURMOUNTABLE OBSTACLES.”

The most insurmountable of these obstacles is getting set up with an IDP in the first place – that is, proving who you are to some agency, but only one such agency (so you can’t create two primary credentials and share one of them with someone underage). Bellovin cites Supreme Court cases about voter ID laws and the burdens they impose on people who are poor, old, young, disabled, rural, etc.

Fundamentally, it can be insurmountably hard for a lot of people to get, say, a driver’s license, or any other singular piece of ID that they can provide to an IDP in order to get set up on the system.

The usual answer for this is for IDPs to allow multiple kinds of ID. This does ease the burden on users, but at the expense of creating fatal weaknesses in the system: if you can set up an identity with multiple kinds of ID, you can visit different IDPs and set up an ID with each (just as many Americans today have driver’s licenses from more than one state).

The next obstacle is “user challenges,” like the problem of households with shared computers, or computers in libraries, hotels, community centers and other public places. The only effective way to handle these is to create (expensive) online credential stores, which are likely to be out of reach of the poor and disadvantaged people who disproportionately rely on public or shared computers.

Next are the “economic issues”: this stuff is expensive to set up and maintain, and someone’s gotta pay for it. We could ask websites that offer kid-inappropriate content to pay for it, but that sets up an irreconcilable conflict of interest. These websites are going to want to minimize their costs, and everything they can do to reduce costs will make the system unacceptably worse. For example, they could choose only to set up accounts with IDPs that are local to the company that operates the server, meaning that anyone who lives somewhere else and wants to access that website is going to have to somehow get certified copies of e.g. their birth certificate and driver’s license to IDPs on the other side of the planet. The alternative to having websites foot the bill for this is asking users to pay for it – meaning that, once again, we exclude poor people from the internet.

Finally, there’s “governance”: who runs this thing? In practice, the security and privacy guarantees of the CL protocol require two different kinds of wholly independent institutions: identity providers (who verify your documents), and certificate authorities (who issue cryptographic certificates based on those documents). If these two functions take place under one roof, the privacy guarantees of the system immediately evaporate.

An IDP’s most important role is verifying documents and associating them with a specific person. But not all IDPs will be created equal, and people who wish to cheat the system will gravitate to the worst IDPs. However, lots of people who have no nefarious intent will also use these IDPs, merely because they are close by, or popular, or were selected at random. A decision to strike off an IDP and rescind its verifications will force lots of people – potentially millions of people – to start over with the whole business of identifying themselves, during which time they will be unable to access much of the web. There’s no practical way for the average person to judge whether an IDP they choose is likely to be found wanting in the future.

So we can regulate IDPs, but who will do the regulation? Age verification laws affect people outside of a government’s national territory – anyone seeking to access content on a webserver falls under age verification’s remit. Remember, IDPs handle all kinds of sensitive data: do you want Russia, say, to have a say in deciding who can be an IDP and what disclosure rules you will have to follow?

To regulate IDPs (and certificate authorities), these entities will have to keep logs, which further compromises the privacy guarantees of the CL protocol.

Looming over all of this is a problem with building the CL protocol on regulated entities: CL is envisioned as a way to do all kinds of business, from opening a bank account to proving your vaccination status or your right to work or receive welfare. Authoritarian governments who order primary credential revocations of their political opponents could thoroughly and terrifyingly “unperson” them at the stroke of a pen.

The paper’s conclusions provide a highly readable summary of these issues, which constitute a stinging rebuke to anyone contemplating age-verification schemes. These go well beyond the UK, and are in the works in Canada, Australia, the EU, Texas and Louisiana.

Age verification is an impossibility, and an impossibly terrible idea with impossibly vast consequences for privacy and the open web, as my EFF colleague Jason Kelley explained on the Malwarebytes podcast:

https://www.malwarebytes.com/blog/podcast/2025/08/the-worst-thing-for-online-rights-an-age-restricted-grey-web-lock-and-code-s06e16

Politicians – even nontechnical ones – can make good tech policy, provided they take expert feedback seriously (and distinguish it from self-interested industry lobbying).

When it comes to tech policy, wanting it badly is not enough. The fact that it would be really cool if we could get technology to do something has no bearing on whether we can actually get technology to do that thing. NERD HARDER! isn’t a policy, it’s a wish.

Wish in one hand and shit in the other and see which one will be full first:

https://www.reddit.com/r/etymology/comments/oqiic7/studying_the_origins_of_the_phrase_wish_in_one/

Source: Pluralistic: “Privacy preserving age verification” is bullshit (14 Aug 2025) – Pluralistic: Daily links from Cory Doctorow

UK passport database images used in facial recognition scans

Privacy groups report a surge in UK police facial recognition scans against databases secretly stocked with passport photos, without any parliamentary oversight.

Big Brother Watch says the UK government has allowed images from the country’s passport and immigration databases to be made available to facial recognition systems, without informing the public or parliament.

The group claims the passport database contains around 58 million headshots of Brits, plus a further 92 million made available from sources such as the immigration database, visa applications, and more.

By way of comparison, the Police National Database contains circa 20 million photos of those who have been arrested by, or are at least of interest to, the police.

In a joint statement, Big Brother Watch, its director Silkie Carlo, Privacy International, and its senior technologist Nuno Guerreiro de Sousa, described the databases and lack of transparency as “Orwellian.” They have also written to both the Home Office and the Metropolitan Police, calling for a ban on the practice.

The comments come after Big Brother Watch submitted Freedom of Information requests, which revealed a significant uptick in police scanning the databases in question as part of the force’s increasing facial recognition use.

The number of searches by 31 police forces against the passport databases rose from two in 2020 to 417 by 2023, and scans using the immigration database photos rose from 16 in 2023 to 102 the following year.

Carlo said: “This astonishing revelation shows both our privacy and democracy are at risk from secretive AI policing, and that members of the public are now subject to the inevitable risk of misidentifications and injustice. Police officers can secretly take photos from protests, social media, or indeed anywhere and seek to identify members of the public without suspecting us of having committed any crime.

“This is a historic breach of the right to privacy in Britain that must end. We’ve taken this legal action to defend the rights of tens of millions of innocent people in Britain.”

[…]

Recent data from the Met attempted to imbue a sense of confidence in facial recognition, as the number of arrests the technology facilitated passed the 1,000 mark, the force said in July.

However, privacy campaigners were quick to point out that this accounted for just 0.15 percent of the total arrests in London since 2020. They suggested that despite the shiny 1,000 number, this did not represent a valuable return on investment in the tech.

Alas, the UK has not given up on its pursuit of greater surveillance powers. Prime Minister Keir Starmer, a former human rights lawyer, is a big fan of facial recognition, having said it was the answer to preventing future riots like the ones that broke out across the UK last year following the Southport murders.

Source: UK passport database images used in facial recognition scans • The Register

Be Warned: Lessons From Reddit’s Chaotic UK Age Verification Rollout

Age verification has officially arrived in the UK thanks to the Online Safety Act (OSA), a UK law requiring online platforms to check that all UK-based users are at least eighteen years old before allowing them to access broad categories of “harmful” content that go far beyond graphic sexual content. EFF has extensively criticized the OSA for eroding privacy, chilling speech, and undermining the safety of the children it aims to protect. Now that it’s gone into effect, these countless problems have begun to reveal themselves, and the absurd, disastrous outcome illustrates why we must work to avoid this age-verified future at all costs.

Perhaps you’ve seen the memes as large platforms like Spotify and YouTube attempt to comply with the OSA, while smaller sites—like forums focused on parenting, green living, and gaming on Linux—either shut down or cease some operations rather than face massive fines for not following the law’s vague, expensive, and complicated rules and risk assessments.

But even Reddit, a site that prizes anonymity and has regularly demonstrated its commitment to digital rights, was doomed to fail in its attempt to comply with the OSA. Though Reddit is not alone in bowing to the UK mandates, it provides a perfect case study and a particularly instructive glimpse of what the age-verified future would look like if we don’t take steps to stop it.

It’s Not Just Porn—LGBTQ+, Public Health, and Politics Forums All Behind Age Gates

On July 25, users in the UK were shocked and rightfully revolted to discover that their favorite Reddit communities were now locked behind age verification walls. Under the new policies, UK Redditors were asked to submit a photo of their government ID and/or a live selfie to Persona, the for-profit vendor that Reddit contracts with to provide age verification services.

 "SUBMIT PHOTO ID" or "ESTIMATE AGE FROM SELFIE."

For many, this was the first time they realized what the OSA would actually mean in practice—and the outrage was immediate. As soon as the policy took effect, reports emerged from users that subreddits dedicated to LGBTQ+ identity and support, global journalism and conflict reporting, and even public health-related forums like r/periods, r/stopsmoking, and r/sexualassault were walled off to unverified users. A few more absurd examples of the communities that were blocked off, according to users, include: r/poker, r/vexillology (the study of flags), r/worldwar2, r/earwax, r/popping (the home of grossly satisfying pimple-popping content), and r/rickroll (yup). This is, again, exactly what digital rights advocates warned about.

The OSA defines “harmful” in multiple ways that go far beyond pornography, so the obstacles UK users are experiencing are exactly what the law intended. Like other online age restrictions, the OSA obstructs way more than kids’ access to clearly adult sites. When fines are at stake, platforms will always default to overcensoring. So every user in the country is now faced with a choice: submit their most sensitive data for privacy-invasive analysis, or stay off of Reddit entirely. Which would you choose?

[…]

Rollout Chaos: The Tech Doesn’t Even Work! 

In the days after the OSA became effective, backlash to the new age verification measures spread across the internet like wildfire as UK users made their hatred of these new policies clear. VPN usage in the UK soared, over 500,000 people signed a petition to repeal the OSA, and some shrewd users even discovered that video game face filters and meme images could fool Persona’s verification software.

[…]

age verification measures still will not achieve their singular goal of protecting kids from so-called “harmful” online content. Teenagers will, uh, find a way to access the content they want. Instead of going to a vetted site like Pornhub for explicit material, curious young people (and anyone else who does not or cannot submit to age checks) will be pushed to the sketchier corners of the internet—where there is less moderation, more safety risk, and no regulation to prevent things like CSAM or non-consensual sexual content. In effect, the OSA and other age verification mandates like it will increase the risk of harm, not reduce it.

If that weren’t enough, the slew of practical issues that have accompanied Reddit’s rollout also reveals the inadequacy of age verification technology to meet our current moment. For example, users reported various bugs in the age-checking process, like being locked out or asked repeatedly for ID despite complying.

[…]

it is abundantly clear that age-gating the internet is not the solution to kids’ online safety. Whether due to issues with the discriminatory and error-prone technology, or simply because they lack either a government ID or personal device of their own, millions of UK internet users will be completely locked out of important social, political, and creative communities. If we allow age verification, we welcome new levels of censorship and surveillance with it—while further lining the pockets of big tech and the slew of for-profit age verification vendors that have popped up to fill this market void.

[…]

Source: Americans, Be Warned: Lessons From Reddit’s Chaotic UK Age Verification Rollout | Electronic Frontier Foundation

Even Volkswagen Is Doing Horsepower Subscriptions Now

[…]

we’re used to hearing about subscriptions for improved performance and creature comforts on luxury cars, but VW’s trialing BMW and Mercedes-Benz’s greatest hits of consumer-hostile policies and gating an additional 27 horsepower behind a $22.30 monthly payment on the ID.3. Alternatively, owners can shell out $878 to unlock that power permanently, for the life of the vehicle.

This news comes courtesy of AutoExpress, and it’s alarming for several reasons. First, again, the ID.3 isn’t exactly a bargain, starting at the equivalent of $41,770, but it’s also no Mercedes EQE. Second, as the article points out, the car is registered at 228 hp stock, which affects insurance rates, even though owners only get 201 hp before subscribing. So, you’re paying a penalty on your insurance premium based on power that you can only access if you give Volkswagen yet more money every month.

This monthly fee also lifts torque from the standard 195 lb-ft to 228 lb-ft, and VW says that the increase in output doesn’t impact range

[…]

The best outcome we can hope for in these cases is that the outcry against it becomes so loud that VW relents. That’s worked to some degree on this side of the pond, with BMW’s heated-seat policies. But the retractions don’t last forever, and automakers are pretty much set on biding their time until software-locking everything is normalized, and they can get away with all of it.

Source: Even Volkswagen Is Doing Horsepower Subscriptions Now

So… you paid for the hardware. It is sitting in the car you own, which is parked in front of your house. And they want to ask more for what you already bought? Absolutely ridiculous and I hope the car hacking scene finds a way to circumvent this.

EU Chat Control Plan Gains Support Again, Threatening Encryption and Bringing Mass Surveillance and Age Verification

A controversial European Union proposal dubbed “Chat Control” is regaining momentum, with 19 out of 27 EU member states reportedly backing the measure.

The plan would mandate that messaging platforms, including WhatsApp, Signal and Telegram, scan every message, photo and video sent by users starting in October, even if end-to-end encryption is in place, popular French tech blogger Korben wrote on Monday.

Denmark reintroduced the proposal on July 1, the first day of its EU Council presidency. France, once opposed, is now in favor, Korben said, citing Patrick Breyer, a former member of the European Parliament for Germany and the European Pirate Party.

Belgium, Hungary, Sweden, Italy and Spain are also in favor, while Germany remains undecided. However, if Berlin joins the majority, a qualified council vote could push the plan through by mid-October, Korben said.

A qualified majority in the EU Council is achieved when two conditions are met. First, at least 55 percent of member states, meaning 15 out of 27, must vote in favor. Second, those countries must represent at least 65% of the EU’s total population.
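
As a quick sanity check on those thresholds (the population shares below are illustrative, not official figures):

```python
# Qualified-majority check per the two conditions described above.
def qualified_majority(states_for: int, pop_share_for: float,
                       total_states: int = 27) -> bool:
    return states_for >= 0.55 * total_states and pop_share_for >= 0.65

# 19 of 27 states clears the 55% bar (15 states needed); whether the
# second condition holds depends on which countries those 19 are.
print(qualified_majority(19, 0.70))   # True with an assumed 70% pop share
print(qualified_majority(19, 0.60))   # False: population threshold unmet
```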

[Chart: EU Chat Control bill finds support. Source: Pavol Luptak]

Pre-encryption scanning on devices

Instead of weakening encryption, the plan seeks to implement client-side scanning, meaning software embedded in users’ devices that inspects content before it is encrypted. “A bit like if the Post Office came to read all your letters in your living room before you put them in the envelope,” Korben said.

He added that the real target isn’t criminals, who use encrypted or decentralized channels, but ordinary users whose private conversations would now be open to algorithmic scrutiny.

The proposal cites the prevention of child sexual abuse material (CSAM) as its justification. However, it would result in “mass surveillance by means of fully automated real-time surveillance of messaging and chats and the end of privacy of digital correspondence,” Breyer wrote.
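
Mechanically, client-side scanning slots in between composing a message and encrypting it. Below is a minimal Python sketch of that flow; real proposals use perceptual hashes or ML classifiers, not the exact-match SHA-256 digest used here for simplicity, and the blocklist entry is a placeholder.

import hashlib

# Hypothetical provider-supplied blocklist of content digests.
KNOWN_BAD_DIGESTS = {"0" * 64}  # placeholder entry, not a real digest

def send_message(plaintext: bytes, encrypt, report):
    """The scan runs on-device BEFORE encryption, so end-to-end
    encryption stays technically intact but content is inspected anyway."""
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in KNOWN_BAD_DIGESTS:
        report(digest)              # flagged to the provider/authorities
    return encrypt(plaintext)       # E2EE only happens after the scan

# Usage sketch (both callables are hypothetical stand-ins):
# send_message(b"hello", encrypt=my_e2ee_encrypt, report=my_reporter)

Korben's Post Office analogy is apt: the envelope still gets sealed, but only after someone has read the letter in your living room.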

Beyond scanning, the package includes mandatory age verification, effectively removing anonymity from messaging platforms. Digital freedom groups are asking citizens to contact their MEPs, sign petitions and push back before the law becomes irreversible.

[…]

Source: EU Chat Control Plan Gains Support, Threatens Encryption

Age verification is going horribly wrong in the UK and mass surveillance threatens freedom of thought, something we fortunately still have in the EU. This must be stopped.

Meta eavesdropped on period-tracker app’s users, SF jury rules

Meta lost a major privacy trial on Friday, with a jury in San Francisco ruling that the Menlo Park giant had eavesdropped on the users of the popular period-tracking app Flo. The plaintiff’s lawyers who sued Meta are calling this a “landmark” victory — the tech company contends that the jury got it all wrong.

The case goes back to 2021, when eight women sued Flo and a group of other tech companies, including Google and Facebook, now known as Meta. The stakes were extremely personal. Flo asked users about their sex lives, mental health and diets, and guided them through menstruation and pregnancy. Then, the women alleged, Flo shared pieces of that data with other companies. The claims were largely based on a 2019 Wall Street Journal story and a 2021 Federal Trade Commission investigation.

Google, Flo and the analytics company Flurry, which was also part of the lawsuit, reached settlements with the plaintiffs, as is common in class action lawsuits about tech privacy. But Meta stuck it out through the entire trial and lost.

[…]

Their complaint also pointed to Facebook’s terms for its business tools, which said the company used so-called “event data” to personalize ads and content.

In a 2022 filing, the tech giant admitted that Flo used Facebook’s kit during this period and that the app sent data connected to “App Events.” But Meta denied receiving intimate information about users’ health.

Nonetheless, the jury ruled against Meta. Along with the eavesdropping decision, the group determined that Flo’s users had a reasonable expectation they weren’t being overheard or recorded, as well as ruling that Meta didn’t have consent to eavesdrop or record. The unanimous verdict was that the massive company violated the California Invasion of Privacy Act.

The jury’s ruling could have far-reaching effects. Per a June filing about the case’s class action status, more than 3.7 million people in the United States registered for Flo between November 2016 and February 2019. Those potential claimants are expected to be updated via email and on a case website; it’s not yet clear what the remittance from the trial or settlements might be.

[…]

Source: Meta eavesdropped on period-tracker app’s users, SF jury rules

Didn’t Take Long To Reveal The UK’s Online Safety Act Is Exactly The Privacy-Crushing Failure Everyone Warned About

[…]

the real kicker is what content is now being gatekept behind invasive age verification systems. Users in the UK now need to submit a selfie or government ID to access:

[…]

Yes, you read that right. A law supposedly designed to protect children now requires victims of sexual assault to submit government IDs to access support communities. People struggling with addiction must undergo facial recognition scans to find help quitting drinking or smoking. The UK government has somehow concluded that access to basic health information and peer support networks poses such a grave threat to minors that it justifies creating a comprehensive surveillance infrastructure around it.

[…]

And this is all after a bunch of other smaller websites and forums shut down earlier this year when other parts of the law went into effect.

This is exactly what happens when you regulate the internet as if it’s all just Facebook and Google. The tech giants can absorb the compliance costs, but everyone else gets crushed.

The only websites with the financial capacity to work around the government’s new regulations are the ones causing the problems in the first place. And now Meta, which already has a monopoly on a number of near-essential online activities (from local sales to university group chats), is reaping the benefits.

[…]

The age verification process itself is a privacy nightmare wrapped in security theater. Users are being asked to upload selfies that get run through facial recognition algorithms, or hand over copies of their government-issued IDs to third-party companies. The facial recognition systems are so poorly implemented that people are easily fooling them with screenshots from video games—literally using images from the video game Death Stranding. This isn’t just embarrassing; it reveals the fundamental security flaw at the heart of the entire system. If these verification methods can’t distinguish between a real person and a video game character, what confidence should we have in their ability to protect the sensitive biometric data they’re collecting?

But here’s the thing: even when these systems “work,” they’re creating massive honeypots of personal data. As we’ve seen repeatedly, companies collecting biometric data and ID verification inevitably get breached, and suddenly intimate details about people’s online activity become public. Just ask the users of Tea, a women’s dating safety app that recently exposed thousands of users’ verification selfies after requiring facial recognition for “safety.”

The UK government’s response to widespread VPN usage has been predictably authoritarian. First, they insisted nothing would change:

“The Government has no plans to repeal the Online Safety Act, and is working closely with Ofcom to implement the Act as quickly and effectively as possible to enable UK users to benefit from its protections.”

But then, Tech Secretary Peter Kyle deployed the classic authoritarian playbook: dismissing all criticism as support for child predators. This isn’t just intellectually dishonest—it’s a deliberate attempt to shut down legitimate policy debate by smearing critics as complicit in child abuse. It’s particularly galling given that the law Kyle is defending will do absolutely nothing to stop actual predators, who will simply migrate to unregulated platforms or use the same VPNs that law-abiding citizens are now flocking to.

[…]

Meanwhile, the actual harms it purports to address? Those remain entirely unaddressed. Predators will simply move to unregulated platforms, encrypted messaging, or services that don’t comply. Or they’ll just use VPNs. The law creates the illusion of safety while actually making everyone less secure.

This is what happens when politicians decide to regulate technology they don’t understand, targeting problems they can’t define, with solutions that don’t work. The UK has managed to create a law so poorly designed that it simultaneously violates privacy, restricts freedom, harms small businesses, and completely fails at its stated goal of protecting children.

And all of this was predictable. Hell, it was predicted. Civil society groups, activists, legal experts, all warned of these results and were dismissed by the likes of Peter Kyle as supporting child predators.

[…]

A petition set up on the UK government’s website demanding a repeal of the entire OSA received many hundreds of thousands of signatures within days. The government has already brushed it off with more nonsense, promising that the enforcer of the law, Ofcom, “will take a sensible approach to enforcement with smaller services that present low risk to UK users, only taking action where it is proportionate and appropriate, and will focus on cases where the risk and impact of harm is highest.”

But that’s a bunch of vague nonsense that overlooks the fact that no platform wants to be on the receiving end of such an investigation, so platforms will take these overly aggressive steps anyway to avoid scrutiny.

[…]

What makes this particularly tragic is that there were genuine alternatives. Real child safety measures—better funding for mental health support, improved education programs, stronger privacy protections that don’t require mass surveillance—were all on the table. Instead, the UK chose the path that maximizes government control while minimizing actual safety.

The rest of the world should take note.

Source: Didn’t Take Long To Reveal The UK’s Online Safety Act Is Exactly The Privacy-Crushing Failure Everyone Warned About

Belgium Targets Internet Archive’s ‘Open Library’ in Sweeping Site Blocking Order

The Business Court in Brussels, Belgium, has issued a broad site-blocking order that aims to restrict access to shadow libraries including Anna’s Archive, Libgen, OceanofPDF, Z-Library, and the Internet Archive’s Open Library. In addition to ISP blocks, the order also directs search engines, DNS resolvers, advertisers, domain name services, CDNs and hosting companies to take action. For now, Open Library doesn’t appear to be actively blocked.

Traditional site-blocking measures that require local ISPs to block subscriber access to popular pirate sites are in common use around the world.

Note: this article was updated to add that Open Library does not appear to be actively blocked. More details here.

[…]

A few months ago DNS blocking arrived in Belgium, where several orders required both ISPs and DNS resolvers to restrict access to pirate sites. This prompted significant pushback, most notably Cisco’s OpenDNS ceasing operations in the country.
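
Resolver-level blocking is easy to observe from the outside: the same name answers through an unfiltered resolver and fails through a filtered one. A rough sketch in Python, using the third-party dnspython package; the ISP resolver IP and the domain below are placeholders, not real Belgian infrastructure:

import dns.exception
import dns.resolver

def resolves_via(nameserver: str, domain: str) -> bool:
    """True if `domain` returns an A record through `nameserver`."""
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [nameserver]
    try:
        r.resolve(domain, "A")
        return True
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer,
            dns.exception.Timeout):
        # Filtered resolvers often return NXDOMAIN; some serve a
        # block-page IP instead, which this simple check won't catch.
        return False

# Compare a hypothetical ISP resolver with Quad9's unfiltered service:
# resolves_via("192.0.2.53", "blocked-example.org")   # placeholder ISP
# resolves_via("9.9.9.10", "blocked-example.org")     # unfiltered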

Broad Blocking Order Targets Internet Archive’s ‘Open Library’

A new order, issued by the Brussels Business Court in mid-July, targets an even broader set of intermediaries and stands out for other reasons as well.

[…]

Open Library was created by the late Aaron Swartz and Internet Archive’s founder Brewster Kahle, among others. As an open library its goal is to archive all published books, allowing patrons to borrow copies of them online.

The library aims to operate similarly to other libraries, loaning only one copy per book at a time. Instead of licensing digital copies, however, it has an in-house scanning operation to create and archive its own copies.


The Open Library project was previously sued by publishers in the United States, with the Internet Archive ultimately losing the case. As a result, over 500,000 books were made unavailable.

[…]

According to the publishers, the operators of the Open Library are not easily identified, while legally required information is allegedly missing from the site, which they see as an indication that the site is meant to operate illegally.

This description seems at odds with the fact that Open Library is part of the Internet Archive, which is a U.S.-registered 501(c)(3) non-profit.

[…]

Internet Archive was not heard in this case, as the blocking order was issued ex parte, without its knowledge. This is remarkable, as the organization is a legal entity in the United States, which receives support from many American libraries.

The broad nature of the order doesn’t stop there either. In addition to requiring ISPs, including Elon Musk’s Starlink, to block the library’s domain names, it also directs a broad range of other intermediaries to take action.

This includes search engines, DNS resolvers, advertisers, domain name services, CDNs, and hosting companies. An abbreviated overview of the requested measures is as follows:

[…]

Update: After publication, a representative from Internet Archive informed us that they are not aware of any disruption to their services at this time.

The Open Library domain (openlibrary.org) doesn’t appear on the master blacklist of FOD Economie either, while several domains of the other four ‘target sites’ are included. We have reached out to the responsible authority in Belgium to get clarification on this discrepancy and will update the article if we hear back.

A copy of the order from the Business Court in Brussels (in Dutch) is available here (pdf)

Source: Belgium Targets Internet Archive’s ‘Open Library’ in Sweeping Site Blocking Order (Update) * TorrentFreak

So this decision is totally unenforceable by Belgium, but it does show how corrupt and beholden to big business the Belgian system actually is.

Public ChatGPT Queries Are Getting Indexed By Google and Other Search Engines (update: fixed!)

An anonymous reader quotes a report from TechCrunch: It’s a strange glimpse into the human mind: If you filter search results on Google, Bing, and other search engines to only include URLs from the domain “https://chatgpt.com/share,” you can find strangers’ conversations with ChatGPT. Sometimes, these shared conversation links are pretty dull — people ask for help renovating their bathroom, understanding astrophysics, and finding recipe ideas. In another case, one user asks ChatGPT to rewrite their resume for a particular job application (judging by this person’s LinkedIn, which was easy to find based on the details in the chat log, they did not get the job). Someone else is asking questions that sound like they came out of an incel forum. Another person asks the snarky, hostile AI assistant if they can microwave a metal fork (for the record: no), but they continue to ask the AI increasingly absurd and trollish questions, eventually leading it to create a guide called “How to Use a Microwave Without Summoning Satan: A Beginner’s Guide.”

ChatGPT does not make these conversations public by default. A conversation would be appended with a “/share” URL only if the user deliberately clicks the “share” button on their own chat and then clicks a second “create link” button. The service also declares that “your name, custom instructions, and any messages you add after sharing stay private.” After clicking through to create a link, users can toggle whether or not they want that link to be discoverable. However, users may not anticipate that other search engines will index their shared ChatGPT links, potentially betraying personal information (my apologies to the person whose LinkedIn I discovered).
According to OpenAI, these chats were indexed as part of an experiment. “ChatGPT chats are not public unless you choose to share them,” an OpenAI spokesperson told TechCrunch. “We’ve been testing ways to make it easier to share helpful conversations, while keeping users in control, and we recently ended an experiment to have chats appear in search engine results if you explicitly opted in when sharing.”

A Google spokesperson also weighed in, telling TechCrunch that the company has no control over what gets indexed. “Neither Google nor any other search engine controls what pages are made public on the web. Publishers of these pages have full control over whether they are indexed by search engines.”
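
The controls Google is alluding to are the standard publisher-side mechanisms: robots.txt governs crawling, and a noindex directive keeps crawled pages out of results. You can check the former with nothing but the Python standard library; the share URL below is a made-up example, not a real conversation:

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://chatgpt.com/robots.txt")
rp.read()  # fetches and parses the live robots.txt
# May a generic crawler fetch a shared-conversation URL?
print(rp.can_fetch("*", "https://chatgpt.com/share/example-chat-id"))

# Indexing of pages that are already crawlable is separately controlled
# by the publisher via a <meta name="robots" content="noindex"> tag or
# an X-Robots-Tag response header.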

Source: Public ChatGPT Queries Are Getting Indexed By Google and Other Search Engines

UK’s most tattooed man blocked from accessing porn online by new rules

Britain’s most tattooed man has a lot more time on his hands and not a lot else thanks to new porn laws.

The King of Ink says facial recognition tech has made it harder to chat to webcam girls, after sites started mistaking his tattooed face for a mask.

The new rules came into force last week, introducing stricter checks under Ofcom’s children’s codes.

The King of Ink, as he’s legally known, said: ‘Some of the websites are asking for picture verification, like selfies, and it’s not recognising my face.

‘It’s saying “remove your mask” because the technology is made so you can’t hold up a picture to the camera or wear a mask.

‘Would this also be the case for someone who is disfigured? They should have thought of this from day one.’

The businessman and entrepreneur, from Stechford, Birmingham, feels discriminated against on the basis of his permanent identity.

The tattoo enthusiast says his heavily tattooed face is a permanent part of his identity (Picture: @kingofinklandkingbodyart)

‘It’s as important as the name really, and I changed my name legally,’ he said.

‘Without a name you haven’t got an identity, and it’s the same with a face.

[…]

Source: UK’s most tattooed man blocked from accessing porn online by new rules | News UK | Metro News

So many ways to circumvent it, so many ways it breaks, and really, age verification’s only winners are the tech companies that people are forced to pay money to.

Google AI is watching — how to turn off Gemini on Android

[…]

Why you shouldn’t trust Gemini with your data

Gemini promises to simplify how you interact with your Android — fetching emails, summarizing meetings, pulling up files. But behind that helpful facade is an unprecedented level of centralized data collection, powered by a company known for privacy washing and for misleading users about how their data is used, and one that was hit with $2.9 billion in fines in 2024 alone, mostly for privacy violations and antitrust breaches.

Other people may see your sensitive information

Even more concerning, human reviewers may process your conversations. While Google claims these chats are disconnected from your Google account before review, that doesn’t mean much when a simple prompt like “Show me the email I sent yesterday” might return personal data like your name and phone number.

Your data may be shared beyond Google

Gemini may also share your data with third-party services. When Gemini interacts with other services, your data gets passed along and processed under their privacy policies, not just Google’s. Right now, Gemini mostly connects with Google services, but integrations with apps like WhatsApp and Spotify are already showing up. Once your data leaves Google, you cannot control where it goes or how long it’s kept.

The July 2025 update keeps Gemini connected without your consent

Before July, turning off Gemini Apps Activity automatically disabled all connected apps, so you couldn’t use Gemini to interact with other services unless you allowed data collection for AI training and human review. But Google’s July 7 update changed this behavior and now keeps Gemini connected to certain services — such as Phone, Messages, WhatsApp, and Utilities — even if activity tracking is off.

While this might sound like a privacy-conscious change — letting you use Gemini without contributing to AI training — it still raises serious concerns. Google has effectively preserved full functionality and ongoing access to your data, even after you’ve opted out.

Can you fully disable Gemini on Android?

No, and that’s by design.

[…]

How to turn off Gemini AI on Android

  1. Open the Gemini app on your Android.
  2. Tap your profile icon in the top-right corner.
  3. Go to Gemini Apps Activity*.
  4. Tap Turn off > Turn off and delete activity, and follow the prompts.
  5. Select your profile icon again and go to Apps**.
  6. Tap the toggle switch to prevent Gemini from interacting with Google apps and third-party services.

*Gemini Apps Activity is a setting that controls whether your interactions with Gemini are saved to your Google account and used to improve Google’s AI systems. When it’s on, your conversations may be reviewed by humans, stored for up to 3 years, and used for AI training. When it’s off, your data isn’t used for AI training, but it’s still stored for up to 72 hours so Google can process your requests and feedback.

**Apps are the Google apps and third-party services that Gemini can access to perform tasks on your behalf — like reading your Gmail, checking your Google Calendar schedule, retrieving documents from Google Drive, playing music via Spotify, or sending messages on your behalf via WhatsApp. When Gemini is connected to these apps, it can access your personal content to fulfill prompts, and that data may be processed by Google or shared with the third-party app according to their own privacy policies.

Source: Google AI is watching — how to turn off Gemini on Android | Proton

Visa and Mastercard Fielding A Ton Of Complaints Over “NSFW” Games Disappearing On Platforms, acting as censors

A week or so ago, Karl Bode wrote about Vice Media’s idiotic decision to disappear several articles that had been written by its Waypoint property concerning Collective Shout. Collective Shout is an Australian group that pretends to be a feminist organization, when, in reality, it operates much more like any number of largely evangelical groups bent on censoring any content that doesn’t align with their own viewpoints (which they insist become your viewpoints as well). The point of Karl’s post was to correctly point out that Collective Shout’s decision to go after the payment processors for the major video game marketplaces over their offering NSFW games shouldn’t be hidden from the public in the interest of clickbait non-journalism.

But that whole thing about Collective Shout putting on a pressure campaign on payment processors is in and of itself a big deal, as is the response to it. Both Steam and itch.io recently either removed or de-indexed a ton of games they’re labeling NSFW, chiefly along guidelines clearly provided by the credit card companies themselves. Now, Collective Shout will tell you that it is mostly interested in going after games that depict vile actions in some ways, such as rape, child abuse, and incest.

No Mercy. That’s the name of the incest-and-rape-focused game that was geo-blocked in Australia this April, following a campaign by the local pressure group Collective Shout. The group, which stands against “the increasing pornification of culture”, then set its sights on a broader target – hundreds of other games they identified as featuring rape, incest, or child sexual abuse on Steam and itch.io. “We approached payment processors because Steam did not respond to us,” said the group of its latest campaign.

The move was effective. Steam began removing sex-related games it deemed to violate the standards of its payment processors, presenting the choice as a tradeoff in a statement to Rock Paper Shotgun: “We are retiring those games from being sold on the Steam Store, because loss of payment methods would prevent customers from being able to purchase other titles and game content on Steam.”

Itch.io followed that up shortly afterwards with its de-indexing plan, but went further and did this with all NSFW games offered on the platform. Unlike Steam, itch.io was forthcoming as to their reasoning for its actions. And they were remarkably simple.

“Our ability to process payments is critical for every creator on our platform,” Corcoran said. “To ensure that we can continue to operate and provide a marketplace for all developers, we must prioritize our relationship with our payment partners and take immediate steps towards compliance.”

Digital marketplaces being unable to collect payment through trusted partners would be, to put it tersely, the end of their business. Those same payment processors can get predictably itchy about partnering with platforms that host content that someone out there, or many someones as part of a coordinated campaign, may not like for fear that will sully their reputation. And because these are private companies we’re talking about, their fear along with any of their own sense of morality are at play here. The end result is a digital world filled with digital marketplaces that all exist under an umbrella of god-like payment processors that can pretty much dictate to those other private entities what can be on offer and what cannot.

And, as an executive from Appcharge chimed in, the processors will hang this all on the amount of fraud and chargebacks that come along with adult content, but that doesn’t change the question about whether payment processors should be neutral on legal but morally questionable content or not. Because, as you would expect, the aims of folks like Collective Shout almost certainly don’t end with things like rape and incest.

It’s possible that Collective Shout’s campaign highlighted a level of operational and reputational risk that payment processors weren’t aware of, and of a severity they didn’t expect. “I’m guessing it’s also the moral element,” Tov-Ly says. “It just makes sense, right? Why would you condone incest or rape promoting games?”

Tov-Ly is of the opinion that payment processors offer a utility, and should have no more role in the moral arbitration of art than your electricity company – meaning, none at all. “Whenever you open that Pandora’s box, you’re not impartial anymore,” he says. “Today it’s rape games and incest, but tomorrow it could be another lobbying group applying pressure on LGBT games in certain countries.”

We’ve already seen this sort of thing when it comes to book and curriculum bans that are currently plaguing far too much of the country. When porn can mean Magic Treehouse, the word loses all meaning.

What is actually happening is that payment processors are feeling what they believe is “public pressure”, but which is actually just a targeted and coordinated campaign from a tiny minority of people who watched V For Vendetta and thought it was an instruction manual. Well, the public has caught wind of this, as have game publishers that might be caught up in this censorship or whatever comes next, and coordinated contact campaigns to payment processors to complain about this new censorship are being conducted.

Gilbert Martinez had just poured himself a glass of water and was pacing his suburban home in San Antonio, Texas while trying to navigate Mastercard’s byzantine customer service hotline. He was calling to complain about recent reports that the company is pressuring online gaming storefronts like Steam and Itch.io to ban certain adult games. He estimates his first call lasted about 18 minutes and ended with him lodging a formal complaint in the wrong department.

Martinez is part of a growing backlash to Steam and Itch.io purging thousands of games from their databases at the behest of payment processing companies. Australia-based anti-porn group Collective Shout claimed credit for the new wave of censorship after inciting a write-in campaign against Visa and Mastercard, which it accused of profiting off “rape, incest, and child sexual abuse game sales.” Some fans of gaming are now mounting reverse campaigns in the hopes of nudging Visa and Mastercard in the opposite directions.

If noise is what is going to make these companies go back to something resembling sanity, this will hopefully do the trick. We’re already seeing examples of games that are being unjustly censored, described as porn when they are very much not. Not to mention instances where nuance is lost and the “porn” content is actually the opposite.

Vile: Exhumed is a textbook example of what critics of the sex game purge always feared: that guidelines aimed at clamping down on pornographic games believed to be encouraging or glorifying sexual violence would inevitably ensnare serious works of art grappling with difficult and uncomfortable subject matter in important ways. Who gets to decide which is which? For a long time, it appeared to be Steam and Itch.io. Last week’s purges revealed it’s actually Visa and Mastercard, and whoever can frighten them the most with bad publicity.

Some industry trade groups have also weighed in. The International Game Developers Association (IGDA) released a statement stating that “censorship like this is materially harmful to game developers” and urging a dialogue between “platforms, payment processors, and industry leaders with developers and advocacy groups.” “We welcome collaboration and transparency,” it wrote. “This issue is not just about adult content. It is about developer rights, artistic freedom, and the sustainability of diverse creative work in games.”

This is the result of a meddling minority attempting to foist their desires on everyone else, plain and simple. Choking the money supply is a smart choice, sure, but one that should be recognized in this case for what it is: censorship based on proclivities that are not widely shared. And if there really is material in these games that is illegal, it should obviously be done away with.

But we should not be playing this game of pretending content that is not widely seen as immoral should somehow be choked of its ability to participate in commerce.

Source: Credit Card Companies Fielding A Ton Of Complaints Over NSFW Games Disappearing On Platforms | Techdirt