UK government says digital ID won’t be compulsory – unless you want a job. Even Palantir steps back from this one.

The British government has finally given more details about its proposed digital ID project, directly responding to the 2.76 million naysayers who signed an online petition calling for it to be ditched.

This came a day after controversial spy-tech biz Palantir said it had no intention of helping the government implement the initiative – announced last week by prime minister Keir Starmer but not included in his political party’s manifesto at last year’s general election.

It is for this reason that Louis Mosley, UK boss at Palantir – the grandson of Sir Oswald Mosley – says his employer is not getting involved, despite being mentioned as a potential bidder.

“Digital ID is not one that was tested at the last election. It wasn’t in the manifesto. So we haven’t had a clear resounding public support at the ballot box for its implementation. So it isn’t one for us,” he told The Times.

[…]

Following in the footsteps of Estonia and other nations, including China, the UK government wants to introduce a “free” digital ID card for people aged 16 and over – though it is consulting on whether this should start at 13 – to let people access public and private services “seamlessly.” It will “build on” GOV.UK One Login and the GOV.UK Wallet, we’re told.

“This system will allow people to access government services – such as benefits or tax records – without needing to remember multiple logins or provide physical documents.

[…]

The card, scheduled to be implemented by the end of the current Parliament, means employers will have to check digital ID when carrying out right-to-work checks. Despite previously saying the card would be mandatory, the government confirmed: “For clarity, it will not be a criminal offence to not hold a digital ID and police will not be able to demand to see a digital ID as part of a ‘stop and search.’”

[…]

Big Brother Watch says the national ID system is a “serious threat to civil liberties.”

“Digital ID systems can be uniquely harmful to privacy, equality and civil liberties. They would allow the state to amass vast amounts of personal information about the public in centralised government databases. By linking government records through a unique single identifier, digital ID systems would make it very easy to build up a comprehensive picture of an individual’s life.”

[…]

Source: UK government says digital ID won’t be compulsory – honest • The Register

It also creates a single point of attack for anyone wanting to hack the database. Centralised databases are an incredibly broken idea.

Also see: New digital ID will be mandatory to work in the UK. Ausweis, bitte!

And a quick search for “centralised database”

Outrage That NL Tax and Customs Authorities will give all data to US by switching to MS 365: ‘Insult to Parliament’

‘An insult not only to the House of Representatives, but also to Dutch and European businesses’, says GroenLinks-PvdA MP Barbara Kathmann about the switch of government services to Microsoft. Earlier today, outgoing State Secretary for Taxation Eugène Heijnen (BBB) informed the House of Representatives about the switch of the Tax Authorities, the Allowances department, and Customs to Microsoft 365. This means that these services will become dependent on this American software giant for their daily work.

Outrage over Tax Authorities’ switch to Microsoft: ‘An insult to the House of Representatives’

Over the past year, there have been frequent debates about the digital independence of the Netherlands, and the call to become independent from American companies is growing louder. The fact that the State Secretary is now announcing that three government services will switch to Microsoft anyway has angered Kathmann. ‘They are essentially just ushering us into the American cloud during this caretaker period, and that is really not necessary.’ Bert Hubert, former supervisor of the intelligence services, previously stated that Dutch tax data could end up on American servers via email contact.

Cluster of European companies

Kathmann emphasizes that it would be naive to think that we could be independent of Microsoft tomorrow, but that Dutch and European businesses are capable of a lot.

[…]

According to the State Secretary, this is not possible because there are no comparable European alternatives. Kathmann explains that the intention is precisely not to become dependent on one supplier.

[…]

Stimulate development

Last week, caretaker Prime Minister Dick Schoof called on executives of large companies to become independent from non-European suppliers. Schoof also emphasized in the House two days ago that this is a priority.

[…]

the government can play an important role in stimulating the development of European and Dutch technology. ‘The government is the largest IT buyer in the Netherlands. If it becomes the largest buyer of European Dutch products, then it will really take off.’

[…]

Source: Kagi Translate

It really is amazing how at a time when everyone is talking about digital sovereignty, the Tax people – responsible for handling extremely sensitive data – decide to give it all to an increasingly untrustworthy ally.

Signal threatens to exit Germany over Chat Control vote – on the 14th of October we’ll know whether Denmark has managed to turn the EU into a Stasi surveillance state.

The Signal Foundation announced on October 3, 2025, that it would withdraw its encrypted messaging service from Germany and potentially all of Europe if the European Union’s Chat Control proposal passes in an upcoming vote. According to Signal President Meredith Whittaker, the messaging platform faces an existential choice between compromising its encryption integrity and leaving European markets entirely.

The German government holds a decisive position in the October 14, 2025 vote on the Chat Control regulation, which aims to combat child sexual abuse material but requires mass scanning of every message, photo, and video on users’ devices.

[…]

The Chat Control proposal mandates that messaging services like Signal, WhatsApp, Telegram, and Threema scan files on smartphones and end devices without suspicion to detect child sexual abuse material. This scanning would occur before encryption, according to technical documentation from the European Commission’s September 2020 draft on detecting such content in end-to-end encrypted communications.

[…]

The Chat Control vote reveals deep divisions among EU member states on digital privacy and surveillance. Fifteen countries support the proposal, eight oppose it, and several remain undecided as the October 14 deadline approaches.

[…]

Germany’s position remains critical and undecided. Despite expressing concerns about breaking end-to-end encryption at a September 12 Law Enforcement Working Party meeting, the government refrained from taking a definitive stance. This indecision makes Germany’s vote potentially decisive for the proposal’s fate.

Belgium, Italy, and Latvia remain undecided as of September 23, 2025. These countries want to reach agreement given the expiring interim regulation, with all three supporting the proposal’s goals while remaining formally uncommitted. Italy specifically voices doubts concerning inclusion of new child sexual abuse material in the scope of application. Latvia assesses the text positively but faces uncertainty about political support.

Poland and Austria share the desire for solutions but maintain skepticism about the current proposal’s approach. Greece’s position remains unclear, with the government evaluating technical implementation details. Sweden continues examining the compromise text and working on a position. Slovakia appears in both opposition and undecided categories depending on sources, reflecting the fluid nature of negotiations.

The arithmetic suggests that Germany’s decision could determine whether the required majority materializes. With 15 states supporting and 8 opposing, the undecided nations hold the balance.

[…]

Technical experts have warned that client-side scanning fundamentally undermines encryption security. A comprehensive 2021 study titled “Bugs in Our Pockets: The Risks of Client-Side Scanning,” authored by 14 security researchers including cryptography pioneers Whitfield Diffie and Ronald Rivest, concluded that such systems create serious security and privacy risks for all society.

The researchers explained that scanning every message—whether performed before or after encryption—negates the premise of end-to-end encryption. Instead of breaking Signal’s encryption protocol directly, hostile actors would only need to exploit access granted to the scanning system itself. Intelligence agencies have acknowledged this threat would prove catastrophic for national security, according to the technical consensus outlined in the research paper.

[…]

Germany’s historical experience with mass surveillance through the Stasi secret police informs current privacy advocacy. The country maintained principled opposition to Chat Control during the previous coalition government, though this position became uncertain after the current government took office.

[…]

Denmark assumed the EU Council Presidency on July 1, 2025, and immediately reintroduced Chat Control as a legislative priority. Lawmakers targeted the October 14 adoption date if member states reach consensus. France, which previously opposed the measure, shifted to support the proposal by July 28, 2025, creating momentum for the 15 member states now backing the regulation.

[…]

Source: Signal threatens to exit Germany over Chat Control vote

Mesh-Mapper – Drone Remote ID mapping and mesh alerts

Project Overview

The FAA’s Remote ID requirement, which became mandatory for most drones in September 2023, means every compliant drone now broadcasts its location, pilot position, and identification data via WiFi or Bluetooth. While this regulation was designed for safety and accountability (or to violate pilot privacy 😊), it also creates an unprecedented opportunity for personal airspace awareness.

This project harnesses that data stream to create a comprehensive detection and tracking system that puts you in control of knowing what’s flying overhead. Built around the powerful dual-core Xiao ESP32 S3 microcontroller, the system captures Remote ID transmissions on both WiFi and Bluetooth simultaneously, feeding the data into a sophisticated Python Flask web application that provides real-time visualization and logging.

But here’s where it gets really interesting: the system also integrates with Meshtastic networks, allowing multiple detection nodes to share information across a mesh network. This means you can deploy several ESP32 nodes across your property or neighborhood and have them all contribute to a unified picture of drone activity in your area.

Why This Project Matters

Remote ID represents a fundamental shift in airspace transparency. For the first time, civilian drones are required to broadcast their identity and location continuously. This creates opportunities for:

  • Privacy Protection: Know when drones are operating near your property and who is operating them
  • Personal Security: Monitor activity around sensitive locations like your home or business
  • Community Awareness: Share drone activity information with neighbors through mesh networks
  • Research: Understand drone traffic patterns and airspace usage in your area
  • Education: Learn about wireless protocols and modern airspace management
The key difference between this system and commercial drone detection solutions is that it puts the power of airspace awareness directly in your hands, using affordable hardware and open-source software.

While you can build this project using off-the-shelf ESP32 development boards, I’ve designed custom PCBs specifically optimized for Remote ID detection and Meshtastic integration that are available on my Tindie store. Thank you PCBWay for the awesome boards! The combination of their top-tier quality, competitive pricing, fast turnaround times, and stellar customer service makes PCBWay the go-to choice for professional PCB fabrication, whether you’re prototyping innovative mesh detection systems or scaling up for full production runs.

https://www.pcbway.com/

Step 1: Hardware Preparation

If using custom MeshDetect boards from Tindie:

  • Boards come pre-assembled, flashed, and tested
  • Includes stock 915 MHz and 2.4 GHz antennas
  • USB-C programming interface ready to use

If building with standard ESP32 S3:

  • Xiao ESP32 S3 development board recommended
  • USB-C cable for connection and power
  • Optional upgraded 2.4 GHz antenna for better range
  • Optional Heltec LoRa V3 for Meshtastic integration

Step 2: Firmware Installation

To install the firmware onto your device, follow these steps:

1. Clone the repository:

git clone https://github.com/colonelpanichacks/drone-mesh-mapper

2. Open the project in PlatformIO: you can use the PlatformIO IDE (in VS Code) or the PlatformIO CLI.

3. Select the correct environment: this project uses the remoteid_mesh_dualcore sketch, which enables both BLE and Wi-Fi functionality. Make sure the platformio.ini environment is set to remoteid_mesh_dualcore.

4. Connect your device via USB and upload the firmware:

  • In the IDE, select the remoteid_mesh_dualcore environment and click the “Upload” button.

Step 3: Software Installation

Install Python dependencies:

  • flask>=2.0.0
  • flask-socketio>=5.0.0
  • requests>=2.25.0
  • urllib3>=1.26.0
  • pyserial>=3.5
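
For convenience, the list above can be saved as a requirements.txt file (a pip convention; the repository may already ship one) and installed in one go with `pip install -r requirements.txt`:

```
flask>=2.0.0
flask-socketio>=5.0.0
requests>=2.25.0
urllib3>=1.26.0
pyserial>=3.5
```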

Run the detection system:

python mapper.py

The web interface automatically opens at http://localhost:5000

Step 4: Device Configuration

1. Connect ESP32 via USB-C

2. Select the correct serial port in the web interface

3. Click “Connect” to start receiving data

4. Configure device aliases and settings as needed

How It Works

  • Core 0 handles WiFi monitoring in promiscuous mode, capturing Remote ID data embedded in beacon frames and processing Neighbor Awareness Networking transmissions on channel 6 by default.
  • Core 1 continuously scans for Bluetooth LE advertisements containing Remote ID data, supporting both BT 4.0 and 5.0 protocols with optimized low-power scanning.
  • Both cores feed detected Remote ID data into a unified JSON output stream via USB serial at 115200 baud. The firmware is based on Cemaxacuter’s excellent Remote ID detection work, enhanced with dual-core operation.
  • The Python Flask web application receives this data and provides real-time visualization on an interactive map, automatic logging to CSV and KML files, FAA database integration for aircraft registration lookups, support for up to 3 ESP32 devices simultaneously, live data streaming via WebSocket, and comprehensive export functions.
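
As a sketch of the consuming side, here is roughly how a Python process might read and validate that 115200-baud JSON stream. The field names in the example (mac, rssi) are illustrative only; the real schema is defined by the firmware:

```python
import json


def parse_detection(line):
    """Parse one line of the serial JSON stream into a dict.

    Returns None for partial or corrupted lines so the reader
    can simply skip them.
    """
    try:
        msg = json.loads(line)
    except json.JSONDecodeError:
        return None
    return msg if isinstance(msg, dict) else None


def stream_detections(port_path="/dev/ttyACM0"):
    """Yield parsed detections from the ESP32 (requires pyserial).

    The port path is an assumption; select whatever port your
    board enumerates as.
    """
    import serial  # pyserial
    with serial.Serial(port_path, 115200, timeout=1) as port:
        for raw in port:
            det = parse_detection(raw.decode("utf-8", errors="replace"))
            if det is not None:
                yield det
```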

One of the most exciting features is Meshtastic integration. The ESP32 firmware can send compact detection messages over UART to a connected Meshtastic device. This enables:

  • Distributed Monitoring: Multiple detection nodes sharing data across your property or neighborhood
  • Extended Range: Mesh networking extends effective coverage area beyond single-device limitations
  • Redundancy: Multiple nodes provide backup coverage if one device fails
  • Low-Power Operation: Meshtastic’s LoRa radios enable remote deployment without constant power
  • Community Networks: Integration with existing Meshtastic mesh networks for broader awareness

Messages sent over the mesh network use a compact format optimized for LoRa bandwidth constraints.

Features in Action

Real-Time Detection and Mapping

The web interface provides a Google Maps-style view with drone markers showing current aircraft positions, pilot markers indicating operator locations, color-coded flight paths derived from device MAC addresses, signal strength indicators showing detection quality, and automatic cleanup removing stale data after 5 minutes.

Data Export and Analysis

The system continuously generates multiple data formats including timestamped CSV logs perfect for spreadsheet analysis, Google Earth compatible KML files with flight path visualization featuring individual drone paths color-coded by device and timestamped waypoints, and JSON API providing real-time data access for custom integrations with RESTful endpoints and WebSocket streams.
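
To give a feel for the KML side, here is a minimal sketch that renders a flight path as a KML LineString. This is illustrative only; mapper.py’s actual exporter adds per-device colors, styling, and timestamped waypoints:

```python
def flight_path_kml(name, points):
    """Render a list of (lat, lon) fixes as a minimal KML document.

    KML expects coordinates in lon,lat[,alt] order, so the pairs
    are swapped when written out.
    """
    coords = " ".join(f"{lon},{lat},0" for lat, lon in points)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
        f"<Placemark><name>{name}</name>"
        f"<LineString><coordinates>{coords}</coordinates></LineString>"
        "</Placemark></Document></kml>"
    )
```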

FAA Database Integration

One of the most powerful features is automatic FAA registration lookup that queries the FAA database using detected Remote ID information, caches results to minimize API calls and improve performance, enriches detection data with aircraft registration details, and includes configurable rate limiting to respect API guidelines.
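
A cache-plus-rate-limit wrapper of the kind described might look like the sketch below. The fetch callable stands in for the actual FAA query, whose endpoint and response shape are not documented here:

```python
import time


class RegistryLookup:
    """Cached, rate-limited registration lookup (sketch only)."""

    def __init__(self, fetch, min_interval=1.0):
        self.fetch = fetch            # callable: remote_id -> dict
        self.min_interval = min_interval
        self.cache = {}
        self._last = 0.0

    def lookup(self, remote_id):
        # Cached results never touch the network again
        if remote_id in self.cache:
            return self.cache[remote_id]
        # Simple rate limit: wait out the remainder of the interval
        wait = self.min_interval - (time.monotonic() - self._last)
        if wait > 0:
            time.sleep(wait)
        self._last = time.monotonic()
        result = self.fetch(remote_id)
        self.cache[remote_id] = result
        return result
```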

Multi-Device Coordination

The system supports up to three ESP32 devices simultaneously with automatic device discovery and connection, individual device health monitoring, load balancing across multiple receivers, and unified data view combining all devices.

Performance and Optimization

Reception Range

Testing has shown effective detection ranges of 5 kilometers in urban environments, 10-15 kilometers in open areas with good antennas, overlapping coverage that eliminates dead zones when using multiple devices, and significant improvement with external antennas compared to built-in antennas.

System Resources

The Python application is optimized for continuous operation with efficient memory management for large datasets, automatic log rotation to prevent disk space issues, WebSocket connection pooling for multiple clients, and configurable data retention policies.
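
The automatic log rotation mentioned above can be done with Python’s standard library; this sketch shows the stdlib mechanism, not necessarily mapper.py’s own implementation:

```python
import logging
from logging.handlers import RotatingFileHandler


def make_logger(path="detections.log", max_bytes=5_000_000, backups=3):
    """Create a logger whose file rolls over to cap disk usage.

    Once the file reaches max_bytes, it is renamed and a fresh file
    is started, keeping at most `backups` old copies.
    """
    logger = logging.getLogger("mesh-mapper")
    logger.setLevel(logging.INFO)
    handler = RotatingFileHandler(path, maxBytes=max_bytes,
                                  backupCount=backups)
    handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
    logger.addHandler(handler)
    return logger
```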

For remote deployments, Meshtastic integration enables off-grid operation, webhook retry logic ensures reliable alert delivery, local data storage prevents data loss during network outages, and bandwidth optimization handles limited connections.

Privacy and Security Considerations

This system puts powerful airspace monitoring capabilities in individual hands, but it’s important to use it responsibly. The detection data contains location information about both drones and their operators, so implement appropriate data retention policies and be aware of local privacy regulations.

For network security, remember that the Flask development server is not production-ready, so consider a reverse proxy for production use and implement authentication for sensitive deployments. Use HTTPS for webhook communications and monitor for unauthorized access attempts.
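
As one concrete piece of that hardening, a constant-time token check can gate requests before they reach the app. The X-Auth-Token header name here is an assumption for illustration, not something mapper.py implements:

```python
import hmac


def authorized(headers, expected_token):
    """Constant-time bearer-token check against a dict-like header map.

    `headers` can be any mapping with .get() (e.g. Flask's
    request.headers); the 'X-Auth-Token' name is illustrative.
    """
    supplied = headers.get("X-Auth-Token")
    if supplied is None:
        return False
    # compare_digest avoids leaking token prefixes via timing
    return hmac.compare_digest(supplied, expected_token)
```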

The system enables you to know what’s flying over your property while respecting the legitimate privacy expectations of drone operators. It’s about transparency and awareness, not surveillance.

Conclusion

This Remote ID detection system represents a significant step forward in personal airspace awareness. The combination of dual-core ESP32 processing, comprehensive web-based interface, Meshtastic mesh integration, and professional data export features creates a platform that’s both accessible to makers and powerful enough for serious privacy protection applications.

The availability of custom-designed PCBs on Tindie removes the barrier of hardware design, while the open-source firmware and software ensure complete customizability. Whether you’re building a single-node setup for personal property monitoring or deploying a mesh network for neighborhood-wide awareness, this system provides the foundation for comprehensive drone detection and tracking.

As more drones come online with Remote ID compliance, having your own detection system becomes increasingly valuable for maintaining privacy and situational awareness of your local airspace.

Mesh Mapper GitHub: https://github.com/colonelpanichacks/drone-mesh-mapper

Mesh Detect GitHub (all firmware for Mesh Detect boards): https://github.com/colonelpanichacks/mesh-detect

Mesh Detect SMA mount clip for the Mesh Detect board, by OrdoOuroboros: https://www.printables.com/model/1294183-mesh-detect-board-sma-mount

Build Your Own

Ready to start monitoring your local airspace? The combination of affordable hardware, open-source software, and comprehensive documentation makes this project accessible to makers of all skill levels. Start with a single ESP32 device to learn the system, then expand to multiple nodes and Meshtastic integration as your privacy protection needs grow.

The future of airspace monitoring is distributed, affordable, and puts control back in the hands of individuals and communities. Join the movement building these next-generation detection systems!

Source: Mesh-Mapper – Drone Remote ID mapping and mesh alerts – Hackster.io

Detecting Surveillance Cameras With The ESP32 from Colonel.Panic

These days, surveillance cameras are all around us, and they’re smarter than ever. In particular, many of them are running advanced algorithms to recognize faces and scan license plates, compiling ever-greater databases on the movements and lives of individuals. Flock You is a project that aims to, at the very least, catalogue this part of the surveillance state, by detecting these cameras out in the wild.

The system is most specifically set up to detect surveillance cameras from Flock Safety, though it’s worth noting a wide range of companies produce plate-reading cameras and associated surveillance systems these days. The device uses an ESP32 microcontroller to detect these devices, relying on the in-built wireless hardware to do the job. The project can be built on a Oui-Spy device from Colonel Panic, or just by using a standard Xiao ESP32 S3 if so desired. By looking at Wi-Fi probe requests and beacon frames, as well as Bluetooth advertisements, it’s possible for the device to pick up telltale transmissions from a range of these cameras, with various pattern-matching techniques and MAC addresses used to filter results in this regard. When the device finds a camera, it sounds a buzzer notifying the user of this fact.
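
The pattern-matching step can be sketched as below. The OUI prefixes and SSID patterns are placeholders, not Flock Safety’s actual fingerprints, which the real project ships in its own lists:

```python
import re
from typing import Optional

# Placeholder fingerprints -- illustrative values only
KNOWN_OUI_PREFIXES = {"aa:bb:cc"}
SSID_PATTERNS = [re.compile(r"^Flock", re.IGNORECASE)]


def looks_like_target(mac: str, ssid: Optional[str]) -> bool:
    """Flag a frame whose source MAC OUI or advertised SSID matches
    a known camera fingerprint."""
    # First three octets of the MAC identify the vendor (OUI)
    if mac.lower()[:8] in KNOWN_OUI_PREFIXES:
        return True
    if ssid and any(p.search(ssid) for p in SSID_PATTERNS):
        return True
    return False
```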

Meanwhile, if you’re interested in just how prevalent plate-reading cameras really are, you might also find deflock.me interesting. It’s a map of ALPR camera locations all over the world, and you can submit your own findings if so desired. The techniques used in the Flock You project are based on lessons from the DeFlock project. Meanwhile, if you want to join the surveillance state on your own terms, you can always build your own license plate reader instead!

Source: Detecting Surveillance Cameras With The ESP32 | Hackaday

EU becomes a little more fascist and starts collecting fingerprints at the border

The new Entry/Exit System (EES) will start operations on 12 October 2025. European countries using the EES will introduce the system gradually at their external borders. This means that data collection will be gradually introduced at border crossing points with full implementation by 10 April 2026.

Source: What is the EES?

You need to provide your personal data each time you reach the external borders of the European countries using the EES. For more information – see What does progressive start of the EES mean? 
The EES collects, records and stores: 

  • data listed in your travel document(s) (e.g. full name, date of birth, etc.)
  • date and place of each entry and exit 
  • facial image and fingerprints (called ‘biometric data’)
  • whether you were refused entry.

On the basis of the collected biometric data, biometric templates will be created and stored in the shared Biometric Matching Service (see footnote).

If you hold a short-stay visa to enter the Schengen area, your fingerprints will already be stored in the Visa Information System (VIS) and will not be stored again in the EES.

Depending on your particular situation, the system also collects your personal information from:

[…]

If you refuse to provide your biometric data, you will be denied entry into the territory of the European countries using the EES.

Who can access your personal data?

  • Border, visa and immigration authorities in the European countries using the EES for the purpose of verifying your identity and understanding whether you should be allowed to enter or stay on the territory.
  • Law enforcement authorities of the countries using the EES and Europol for law enforcement purposes. 
  • Under strict conditions, your data may be transferred to another country (inside or outside the EU) or international organisation (listed in Annex I of Regulation (EU) 2017/2226 – a UN organisation, the International Organisation for Migration, or the International Committee of the Red Cross) for return (Article 41(1) and (2), and Article 42) and/or law enforcement purposes (Article 41(6)).
  • Transport carriers will only be able to verify whether short-stay visa holders have already used the number of entries authorised by their visa and will not be able to access any further personal data.

[…]

Your data cannot be transferred to third parties – whether public or private entities – except in certain cases. See Who can access your personal data

[…]

So lots of data collected, and loads of people who can access this data – exceptions are absolutely everywhere. And for what? To satisfy far right fantasies about migration running rampant.

US, CA and EU Airlines Sell 5 Billion Plane Ticket Records to the Government For Warrantless Searching

A data broker owned by the country’s major airlines, including American Airlines, United, and Delta, [and Air France, Lufthansa, JetBlue] is selling access to five billion plane ticketing records to the government for warrantless searching and monitoring of peoples’ movements, including by the FBI, Secret Service, ICE, and many other agencies, according to a new contract and other records reviewed by 404 Media.

The contract provides new insight into the scale of the sale of passengers’ data by the Airlines Reporting Corporation (ARC), the airlines-owned data broker. The contract shows ARC’s data includes information related to more than 270 carriers and is sourced through more than 12,800 travel agencies. ARC has previously told the government to not reveal to the public where this passenger data came from, which includes peoples’ names, full flight itineraries, and financial details.

“Americans’ privacy rights shouldn’t depend on whether they bought their tickets directly from the airline or via a travel agency. ARC’s sale of data to U.S. government agencies is yet another example of why Congress needs to close the data broker loophole by passing my bipartisan bill, the Fourth Amendment Is Not For Sale Act,” Senator Ron Wyden told 404 Media in a statement.

ARC is owned and operated by at least eight major U.S. airlines, publicly released documents show. Its board of directors includes representatives from American Airlines, Delta, United, Southwest, Alaska Airlines, JetBlue, and European airlines Air France and Lufthansa, and Canada’s Air Canada. ARC acts as a bridge between airlines and travel agencies, in which it helps with fraud prevention and finds trends in travel data. ARC also sells passenger data to the government as part of what it calls the Travel Intelligence Program (TIP).

TIP is updated every day with the previous day’s ticket sales and can show a person’s paid intent to travel. Government agencies can then search this data by name, credit card, airline, and more.

The new contract shows that ARC has access to much more data than previously reported. Earlier coverage found TIP contained more than one billion records spanning more than 3 years of past and future travel. The new contract says ARC provides the government with “5 billion ticketing records for searching capabilities.”
Screenshots of the documents obtained by 404 Media.
404 Media obtained the contract through a Freedom of Information Act (FOIA) request with the Secret Service. The contract indicates the Secret Service plans to pay ARC $885,000 for access to the data stretching into 2028.

[…]

An ARC spokesperson told 404 Media in an email that TIP “was established by ARC after the September 11, 2001, terrorist attacks and has since been used by the U.S. intelligence and law enforcement community to support national security and prevent criminal activity with bipartisan support. Over the years, TIP has likely contributed to the prevention and apprehension of criminals involved in human trafficking, drug trafficking, money laundering, sex trafficking, national security threats, terrorism and other imminent threats of harm to the United States.”

The spokesperson added “Pursuant to ARC’s privacy policy, consumers may ask ARC to refrain from selling their personal data.”

After media coverage and scrutiny from Senator Wyden’s office of the little-known data selling, ARC finally registered as a data broker in the state of California in June. Senator Wyden previously said it appeared ARC had been in violation of Californian law for not registering while selling airline customers’ data for years.

Source: Airlines Sell 5 Billion Plane Ticket Records to the Government For Warrantless Searching

Supposedly you can opt out by emailing them at privacy@arccorp.com

Danish Minister of Justice and chief architect of the current Chat Control proposal, Peter Hummelgaard:


“We must break with the totally erroneous perception that it is everyone’s civil liberty to communicate on encrypted messaging services.”

Share your thoughts via https://fightchatcontrol.eu/, or to jm@jm.dk directly.

Source: https://www.ft.dk/samling/20231/almdel/REU/spm/1426/index.htm

In the answers he cites “but we must protect the children” – as soon as that argument is trotted out have a good look at what they are taking away from you. After all, who can be against the safety of children? But blanket surveillance is bad for children and awful for society. If you know you are being watched, you can’t speak freely, you can’t voice your opinion and democracy cannot function. THAT is bad for the children.

There is something rotten in the state of Denmark. Big Brother, 1984, they were warnings, not manuals.

Source: https://mastodon.social/@chatcontrol/115204439983078498

More discussion: https://www.reddit.com/r/europe/comments/1nhdtoz/danish_minister_of_justice_we_must_break_with_the/

PS I would not buy a used camel from this creep.

Swiss government may disable privacy tech, stoking fears of mass surveillance

The Swiss government could soon require service providers with more than 5,000 users to collect government-issued identification, retain subscriber data for six months and, in many cases, disable encryption.

The proposal, which is not subject to parliamentary approval, has alarmed privacy and digital-freedoms advocates worldwide because of how it will destroy anonymity online, including for people located outside of Switzerland.

A large number of virtual private network (VPN) companies and other privacy-preserving firms are headquartered in the country because it has historically had liberal digital privacy laws alongside its famously discreet banking ecosystem.

Proton, which offers secure and end-to-end encrypted email along with an ultra-private VPN and cloud storage, announced on July 23 that it is moving most of its physical infrastructure out of Switzerland due to the proposed law.

The company is investing more than €100 million in the European Union, the announcement said, and plans to help develop a “sovereign EuroStack for the future of our home continent.” Switzerland is not a member of the EU.

Proton said the decision was prompted by the Swiss government’s attempt to “introduce mass surveillance.”

Proton founder and CEO Andy Yen told Radio Télévision Suisse (RTS) that the suggested regulation would be illegal in the EU and United States.

“The only country in Europe with a roughly equivalent law is Russia,” Yen said.

[…]

Internet users would no longer be able to register for a service with just an email address or anonymously and would instead have to provide their passport, driver’s license or another official ID to subscribe, said Chloé Berthélémy, senior policy adviser at European Digital Rights (EDRi), an association of civil and human rights organizations from across Europe.

The regulation also includes a mass data retention obligation requiring that service providers keep users’ email addresses, phone numbers and names along with IP addresses and device port numbers for six months, Berthélémy said. Port numbers are identifiers that direct data to a specific application or service on a computer.

All authorities would need to do to obtain the data, Berthélémy said, is make a simple request that would circumvent existing legal control mechanisms such as court orders.

“The right to anonymity is supporting a very wide range of communities and individuals who are seeking safety online,” Berthélémy said.

“In a world where we have increasing attacks from governments on specific minority groups, on human rights defenders, journalists, any kind of watchdogs and anyone who holds those in power accountable, it’s very crucial that we … preserve our privacy online in order to do those very crucial missions.”

Source: Swiss government looks to undercut privacy tech, stoking fears of mass surveillance | The Record from Recorded Future News

Proton Mail Suspended Journalist Accounts at the Request of an Unnamed Cybersecurity Agency, Without Any Due Process

The company behind the Proton Mail email service, Proton, describes itself as a “neutral and safe haven for your personal data, committed to defending your freedom.”

But last month, Proton disabled email accounts belonging to journalists reporting on security breaches of various South Korean government computer systems, following a complaint by an unspecified cybersecurity agency. After a public outcry, and multiple weeks of waiting, the journalists’ accounts were eventually reinstated — but the reporters and editors involved still want answers about how and why Proton decided to shut down the accounts in the first place.

Martin Shelton, deputy director of digital security at the Freedom of the Press Foundation, highlighted that numerous newsrooms use Proton’s services as alternatives to something like Gmail “specifically to avoid situations like this,” pointing out that “While it’s good to see that Proton is reconsidering account suspensions, journalists are among the users who need these and similar tools most.” Newsrooms like The Intercept, the Boston Globe, and the Tampa Bay Times all rely on Proton Mail for emailed tip submissions.

Shelton noted that perhaps Proton should “prioritize responding to journalists about account suspensions privately, rather than when they go viral.”

On Reddit, Proton’s official account stated that “Proton did not knowingly block journalists’ email accounts” and that the “situation has unfortunately been blown out of proportion.” Proton did not respond to The Intercept’s request for comment.

The two journalists whose accounts were disabled were working on an article published in the August issue of the long-running hacker zine Phrack. The story described how a sophisticated hacking operation — what’s known in cybersecurity parlance as an APT, or advanced persistent threat — had wormed its way into a number of South Korean computer networks, including those of the Ministry of Foreign Affairs and the military Defense Counterintelligence Command, or DCC.

The journalists, who published their story under the names Saber and cyb0rg, describe the hack as being consistent with the work of Kimsuky, a notorious North Korean state-backed APT sanctioned by the U.S. Treasury Department in 2023.

As they pieced the story together, emails viewed by The Intercept show that the authors followed cybersecurity best practices and conducted what’s known as responsible disclosure: notifying affected parties that a vulnerability has been discovered in their systems prior to publicizing the incident.

Saber and cyb0rg created a dedicated Proton Mail account to coordinate the responsible disclosures, then proceeded to notify the impacted parties, including the Ministry of Foreign Affairs and the DCC, and also notified South Korean cybersecurity organizations like the Korea Internet and Security Agency, and KrCERT/CC, the state-sponsored Computer Emergency Response Team. According to emails viewed by The Intercept, KrCERT wrote back to the authors, thanking them for their disclosure.

A note on cybersecurity jargon: CERTs are agencies consisting of cybersecurity experts specializing in dealing with and responding to security incidents. CERTs exist in over 70 countries — with some countries having multiple CERTs, each specializing in a particular field such as the financial sector — and may be government-sponsored or private organizations. They adhere to a set of formal technical standards, such as being expected to react to reported cybersecurity threats and security incidents. A high-profile example of a CERT agency in the U.S. is the Cybersecurity and Infrastructure Security Agency (CISA), which has recently been gutted by the Trump administration.

A week after the print issue of Phrack came out, and a few days before the digital version was released, Saber and cyb0rg found that the Proton account they had set up for the responsible disclosure notifications had been suspended. A day later, Saber discovered that his personal Proton Mail account had also been suspended. Phrack posted a timeline of the account suspensions at the top of the published article, and later highlighted the timeline in a viral social media post. Both accounts were suspended owing to an unspecified “potential policy violation,” according to screenshots of account login attempts reviewed by The Intercept.

The suspension notice instructed the authors to fill out Proton’s abuse appeals form if they believed the suspension was in error. Saber did so, and received a reply from a member of Proton Mail’s Abuse Team who went by the name Dante.

In an email viewed by The Intercept, Dante told Saber that their account “has been disabled as a result of a direct connection to an account that was taken down due to violations of our terms and conditions while being used in a malicious manner.” Dante also provided a link to Proton’s terms of service, going on to state, “We have clearly indicated that any account used for unauthorized activities, will be sanctioned accordingly.” The response concluded by stating, “We consider that allowing access to your account will cause further damage to our service, therefore we will keep the account suspended.”

On August 22, a Phrack editor reached out to Proton, writing that no hacked data had passed through the suspended email accounts, and asking whether the account suspensions could be deescalated. After receiving no response from Proton, the editor sent a follow-up email on September 6. Proton once again did not reply.

On September 9, the official Phrack X account made a post asking Proton’s official account why it was “cancelling journalists and ghosting us,” adding: “need help calibrating your moral compass?” The post quickly went viral, garnering over 150,000 views.

Proton’s official account replied the following day, stating that Proton had been “alerted by a CERT that certain accounts were being misused by hackers in violation of Proton’s Terms of Service. This led to a cluster of accounts being disabled. Our team is now reviewing these cases individually to determine if any can be restored.” Proton then stated that they “stand with journalists” but “cannot see the content of accounts and therefore cannot always know when anti-abuse measures may inadvertently affect legitimate activism.”

Proton did not publicly specify which CERT had alerted them, and didn’t answer The Intercept’s request for the name of the specific CERT which had sent the alert. KrCERT also did not reply to The Intercept’s question about whether they were the CERT that had sent the alert to Proton.

Later in the day, Proton’s founder and CEO Andy Yen posted on X that the two accounts had been reinstated. Neither Yen nor Proton explained why the accounts had been reinstated, whether they had been found not to violate the terms of service after all, why they had been suspended in the first place, or why a member of the Proton Abuse Team had reiterated during Saber’s appeals process that the accounts violated the terms of service.

Phrack noted that the account suspensions created a “real impact to the author. The author was unable to answer media requests about the article.” The co-authors, Phrack pointed out, were also in the midst of the responsible disclosure process and working together with the various affected South Korean organizations to help fix their systems. “All this was denied and ruined by Proton,” Phrack stated.

Phrack editors said that the incident leaves them “concerned what this means to other whistleblowers or journalists. The community needs assurance that Proton does not disable accounts unless Proton has a court order or the crime (or ToS violation) is apparent.”

Source: Proton Mail Suspended Journalist Accounts at Request of Cybersecurity Agency

If Proton can’t view the content of accounts, how did it verify a random CERT’s claims before deciding to close the accounts? And how did it review the cases to see whether they could be restored? Is it Proton policy to treat people as guilty until proven innocent? This attitude justifies people blowing up about the incident – it shows how vulnerable users are to Proton’s whims rather than protected by any kind of transparent, diligent process.

We beat Chat Control but the fight isn’t over – another surveillance law that mandates companies to save user data for Europol is making its way right now and there is less than 24 hours to give the EU feedback!

Please follow this link to the questionnaire and help save our future – otherwise total surveillance like never seen before will strip you of every bit of privacy and, later, the fundamental rights you have as an EU citizen.

++++++++++++++++++++++++++++

Information

The previous data retention law was declared illegal in 2014 by CJEU (EU’s highest court) for being mass surveillance and violating human rights.

Since most EU states refused to follow the court order and the EU commission refused to enforce it, CJEU recently caved in to political pressure and changed their stance on mass surveillance, making it legal.

And that instantly spawned this data retention law, which is even more far-reaching than the original that was deemed illegal. Here you can read the entire plan that the EU is following. Briefly:

  • they want to sanction unlicensed messaging apps, hosting services and websites that don’t spy on users (and impose criminal penalties)

  • mandatory data retention: all your online activity must be tied to your identity

  • the end of privacy-friendly VPNs and other services

  • cooperation with hardware manufacturers to ensure lawful access by design (backdoors for phones and computers)

  • prison for everybody who doesn’t comply

If you don’t know what the best options for some questions are, privacy-wise, check out this answering guide by EDRi (the European digital rights organization).

Source: https://www.reddit.com/r/BuyFromEU/comments/1neecov/we_beat_chat_control_but_the_fight_isnt_over/

18 popular VPNs turn out to belong to just 3 owners – and contain security flaws as well

A new peer-reviewed study alleges that 18 of the 100 most-downloaded virtual private network (VPN) apps on the Google Play Store are secretly connected in three large families, despite claiming to be independent providers. The paper doesn’t indict any of our picks for the best VPN, but the services it investigates are popular, with 700 million collective downloads on Android alone.

The study, published in the journal of the Privacy Enhancing Technologies Symposium (PETS), doesn’t just find that the VPNs in question failed to disclose behind-the-scenes relationships, but also that their shared infrastructures contain serious security flaws. Well-known services like Turbo VPN, VPN Proxy Master and X-VPN were found to be vulnerable to attacks capable of exposing a user’s browsing activity and injecting corrupted data.

Titled “Hidden Links: Analyzing Secret Families of VPN apps,” the paper was inspired by an investigation by VPN Pro, which found that several VPN companies were each selling multiple apps without identifying the connections between them. This spurred the “Hidden Links” researchers to ask whether the relationships between secretly co-owned VPNs could be documented systematically.

[…]

Family A consists of Turbo VPN, Turbo VPN Lite, VPN Monster, VPN Proxy Master, VPN Proxy Master Lite, Snap VPN, Robot VPN and SuperNet VPN. These were found to be shared between three providers — Innovative Connecting, Lemon Clove and Autumn Breeze. All three have been linked to Qihoo 360, a firm based in mainland China and identified as a “Chinese military company” by the US Department of Defense.

Family B consists of Global VPN, XY VPN, Super Z VPN, Touch VPN, VPN ProMaster, 3X VPN, VPN Inf and Melon VPN. These eight services, which are shared between five providers, all use the same IP addresses from the same hosting company.

Family C consists of X-VPN and Fast Potato VPN. Although these two apps each come from a different provider, the researchers found that both used very similar code and included the same custom VPN protocol.

If you’re a VPN user, this study should concern you for two reasons. The first problem is that companies entrusted with your private activities and personal data are not being honest about where they’re based, who owns them or who they might be sharing your sensitive information with. Even if their apps were all perfect, this would be a severe breach of trust.

But their apps are far from perfect, which is the second problem. All 18 VPNs across all three families use the Shadowsocks protocol with a hard-coded password, which makes them susceptible to takeover from both the server side (which can be used for malware attacks) and the client side (which can be used to eavesdrop on web activity).
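To see why a hard-coded password is so damaging here: Shadowsocks derives its master encryption key deterministically from the password alone, using an MD5-based EVP_BytesToKey routine with no salt. A minimal sketch in Python (the password value below is hypothetical; each app embeds its own):

```python
import hashlib

def evp_bytes_to_key(password: bytes, key_len: int) -> bytes:
    """MD5-based EVP_BytesToKey KDF that Shadowsocks uses to turn a
    password into its master key (no salt, no work factor)."""
    derived, block = b"", b""
    while len(derived) < key_len:
        block = hashlib.md5(block + password).digest()
        derived += block
    return derived[:key_len]

# Every copy of an app shipping the same hard-coded password derives the
# same master key, so anyone who extracts the password from the APK can
# derive the same key as every user of that app.
HARDCODED = b"example-password-from-apk"  # hypothetical illustration
user_key = evp_bytes_to_key(HARDCODED, 32)
eavesdropper_key = evp_bytes_to_key(HARDCODED, 32)
assert user_key == eavesdropper_key
```

For the AEAD cipher variants, a per-session subkey is additionally derived from this master key and a salt that travels in cleartext, so knowing the master key remains sufficient to decrypt captured traffic.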

[…]

 

Source: Researchers find alarming overlaps among 18 popular VPNs

So Spotify Public Links Now Show Your Personal Information. You Need to Disable Spotify DMs To Get Rid Of It.

Spotify wants to be yet another messaging platform, but its new DM system has a quirk that makes me hesitant to recommend it. Spotify used to be a non-identity based platform, but things changed once it added messaging. Now, the Spotify DM system is attaching account information to song links and putting it in front of users’ eyes. That means it can accidentally leak the name and profile picture of whoever shared a link, even if they didn’t intend to give out their account information, too. Thankfully there’s a way to make links more private, and to disable Spotify DMs altogether.

How Spotify is accidentally leaking users’ information

It all starts with tracking URLs. Many major companies on the web use these. They embed information at the end of a URL to track where clicks on it came from. Which website, which page, or in Spotify’s case, which user. If you’ve generated a Share link for a song or playlist in the past, it contained your user identity string at the end. And when someone accessed and acted on that link, by adding the song or playing it, your account information was saved in their account’s identity as a connection of sorts. Maybe a little invasive, but because users couldn’t do much with that information, it was mostly just a way for Spotify to track how often people were sharing music between each other.

Before, this happened in the background and no one really cared. But with the new Spotify DM feature, connections made via tracking links are suddenly being put front and center right before users’ eyes. As spotted by Reddit user u/sporoni122, these connections are now showing up in a “Suggested” section when using Spotify DMs, even if you just happened to click on a public link once and never heard of the person who shared it. Alternatively, you might have shared a link in the past, and could be shown account information for people who clicked on it.

Even if an account is public, I could see how this would be annoying. Imagine you share a song in a Discord server where you go by an anonymous name, but someone clicks on it and finds your Spotify account, where you might go by your real name. Bam, they suddenly know who you are.

Reddit user u/Reeceeboii added that Spotify is using this URL tracking behavior to populate a list of songs and playlists shared between two users even if they happened via third-party messaging services like WhatsApp.

So, if you don’t want others to find your Spotify account through your shared songs, what do you do? Well, before posting in anonymous communities like Discord or X, try cleaning up your links first.

My colleagues and I have previously written about how you can remove tracking information from a URL automatically on iPhone, how you can use a Mac app to clean links without any effort, or how you can use an all-in-one extension to get the job done regardless of platform. You can also use a website like Link Cleaner to clean up your links.

Or you can take the manual approach. In your Spotify link, remove everything at the end starting with the question mark.

So this tracked link:

https://open.spotify.com/playlist/74BUi79BzFKW7IVJBShrFD?si=28575ba800324

Becomes this clean link:

https://open.spotify.com/playlist/74BUi79BzFKW7IVJBShrFD

Here, the part with “si=” is your identifier. Of course, if it’s a playlist you’re sharing, it will still show your name and your profile picture—that’s how the platform has always worked. So if you want to stay truly anonymous, you’ll want to keep your playlists private.
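The manual fix is easy to script, too. A small sketch (the function name is mine) that drops the query string, and with it the “si=” identifier:

```python
from urllib.parse import urlsplit, urlunsplit

def clean_spotify_link(url: str) -> str:
    """Return the link with its query string (the ?si=... share
    identifier) and fragment removed."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

print(clean_spotify_link(
    "https://open.spotify.com/playlist/74BUi79BzFKW7IVJBShrFD?si=28575ba800324"
))
# → https://open.spotify.com/playlist/74BUi79BzFKW7IVJBShrFD
```

Links that carry no query string pass through unchanged, so it is safe to run on every link before sharing.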

How to disable Spotify DMs

If you don’t see yourself using Spotify DMs, it might also be a good idea to just get rid of them entirely. You’ll probably still want to remove tracking information from your URLs before sharing, just for due diligence. But if you don’t want to worry about getting DMs on Spotify or having your account show up as a Suggested contact to strangers, you should also go to Settings > Privacy and social > Social features and disable Messages. That’ll opt you out of the DM feature altogether.

Disable Spotify DM.
Credit: Michelle Ehrhardt

Source: If You’ve Ever Shared a Spotify Link Publicly, You Need to Disable Spotify DMs

Age verification legislation is tanking traffic to sites that comply, and rewarding those that don’t

A new report suggests that the UK’s age verification measures may be having unforeseen knock-on effects on web traffic, with the real winners being sites that flout the law entirely.

[…]

Sure, there are ways around this if you’d rather not feed your personal data to a platform’s third-party age verification vendor. However, sites are seeing more significant consequences beyond just locking you out of your DMs. For a start, The Washington Post reports that web traffic to pornography sites implementing age verification has taken a totally predictable hit—but those flouting the new age check requirements have seen traffic as much as triple compared to the same time last year.

The Washington Post looked at the 90 most visited porn sites based on UK visitor data from Similarweb. Of the 90 total sites, 14 hadn’t yet deployed ‘scan your face’ age checks. The publication found that while traffic from British IP addresses to sites requiring age verification had cratered, the 14 sites without age checks “have been rewarded with a flood of traffic” from UK-based users.

It’s worth noting that VPN usage might distort the location data of users. Still, such a surge of traffic likely brings with it a surge in income in the form of ad revenue. Ofcom, the UK’s communications regulator overseeing everything from TV to the internet, may have something to say about that, though. Meanwhile, sites that comply with the rules are not only losing out on ad revenue, but are also expected to pay for the legally required age verification services on top.

[…]

Alright, stop snickering about the mental image of someone perusing porn sites professionally, and let me tell you why this is important. You may have already read that while a lot of Brits support the age verification measures broadly speaking, a sizable portion feels they’ve been implemented poorly. Indeed, a lot of the aforementioned sites that complied with the law also criticised it by linking to a petition seeking its repeal. The UK government has responded to this petition by saying it has “no plans to repeal the Online Safety Act” despite, at time of writing, over 500,000 signatures urging it to do just that.

[…]

Source: Age verification legislation is tanking traffic to sites that comply, and rewarding those that don’t | PC Gamer

Of course age verification isn’t just hitting porn sites. It is also hitting LGBTQ+ sites, public health forums, conflict reporting and global journalism and more.

And there is no way to do Age Verification privately.

Europol wants to keep all data forever for law enforcement, says unnamed(!) official. The European Court of Human Rights backed encryption as basic to privacy rights in 2024, and now Big Brother Chat Control is on the agenda again (EU consultation feedback link at end)

While some American officials continue to attack strong encryption as an enabler of child abuse and other crimes, a key European court has upheld it as fundamental to the basic right to privacy.

[…]

In the Russian case, the users relied on Telegram’s optional “secret chat” functions, which are also end-to-end encrypted. Telegram had refused to break into chats of a handful of users, telling a Moscow court that it would have to install a back door that would work against everyone. It lost in Russian courts but did not comply, leaving it subject to a ban that has yet to be enforced.

The European court backed the Russian users, finding that law enforcement having such blanket access “impairs the very essence of the right to respect for private life” and therefore would violate Article 8 of the European Convention, which enshrines the right to privacy except when it conflicts with laws established “in the interests of national security, public safety or the economic well-being of the country.”

The court praised end-to-end encryption generally, noting that it “appears to help citizens and businesses to defend themselves against abuses of information technologies, such as hacking, identity and personal data theft, fraud and the improper disclosure of confidential information.”

In addition to prior cases, the judges cited work by the U.N. human rights commissioner, who came out strongly against encryption bans in 2022, saying that “the impact of most encryption restrictions on the right to privacy and associated rights are disproportionate, often affecting not only the targeted individuals but the general population.”

High Commissioner Volker Türk said he welcomed the ruling, which he promoted during a recent visit to tech companies in Silicon Valley. Türk told The Washington Post that “encryption is a key enabler of privacy and security online and is essential for safeguarding rights, including the rights to freedom of opinion and expression, freedom of association and peaceful assembly, security, health and nondiscrimination.”

[…]

Even as the fight over encryption continues in Europe, police officials there have talked about overriding end-to-end encryption to collect evidence of crimes other than child sexual abuse — or any crime at all, according to an investigative report by the Balkan Investigative Reporting Network, a consortium of journalists in Southern and Eastern Europe.

“All data is useful and should be passed on to law enforcement, there should be no filtering … because even an innocent image might contain information that could at some point be useful to law enforcement,” an unnamed Europol police official said in 2022 meeting minutes released under a freedom of information request by the consortium.

Source: E.U. Court of Human Rights backs encryption as basic to privacy rights – The Washington Post

An ‘unnamed’ Europol police official is peak irony in this context.

Remember to leave your feedback where you can, in this case: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/14680-Impact-assessment-on-retention-of-data-by-service-providers-for-criminal-proceedings-/public-consultation_en

The EU wants to know what you think about it keeping all your data for *cough* crime stuff.

The EU wants to save all your data, or as much as possible for as long as possible. In an insult to the victims of crime, they claim they want to do this to fight crime. How do you feel about the EU being turned into a surveillance society? Leave your voice via the link below.

Source: Data retention by service providers for criminal proceedings – impact assessment

Croatians suddenly realise that EU CSAM rules include hidden pervasive chat control surveillance, turning the EU into Big Brother – disapprove massively.

“The Prime Minister of the Republic of Croatia, Andrej Plenković, at yesterday’s press conference accused the opposition of having supported the proposal for a regulation of the European Parliament and of the Council establishing rules to prevent and combat the sexual abuse of children, COM(2022) 209, unpopularly referred to as ‘chat control’ because, if adopted in its integral form, it would allow criminal prosecution bodies to surveil the private communications of all citizens.

[…]

On June 17, MEP Gordan Bosanac, as well as colleagues from the SDP, HDZ and the vast majority of other MEPs, supported the proposal for amendments to the 2011 directive on combating the sexual abuse and sexual exploitation of children and child pornography. Although both legislative documents were adopted within the same package of EU strategies for a more effective fight against child abuse and have similar names, the two documents are intrinsically different – one is a regulation, the other a directive; they have different rapporteurs and entered the procedure a full two years apart.”

‘We’ve already spoken about it’

“The basic difference, however, is that the proposal to amend the directive does not contain any mention of ‘chat control’, i.e. the mass surveillance of citizens. MEP Bosanac, as well as colleagues from the We Can! party, strongly opposes the proposal for a regulation that supports the monitoring of the content of private conversations of all citizens, which has yet to be voted on in the European Parliament. Such a proposal directly violates Article 7 of the Charter of Fundamental Rights of the European Union, as confirmed by the Court of Justice of the European Union in the ‘Schrems I’ ruling (paragraph 94), a position also confirmed by the Legal Service of the Council of the EU.

In the previous European Parliament, the Greens resisted mass surveillance, focusing instead on the monitoring of suspicious users – the security services must first identify suspicious users and then monitor them, not the other way around. People who abuse the internet to commit criminal acts must be recognised and isolated by the numerous services whose job that is, through focused surveillance of individuals rather than mass surveillance.

We all have the right to privacy, because privacy must remain a secure space for our human identity. Finally, MEP Bosanac invites Prime Minister Plenković to oppose this harmful proposal at the European Council and protect the right to privacy of Croatian citizens,” Gordan Bosanac’s office said in a statement.

Source: Bosnian accuses Plenkovic of lying: ‘I urge him to counter that proposal’

Parliamentary questions are being asked as well

A review conducted under the Danish Presidency examining the proposal for a regulation on combatting online child sexual abuse material – dubbed the ‘Chat Control’ or CSAM regulation – has raised new, grave concerns about the respect of fundamental rights in the EU.

As it stands, the proposal envisages mass scanning of private communications, including encrypted conversations, raising serious issues of compliance with Article 7 of the Charter of Fundamental Rights by threatening to undermine the data security of citizens, businesses and institutions. A mandatory weakening of end-to-end encryption would create security gaps open to exploitation by cybercriminals, rival states and terrorist organisations, and would also harm the competitiveness of our digital economy.

At the same time, the proposed technical approach is based on automated content analysis tools which produce high rates of false positives, creating the risk that innocent users could be wrongly incriminated, while the effectiveness of this approach in protecting children has not been proven. Parliament and the Council have repeatedly rejected mass surveillance.

  • 1. Considering the mandatory scanning of all private communications, is the proposed regulation compatible with Article 7 of the Charter of Fundamental Rights?

  • 2. How will it ensure that child protection is achieved through targeted measures that are proven to be effective, without violating the fundamental rights of all citizens?

  • 3. How does it intend to prevent the negative impact on cybersecurity and economic competitiveness caused by weakening encryption?

Source: Proposed Chat Control law presents new blow for privacy

Google wants to verify all developers’ identities, including those not on the play store in massive data grab

  • Google will soon verify the identities of developers who distribute Android apps outside the Play Store.
  • Developers must submit their information to a new Android Developer Console, increasing their accountability for their apps.
  • Rolling out in phases from September 2026, these new verification requirements are aimed at protecting users from malware by making it harder for malicious developers to remain anonymous.

 

Most Android users acquire apps from the Google Play Store, but a small number of users download apps from outside of it, a process known as sideloading. There are some nifty tools that aren’t available on the Play Store because their developers don’t want to deal with Google’s approval or verification requirements. This is understandable for hobbyist developers who simply want to share something cool or useful without the burden of shedding their anonymity or committing to user support.

[…]

Today, Google announced it is introducing a new “developer verification requirement” for all apps installed on Android devices, regardless of source. The company wants to verify the identity of all developers who distribute apps on Android, even if those apps aren’t on the Play Store. According to Google, this adds a “crucial layer of accountability to the ecosystem” and is designed to “protect users from malware and financial fraud.” Only “certified” Android devices — meaning those that ship with the Play Store, Play Services, and other Google Mobile Services (GMS) apps — will block the installation of apps from unverified developers.

Google says it will only verify the identity of developers, not check the contents of their apps or their origin. However, it’s worth noting that Google Play Protect, the malware scanning service integrated into the Play Store, already scans all installed apps regardless of where they came from. Thus, the new requirement doesn’t prevent malicious apps from reaching users, but it does make it harder for their developers to remain anonymous. Google likens this new requirement to ID checks at the airport, which verify the identity of travelers but not whether they’re carrying anything dangerous.

[…]

Source: Google wants to make sideloading Android apps safer by verifying developers’ identities – Android Authority

So the new requirement doesn’t make things any safer, but it does give Google a whole load of new personal data for no reason other than that it wants it. I guess it’s increasingly time to de-Google.

Uni of Melbourne used Wi-Fi location data to ID protestors

Australia’s University of Melbourne last year used Wi-Fi location data to identify student protestors.

The University used Wi-Fi to identify students who participated in a July 2024 sit-in protest. As described in a report [PDF] into the matter by the state of Victoria’s Office of the Information Commissioner, the University directed protestors to leave the building they occupied and warned that those who remained could be suspended, disciplined, or reported to police.

The report says 22 chose to remain, and that the University used CCTV and Wi-Fi location data to identify them.
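To see how authenticated campus Wi-Fi enables this kind of identification, here is a hypothetical sketch: 802.1X-style authentication ties a device MAC address to a username, and access-point association logs tie that MAC to a place and time. Joining the two places an account in a building. All names, log fields, and values below are invented for illustration.

```python
from datetime import datetime

# Hypothetical sketch: campus Wi-Fi controllers keep two kinds of records.
# Joining them answers "whose devices were in this building, and when?"

auth_log = {  # MAC -> username, from network authentication (invented data)
    "aa:bb:cc:00:00:01": "student_a",
    "aa:bb:cc:00:00:02": "student_b",
}

assoc_log = [  # (MAC, access point, timestamp), from AP association logs
    ("aa:bb:cc:00:00:01", "ap-arts-west-2f", datetime(2024, 7, 15, 14, 30)),
    ("aa:bb:cc:00:00:02", "ap-library-1f",   datetime(2024, 7, 15, 14, 35)),
]

def present_in(building_prefix: str, start: datetime, end: datetime) -> list[str]:
    """Return usernames whose devices associated with APs in a building
    during the given window."""
    return sorted({
        auth_log[mac]
        for mac, ap, ts in assoc_log
        if ap.startswith(building_prefix) and start <= ts <= end and mac in auth_log
    })

print(present_in("ap-arts-west",
                 datetime(2024, 7, 15, 14, 0),
                 datetime(2024, 7, 15, 15, 0)))
# -> ['student_a']
```

The Commissioner’s point is precisely that users connecting to the network would have no idea this join was possible, let alone that it might be used against them.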

The Information Commissioner found that the use of CCTV to identify protestors did not breach privacy, but that the use of Wi-Fi location data did, because the University’s policies lacked detail.

“Given that individuals would not have been aware of why their Wi-Fi location data was collected and how it may be used, they could not exercise an informed choice as to whether to use the Wi-Fi network during the sit-in, and be aware of the possible consequences for doing so,” the report found.

As the investigation into use of location data unfolded, the University changed its policies regarding use of location data. The Office of the Information Commissioner therefore decided not to issue a formal compliance notice, and will monitor the University to ensure it complies with its undertakings.

Source: Australian uni used Wi-Fi location data to ID protestors • The Register

Privacy‑Preserving Age Verification Falls Apart On Contact With Reality

[…] Identity‑proofing creates a privacy bottleneck. Somewhere, an identity provider must verify you. Even if it later mints an unlinkable token, that provider is the weak link—and in regulated systems it will not be allowed to “just delete” your information. As Bellovin puts it:

Regulation implies the ability for governments to audit the regulated entities’ behavior. That in turn implies that logs must be kept. It is likely that such logs would include user names, addresses, ages, and forms of credentials presented.

Then there’s the issue of fraud and duplication of credentials. Accepting multiple credential types increases coverage and increases abuse; people can and do hold multiple valid IDs:

The fact that multiple forms of ID are acceptable… exacerbates the fraud issue… This makes it impossible to prevent a single person from obtaining multiple primary credentials, including ones for use by underage individuals.

Cost and access will absolutely chill speech. Identity providers are expensive. If users pay, you’ve built a wealth test for lawful speech. If sites pay, the costs roll downhill (fees, ads, data‑for‑access) and coverage narrows to the cheapest providers who may also be more susceptible to breaches:

Operating an IDP is likely to be expensive… If web sites shoulder the cost, they will have to recover it from their users. That would imply higher access charges, more ads (with their own privacy challenges), or both.

Sharing credentials drives mission creep, and mission creep makes the technology more dangerous. If a token proves only “over 18,” people will share it (parents to kids, friends to friends). To deter that, providers tie tokens to identities/devices or bundle more attributes—making them more linkable and more revocable:

If the only use of the primary credential is obtaining age-verifying subcredentials, this isn’t much of a deterrent—many people simply won’t care… That, however, creates pressure for mission creep…, including opening bank accounts, employment verification, and vaccination certificates; however, this is also a major point of social control, since it is possible to revoke a primary credential and with it all derived subcredentials.

The end result, then, is that you’re not just attacking privacy again; you’re creating a tool for authoritarian pressure:

Those who are disfavored by authoritarian governments may lose access not just to pornography, but to social media and all of these other services.

He also grounds it in lived reality, with a case study that shows who gets locked out first:

Consider a hypothetical person “Chris”, a non-driving senior citizen living with an adult child in a rural area of the U.S… Apart from the expense— quite possibly non-trivial for a poor family—Chris must persuade their child to then drive them 80 kilometers or more to a motor vehicles office…

There is also the social aspect. Imagine the embarrassment to all of an older parent having to explain to their child that they wish to view pornography.

None of this is an attack on the math. It’s a reminder that deployment reality ruins the cryptographic ideal. There’s more in the paper, but you get the idea.

[…]

Source: Privacy‑Preserving Age Verification Falls Apart On Contact With Reality | Techdirt

Proton releases Lumo GPT 1.1: faster, more advanced, European and actually private

Today we’re releasing a powerful update to Lumo that gives you a more capable privacy-first AI assistant offering faster, more thorough answers with improved awareness of recent events.

Guided by feedback from our community, we’ve been busy upgrading our models and adding GPUs, which we’ll continue to do thanks to the support of our Lumo Plus subscribers. Lumo 1.1 performs significantly better across the board than the first version of Lumo, so you can now use it more effectively for a variety of use cases:

  • Get help planning projects that require multiple steps — it will break down larger goals into smaller tasks
  • Ask complex questions and get more nuanced answers
  • Generate better code — Lumo is better at understanding your requests
  • Research current events or niche topics with better accuracy and fewer hallucinations thanks to improved web search

New cat, new tricks, same privacy

The latest upgrade brings more accurate responses with significantly less need for corrections or follow-up questions. Lumo now handles complex requests much more reliably and delivers the precise results you’re looking for.

In testing, Lumo’s performance has increased across several metrics:

  • Context: 170% improvement in context understanding so it can accurately answer questions based on your documents and data
  • Coding: 40% better ability to understand requests and generate correct code
  • Reasoning: Over 200% improvement in planning tasks, choosing the right tools such as web search, and working through complex multi-step problems

Most importantly, Lumo does all of this while respecting the confidentiality of your chats. Unlike every major AI platform, Lumo is open source and built to be private by design. It doesn’t keep any record of your chats, your conversation history is secured with zero-access encryption so nobody else can see it, and your data is never used to train the models. Lumo is the only AI where your conversations are actually private.

Learn about Lumo privacy

Lumo mobile apps are now open source

Unlike Big Tech AIs that spy on you, Lumo is an open source application that exclusively runs open source models. Open source is especially important in AI because it confirms that the applications and models are not being used nefariously to manipulate responses to fit a political narrative or secretly leak data. While the Lumo web client is already open source, today we are also releasing the code for the mobile apps. In line with Lumo being the most transparent and private AI, we have also published the Lumo security model so you can see how Lumo’s zero-access encryption works and why nobody, not even Proton, can access your conversation history.

Source: Introducing Lumo 1.1 for faster, advanced reasoning | Proton

The EU could be scanning your chats by October 2025 with Chat Control

Denmark kicked off its EU Presidency on July 1, 2025, and, among its first actions, lawmakers swiftly reintroduced the controversial child sexual abuse (CSAM) scanning bill to the top of the agenda.

Dubbed Chat Control by its critics, the bill aims to introduce new obligations for all messaging services operating in Europe to scan users’ chats, even if they’re encrypted.

The proposal, however, has been failing to attract the needed majority since May 2022, with Poland’s Presidency being the last to give up on such a plan.

Denmark is a strong supporter of Chat Control. Now, the new rules could be adopted as early as October 14, 2025, if the Danish Presidency manages to find a middle ground among member states.

Crucially, according to the latest data leaked by the former MEP for the German Pirate Party, Patrick Breyer, many countries that said no to Chat Control in 2024 are now undecided, “even though the 2025 plan is even more extreme,” he added.

[…]

Under its first version, all messaging software providers would be required to perform indiscriminate scanning of private messages to look for CSAM – so-called ‘client-side scanning’. The proposal was met with a strong backlash, and the European Court of Human Rights ended up ruling against legal efforts to weaken the encryption of secure communications in Europe.
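To make the mechanism concrete, here is a toy sketch of hash-based client-side scanning, the general shape of what such a mandate would require. Real proposals rely on perceptual hashing (and, in some drafts, ML classifiers) rather than exact SHA-256 matching, and the blocklist entry below is invented.

```python
import hashlib

# Toy sketch of client-side scanning by hash matching: the client checks
# content against a blocklist BEFORE encrypting and sending it. The
# blocklist entry here is a made-up placeholder, and real systems use
# perceptual hashes that survive resizing/re-encoding, not exact SHA-256.

BLOCKLIST = {hashlib.sha256(b"known-bad-bytes").hexdigest()}

def scan_before_send(attachment: bytes) -> bool:
    """Return True if the attachment may be sent (no blocklist match)."""
    return hashlib.sha256(attachment).hexdigest() not in BLOCKLIST
```

The key objection is visible in the sketch itself: the check must run on plaintext, on the user’s device, before encryption — which is why critics argue any such scheme breaks the end-to-end encryption guarantee regardless of where the matching happens.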

In June 2024, Belgium then proposed a new text targeting only shared photos, videos, and URLs, subject to users’ permission. This version satisfied neither the industry nor voting EU members, due to its coercive nature: under the Belgian text, users would have to consent to shared material being scanned before encryption in order to keep using the functionality.

Source: The EU could be scanning your chats by October 2025 – here’s everything we know | TechRadar

Pluralistic: “Privacy preserving age verification” is bullshit

[…]

when politicians are demanding that technologists NERD HARDER! to realize their cherished impossibilities.

That’s just happened, and in relation to one of the scariest, most destructive NERD HARDER! tech policies ever to be assayed (a stiff competition). I’m talking about the UK Online Safety Act, which imposes a duty on websites to verify the age of people they communicate with before serving them anything that could be construed as child-inappropriate (a category that includes, e.g., much of Wikipedia):

https://wikimediafoundation.org/news/2025/08/11/wikimedia-foundation-challenges-uk-online-safety-act-regulations/

The Starmer government has, incredibly, developed a passion for internet regulations that are even stupider than Tony Blair’s and David Cameron’s. Requiring people to identify themselves (generally, via their credit cards) in order to look at porn will create a giant database of every kink and fetish of every person in the UK, which will inevitably leak and provide criminals and foreign spies with a kompromat system they can sort by net worth of the people contained within.

This hasn’t deterred Starmer, who insists that if we just NERD HARDER!, we can use things like “zero-knowledge proofs” to create a “privacy-preserving” age verification system, whereby a service can assure itself that it is communicating with an adult without ever being able to determine who it is communicating with.

In support of this idea, Starmer and co like to cite some genuinely exciting and cool cryptographic work on privacy-preserving credential schemes. Now, one of the principal authors of the key papers on these credential schemes, Steve Bellovin, has published a paper that is pithily summed up via its title, “Privacy-Preserving Age Verification—and Its Limitations”:

https://www.cs.columbia.edu/~smb/papers/age-verify.pdf

The tldr of this paper is that Starmer’s idea will not work and cannot work. The research he relies on to defend the technological feasibility of his cherished plan does not support his conclusion.

Bellovin starts off by looking at the different approaches various players have mooted for verifying their users’ age. For example, Google says it can deploy a “behavioral” system that relies on Google surveillance dossiers to make guesses about your age. Google refuses to explain how this would work, but Bellovin sums up several of the well-understood behavioral age estimation techniques and explains why they won’t work. It’s one thing to screw up age estimation when deciding which ad to show you; it’s another thing altogether to do this when deciding whether you can access the internet.

Others say they can estimate your age by using AI to analyze a picture of your face. This is a stupid idea for many reasons, not least of which is that biometric age estimation is notoriously unreliable when it comes to distinguishing, say, 16- or 17-year-olds from 18-year-olds. Nevertheless, there are sitting US Congressmen who not only think this would work – they labor under the misapprehension that this is already going on:

https://pluralistic.net/2023/04/09/how-to-make-a-child-safe-tiktok/

So that just leaves the privacy-preserving credential schemes, especially the Camenisch-Lysyanskaya protocol. This involves an Identity Provider (IDP) that establishes a user’s identity and characteristics using careful document checks and other procedures. The IDP then hands the user a “primary credential” that can attest to everything the IDP knows about the user, and any number of “subcredentials” that only attest to specific facts about that user (such as their age).

These are used in zero-knowledge proofs (ZKP) – a way for two parties to validate that one of them asserts a fact without learning what that fact is in the process (this is super cool stuff). Users can send their subcredentials to a third party, who can use a ZKP to validate them without learning anything else about the user – so you could prove your age (or even just prove that you are over 18 without disclosing your age at all) without disclosing your identity.
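Bellovin’s paper is about deployment, not the math, but the ZKP building block is worth seeing concretely. Below is a toy sketch of a Schnorr proof of knowledge, made non-interactive with Fiat-Shamir: the holder proves it knows the secret behind a public value without revealing that secret. This is not the Camenisch-Lysyanskaya scheme itself (CL adds issuer signatures and unlinkable subcredentials), and the group parameters are deliberately toy-sized.

```python
import hashlib
import secrets

# Toy Schnorr proof of knowledge (Fiat-Shamir). The prover knows x with
# y = g^x mod p and convinces a verifier of that without revealing x.
# Parameters are far too small for real use; illustration only.

q = 1019        # prime order of the subgroup
p = 2 * q + 1   # 2039, a safe prime
g = 4           # generator of the order-q subgroup

def challenge(t: int, y: int, context: str) -> int:
    """Fiat-Shamir challenge: hash the commitment, public key, and context."""
    data = f"{t}|{y}|{context}".encode()
    return int(hashlib.sha256(data).hexdigest(), 16) % q

def prove(x: int, y: int, context: str) -> tuple[int, int]:
    r = secrets.randbelow(q)      # one-time nonce
    t = pow(g, r, p)              # commitment
    c = challenge(t, y, context)  # challenge derived by hashing
    s = (r + c * x) % q           # response; reveals nothing about x alone
    return t, s

def verify(y: int, proof: tuple[int, int], context: str) -> bool:
    t, s = proof
    c = challenge(t, y, context)
    # g^s == t * y^c  iff  s == r + c*x for the x behind y
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = secrets.randbelow(q)          # the holder's credential secret
y = pow(g, x, p)                  # the public value (published, say, by an IDP)
proof = prove(x, y, "over-18 check for example.com")
assert verify(y, proof, "over-18 check for example.com")
```

In a credential system, y would be bound into an IDP-issued subcredential attesting “over 18,” so the website learns only that the prover holds the matching secret, nothing else. The math works; Bellovin’s point is that everything around it (issuance, logging, governance) is where the privacy fails.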

There’s some good news for implementing CL on the web: rather than developing a transcendentally expensive and complex new system for these credential exchanges and checks, CL can piggyback on the existing Public Key Infrastructure (PKI) that powers your browser’s ability to have secure sessions when you visit a website with https:// in front of the address (instead of just http://).

However, doing so poses several difficulties, which Bellovin enumerates under a usefully frank section header: “INSURMOUNTABLE OBSTACLES.”

The most insurmountable of these obstacles is getting set up with an IDP in the first place – that is, proving who you are to some agency, but only one such agency (so you can’t create two primary credentials and share one of them with someone underage). Bellovin cites Supreme Court cases about voter ID laws and the burdens they impose on people who are poor, old, young, disabled, rural, etc.

Fundamentally, it can be insurmountably hard for a lot of people to get, say, a driver’s license, or any other singular piece of ID that they can provide to an IDP in order to get set up on the system.

The usual answer for this is for IDPs to allow multiple kinds of ID. This does ease the burden on users, but at the expense of creating fatal weaknesses in the system: if you can set up an identity with multiple kinds of ID, you can visit different IDPs and set up an ID with each (just as many Americans today have driver’s licenses from more than one state).

The next obstacle is “user challenges,” like the problem of households with shared computers, or computers in libraries, hotels, community centers, and other public places. The only effective way to handle these is to create (expensive) online credential stores, which are likely to be out of reach of the poor and disadvantaged people who disproportionately rely on public or shared computers.

Next are the “economic issues”: this stuff is expensive to set up and maintain, and someone’s gotta pay for it. We could ask websites that offer kid-inappropriate content to pay for it, but that sets up an irreconcilable conflict of interest. These websites are going to want to minimize their costs, and everything they can do to reduce costs will make the system unacceptably worse. For example, they could choose only to set up accounts with IDPs that are local to the company that operates the server, meaning that anyone who lives somewhere else and wants to access that website is going to have to somehow get certified copies of e.g. their birth certificate and driver’s license to IDPs on the other side of the planet. The alternative to having websites foot the bill for this is asking users to pay for it – meaning that, once again, we exclude poor people from the internet.

Finally, there’s “governance”: who runs this thing? In practice, the security and privacy guarantees of the CL protocol require two different kinds of wholly independent institutions: identity providers (who verify your documents), and certificate authorities (who issue cryptographic certificates based on those documents). If these two functions take place under one roof, the privacy guarantees of the system immediately evaporate.

An IDP’s most important role is verifying documents and associating them with a specific person. But not all IDPs will be created equal, and people who wish to cheat the system will gravitate to the worst IDPs. However, lots of people who have no nefarious intent will also use these IDPs, merely because they are close by, or popular, or were selected at random. A decision to strike off an IDP and rescind its verifications will force lots of people – potentially millions of people – to start over with the whole business of identifying themselves, during which time they will be unable to access much of the web. There’s no practical way for the average person to judge whether an IDP they choose is likely to be found wanting in the future.

So we can regulate IDPs, but who will do the regulation? Age verification laws affect people outside of a government’s national territory – anyone seeking to access content on a webserver falls under age verification’s remit. Remember, IDPs handle all kinds of sensitive data: do you want Russia, say, to have a say in deciding who can be an IDP and what disclosure rules you will have to follow?

To regulate IDPs (and certificate authorities), these entities will have to keep logs, which further compromises the privacy guarantees of the CL protocol.

Looming over all of this is a problem with the CL protocol being built on regulated entities: CL is envisioned as a way to do all kinds of business, from opening a bank account to proving your vaccination status or your right to work or receive welfare. Authoritarian governments that order primary credential revocations of their political opponents could thoroughly and terrifyingly “unperson” them at the stroke of a pen.

The paper’s conclusions provide a highly readable summary of these issues, which constitute a stinging rebuke to anyone contemplating age-verification schemes. These go well beyond the UK, and are in the works in Canada, Australia, the EU, Texas and Louisiana.

Age verification is an impossibility, and an impossibly terrible idea with impossibly vast consequences for privacy and the open web, as my EFF colleague Jason Kelley explained on the Malwarebytes podcast:

https://www.malwarebytes.com/blog/podcast/2025/08/the-worst-thing-for-online-rights-an-age-restricted-grey-web-lock-and-code-s06e16

Politicians – even nontechnical ones – can make good tech policy, provided they take expert feedback seriously (and distinguish it from self-interested industry lobbying).

When it comes to tech policy, wanting it badly is not enough. The fact that it would be really cool if we could get technology to do something has no bearing on whether we can actually get technology to do that thing. NERD HARDER! isn’t a policy, it’s a wish.

Wish in one hand and shit in the other and see which one will be full first:

https://www.reddit.com/r/etymology/comments/oqiic7/studying_the_origins_of_the_phrase_wish_in_one/

Source: Pluralistic: “Privacy preserving age verification” is bullshit (14 Aug 2025) – Pluralistic: Daily links from Cory Doctorow

UK passport database images used in facial recognition scans

Privacy groups report a surge in UK police facial recognition scans of databases secretly stocked with passport photos, without parliamentary oversight.

Big Brother Watch says the UK government has allowed images from the country’s passport and immigration databases to be made available to facial recognition systems, without informing the public or parliament.

The group claims the passport database contains around 58 million headshots of Brits, plus a further 92 million made available from sources such as the immigration database, visa applications, and more.

By way of comparison, the Police National Database contains circa 20 million photos of those who have been arrested by, or are at least of interest to, the police.

In a joint statement, Big Brother Watch, its director Silkie Carlo, Privacy International, and its senior technologist Nuno Guerreiro de Sousa, described the databases and lack of transparency as “Orwellian.” They have also written to both the Home Office and the Metropolitan Police, calling for a ban on the practice.

The comments come after Big Brother Watch submitted Freedom of Information requests, which revealed a significant uptick in police scanning the databases in question as part of the force’s increasing facial recognition use.

The number of searches by 31 police forces against the passport databases rose from two in 2020 to 417 by 2023, and scans using the immigration database photos rose from 16 in 2023 to 102 the following year.

Carlo said: “This astonishing revelation shows both our privacy and democracy are at risk from secretive AI policing, and that members of the public are now subject to the inevitable risk of misidentifications and injustice. Police officers can secretly take photos from protests, social media, or indeed anywhere and seek to identify members of the public without suspecting us of having committed any crime.

“This is a historic breach of the right to privacy in Britain that must end. We’ve taken this legal action to defend the rights of tens of millions of innocent people in Britain.”

[…]

Recent data from the Met attempted to imbue a sense of confidence in facial recognition, as the number of arrests the technology facilitated passed the 1,000 mark, the force said in July.

However, privacy campaigners were quick to point out that this accounted for just 0.15 percent of the total arrests in London since 2020. They suggested that despite the shiny 1,000 number, this did not represent a valuable return on investment in the tech.

Alas, the UK has not given up on its pursuit of greater surveillance powers. Prime Minister Keir Starmer, a former human rights lawyer, is a big fan of FR, having said last year that it was the answer to preventing riots like those that broke out across the UK following the Southport murders.

Source: UK passport database images used in facial recognition scans • The Register