DHS wants more biometric data from more people – even from citizens

If you’re filing an immigration form – or helping someone who is – the Feds may soon want to look in your eyes, swab your cheek, and scan your face. The US Department of Homeland Security wants to greatly expand biometric data collection for immigration applications, covering immigrants and even some US citizens tied to those cases.

DHS, through its component agency US Citizenship and Immigration Services, on Monday proposed a sweeping expansion of the agency’s collection of biometric data. While ostensibly about verifying identities and preventing fraud in immigration benefit applications, the proposed rule goes much further than simply ensuring applicants are who they claim to be.

First off, the rule proposes expanding when DHS can collect biometric data from immigration benefit applicants, as “submission of biometrics is currently only mandatory for certain benefit requests and enforcement actions.” DHS wants to change that, including by requiring practically everyone an immigrant is associated with to submit their biometric data.

“DHS proposes in this rule that any applicant, petitioner, sponsor, supporter, derivative, dependent, beneficiary, or individual filing or associated with a benefit request or other request or collection of information, including U.S. citizens, U.S. nationals and lawful permanent residents, and without regard to age, must submit biometrics unless DHS otherwise exempts the requirement,” the rule proposal said.

DHS also wants to require the collection of biometric data from “any alien apprehended, arrested or encountered by DHS.”

It’s not explicitly stated in the rule proposal why US citizens associated with immigration benefit applicants would be required to submit biometric data. DHS didn’t answer questions to that end, though the rule stated that US citizens would also be required to submit biometric data “when they submit a family-based visa petition.”

Give me your voice, your eye print, your DNA samples

In addition to expanded collection, the proposed rule also changes the definition of what DHS considers to be valid biometric data.

“Government agencies have grouped together identifying features and actions, such as fingerprints, photographs, and signatures under the broad term, biometrics,” the proposal states. “DHS proposes to define the term ‘biometrics’ to mean ‘measurable biological (anatomical, physiological or molecular structure) or behavioral characteristics of an individual,'” thus giving DHS broad leeway to begin collecting new types of biometric data as new technologies are developed.

The proposal mentions several new biometric technologies DHS wants the option to use, including ocular imagery, voice prints and DNA, all on the table per the new rule.

[…]

Source: DHS wants more biometric data – even from citizens • The Register

Music festivals to collect data with RFID wristbands. Also, randomly, fascinating information about data Flitsmeister collects.

This summer, Dutch music festivals will use RFID wristbands to collect visitor data. The technology has been around for a while, but the innovation lies in its application. The wristbands are anonymous by default, but users can activate them to participate in loyalty programs or unlock on-site experiences. Visitor privacy is paramount; overly invasive tracking is avoided.

This is according to Michael Guntenaar, Managing Director at Superstruct Digital Services, in the Emerce TV video ‘Data is the new headliner at dance festivals’. Superstruct is a network of approximately 80 large festivals (focused on experience and brand identity) spread across Europe and Australia. ID&T, known for events such as Sensation, Mysteryland, and Defqon.1, joined Superstruct in September 2021. Tula Daans, Data Analyst Brand Partnerships, also joined the conversation on behalf of ID&T.

Festivals use various data sources, primarily ticket data (age, location, gender/gender identity), but also marketing data (social media), consumption data (food and drinks), and post-event surveys.

For brand partnerships, surveys are sent to visitors after the event to gauge whether they saw brands, what they thought of them, and thus gain insight into brand perception. Deliberately, no detailed feedback is requested during the festival to avoid disturbing the visitor experience, says Guntenaar.

The Netherlands is a global leader in data collection. Defqon.1 is mentioned as a breeding ground for experiments with data and technology, due to its technically advanced team and highly engaged target group.

[…]

In a second video, ‘Real-time mobility info in a complex data landscape’, Jorn de Vries, managing director at Flitsmeister, talks about mobility data and the challenges and opportunities within this market. The market for mobility data, which ranges from traffic flows to speed camera notifications, is crowded, with players such as Garmin, Google, Waze, and TomTom.

Nevertheless, Flitsmeister still sees room for growth, because mobility is a timeless need that brings its own challenges, such as the desire to get from A to B quickly, efficiently, sustainably, and cheaply. Innovation is essential to maintain a place in this market, says De Vries.

Flitsmeister has a large online community of almost 3 million monthly active users, one that has kept growing over the years, even after paid propositions were introduced. What distinguishes Flitsmeister from global players such as Google and Waze, according to De Vries, is its local embeddedness, with marketing and content that match the language and use cases of users in the Benelux. The company also collaborates with governments through partnerships, allowing it to offer specific local services, such as warnings for approaching emergency vehicles.

Technically, competitors could probably do the same, says De Vries, but it is unlikely to be a priority for them precisely because it is local. Flitsmeister, by contrast, believes you have to commit fully to serve a market properly, even when that requires investments that are only relevant for the Netherlands. Another example of local embeddedness is its presence on almost every radio station.

The Flitsmeister app now has eight main functions. In addition to the well-known alerts for speed cameras and average-speed checks, it warns drivers early about approaching emergency vehicles (ambulances, fire brigade, Rijkswaterstaat vehicles) when such a vehicle comes their way with its blue lights on. The app also provides traffic jam information and warnings for incidents, stationary vehicles, and roadworks. Flitsmeister tries to warn of the start of a traffic jam earlier than the matrix signs above the road, because it is not bound to the gantries where those signs hang.

Navigation is an added feature. In addition, there is paid parking at the end of the journey. Flitsmeister also has links with so-called smart traffic lights, where they receive data about the status of the light and share data with the intersection to optimize it. This can, for example, lead to a green light if you approach an intersection at night and there is no other traffic. More than 1500 smart intersections in the Netherlands are already equipped. Flitsmeister also receives data from matrix signs, including red crosses, arrows, and adjusted maximum speeds.

Privacy is a crucial topic when bringing consumers and data together. Flitsmeister has seen privacy from the start as a unique selling point (USP), if handled correctly. Privacy awareness is stronger in countries like Germany than in the Benelux, and privacy-friendly companies have an edge there in the eyes of consumers. Large players such as Google and Waze operate on the same legal playing field as Flitsmeister, but differ in what they want to do, can do, and actually do.

Flitsmeister does collect live GPS data that provides a lot of insight into traffic movements. It is running pilots with Rijkswaterstaat and its parent company Bmobile, including on the A9, where induction-loop data in the asphalt is combined with Flitsmeister’s real-time data. Road loops are expensive to maintain and only measure at fixed points; combining them with live GPS data yields a more accurate and cost-efficient picture, including on the stretches between the loops.

Flitsmeister also works with data that detects real-time situations and provides early advice. They are doing pilots with ‘trigger based rerouting’, where users are proactively rerouted if a reported incident on their route is likely to affect their travel time, even if the travel time has not yet changed at that moment. The challenge here is that people must be receptive to this and understand the rationale behind the rerouting.
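The ‘trigger based rerouting’ idea described above can be sketched in a few lines. This is purely an illustrative model, not Flitsmeister’s actual implementation; the `Incident` type, the delay threshold, and the segment names are all invented for the example. The point is that the decision keys off a *predicted* delay from a reported incident, not off measured travel times, which may not yet have changed.

```python
# Illustrative sketch of proactive "trigger based rerouting":
# reroute as soon as a reported incident is predicted to affect
# the route, before measured travel times change.
from dataclasses import dataclass

@dataclass
class Incident:
    segment: str                 # road segment the incident was reported on
    predicted_delay_min: float   # model's predicted extra delay, in minutes

def should_reroute(route_segments, incident, threshold_min=5.0):
    """Reroute proactively if the incident lies on the planned route and
    its predicted (not yet measured) delay exceeds the threshold."""
    return (incident.segment in route_segments
            and incident.predicted_delay_min > threshold_min)

route = ["A2-south", "A27-north", "A28-east"]
print(should_reroute(route, Incident("A27-north", 12.0)))  # True
print(should_reroute(route, Incident("A4-west", 30.0)))    # False
```

The hard part, as De Vries notes, is not the logic but the user: a driver rerouted while traffic still looks fine needs to be told why.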

Although there is a lot of talk about connected vehicle data, Flitsmeister’s focus is more on strengthening the relationship with the driver than with the vehicle itself. Jorn de Vries believes that the driver will ultimately lead, as the need for mobility comes from the individual and the vehicle facilitates this.

The video Data is the new headliner at dance festivals can be watched for free. The collection Customer data: trends, innovation and future will be supplemented in the coming months and can be viewed for free after registration.

Source: Kagi Translate | Emerce TV: music festivals want to collect data with RFID wristbands

Clearview AI faces criminal heat for ignoring EU data fines – wait: these creeps still exist?

Privacy advocates at Noyb filed a criminal complaint against Clearview AI for scraping social media users’ faces without consent to train its AI algorithms.

Austria-based Noyb (None of Your Business) is targeting the US company and its executives, arguing that if successful, individuals who authorized the data collection could face criminal penalties, including imprisonment.

The complaint focuses largely on Clearview’s apparent disregard for fines from France, Greece, Italy, the Netherlands, and the UK. Aside from the UK — where Clearview recently lost its appeal of a $10 million fine from the Information Commissioner’s Office — the company has yet to pay other fines totaling more than $100 million, Noyb claims.

“EU data protection authorities did not come up with a way to enforce its fines and bans against the US company, allowing Clearview AI to effectively dodge the law,” said Noyb in its announcement today.

Max Schrems, privacy lawyer and founder of Noyb, said: “Clearview AI seems to simply ignore EU fundamental rights and just spits in the face of EU authorities.”

The criminal complaint, filed with Austrian public prosecutors, hinges on Article 84 of the GDPR, which allows EU member states to seek proportionate punishments for data protection violations, including through criminal proceedings.

Clearview AI claims it has collected more than 60 billion images to help law enforcement agencies improve facial recognition tech.

Scraping data is not inherently illegal; however, Clearview’s sweeping collection of social media photos for commercial gain has repeatedly violated the GDPR across Europe.

Austria ruled the company’s practices illegal in 2023, though it imposed no fine.

Noyb is using a provision in Austria’s own implementation of the GDPR that allows criminal proceedings to be brought against managers of organizations that flout data protection laws.

“We even run cross-border criminal procedures for stolen bikes, so we hope that the public prosecutor also takes action when the personal data of billions of people was stolen – as has been confirmed by multiple authorities,” said Schrems.

Source: Clearview AI faces criminal heat for ignoring EU data fines • The Register

CBP will photograph non-citizens entering and exiting the US for its facial recognition database

The US Customs and Border Protection (CBP) submitted a new measure that allows it to photograph any non-US citizen who enters or exits the country for facial recognition purposes. According to a filing with the government’s Federal Register, CBP and the Department of Homeland Security are looking to crack down on terrorism threats, fraudulent use of travel documents, and travelers who overstay their authorized stay.

The filing detailed that CBP will “implement an integrated, automated entry and exit data system to match records, including biographic data and biometrics, of aliens entering and departing the United States.” The government agency already has the ability to request photos and fingerprints from anyone entering the country, but this new rule change would allow for requiring photos of anyone exiting as well. These photos would “create galleries of images associated with individuals, including photos taken by border agents, and from passports or other travel documents,” according to the filing, adding that these galleries would be compared to live photos at entry and exit points.

These new requirements are scheduled to go into effect on December 26, but CBP will need some time to implement a system to handle the extra demand. According to the filing, the agency said “a biometric entry-exit system can be fully implemented at all commercial airports and sea ports for both entry and exit within the next three to five years.”

Source: CBP will photograph non-citizens entering and exiting the US for its facial recognition database

Microsoft illegally tracked students via 365 Education, must now say what it did with the data

An Austrian digital privacy group has claimed victory over Microsoft after the country’s data protection regulator ruled the software giant “illegally” tracked students via its 365 Education platform and used their data.

noyb said the ruling [PDF] by the Austrian Data Protection Authority also confirmed that Microsoft had tried to shift responsibility for access requests to local schools, and the software and cloud giant would have to explain how it used user data.

The ruling could have far-reaching effects for Microsoft and its obligations to inform Microsoft 365 users across Europe about what it is doing with their data, noyb argues.

The complaint dates back to the COVID-19 pandemic, when schools rapidly shifted to online learning, using the likes of 365 Education.

The privacy group said: “Microsoft shifted all responsibility to comply with privacy laws onto schools and national authorities – that have little to no actual control over the use of student data.”

When the complainant filed an access request to see what information was being processed, “this led to massive finger pointing: Microsoft simply referred the complainant to its local school.”

But the school and education authorities could only provide minimal information. The school, for example, could not access information that rested with Microsoft. “No one felt able to comply with GDPR rights.”

This prompted a complaint against the school, national and local education authorities, and Microsoft.

The ruling, machine translated, said: “It is determined that Microsoft, as a controller, violated the complainant’s right of access (Art. 15 GDPR) by failing to provide complete information about the data processed when using Microsoft Education 365.”

Microsoft was ordered to provide complete information about the data transmitted, and to provide clear explanations of terms such as “internal reporting,” “business modelling” and “improvement of core functionality.” It must also disclose if information was transferred to third parties.

[…]

 

Source: Microsoft ‘illegally’ tracked students via 365 Education • The Register

Germany against ChatControl: Denmark takes it off the table so the EU can’t vote against it NOW, but will re-try (3rd time lucky) later again, when the people aren’t looking.

Germany does not support the Danish proposal on the so-called CSA regulation, which is called ‘chat control’ by critics.

The proposal was to be voted on on Tuesday in the EU Council of Ministers, but it has now been taken off the table.

The Danish government, which currently holds the EU Presidency, has chosen to withdraw the proposal from the vote. This is stated in a press release from the German parliament.

[…]

Among other things, 500 researchers from 34 countries worldwide, including 25 from Danish universities, have signed a letter criticizing the CSA regulation, as they believe, among other things, that the method will be ineffective and that there will at the same time be a high risk of misuse of information.

And leading encryption experts have compared the proposal to placing a spy microphone in everyone’s pocket.

[…]

The Danish Minister of Justice, Peter Hummelgaard (S), confirms in a written reply to DR News that the proposal will not be discussed at the Council meeting next week.

“It’s no secret that it’s a difficult case with many considerations that need to be balanced. This is shown by the considerable public debate of recent weeks as well.

“Since the necessary support for the current compromise proposal has not yet been established, prior to the Council meeting next week, the proposal will not be discussed by the ministers at the Council meeting,” he said.

Despite the fact that the government has not succeeded in finding the necessary support, the Minister of Justice does not give up.

“However, the Danish EU Presidency will continue to work with the Member States to find a solution, and therefore negotiations on the technical details of the proposal will continue.”

[…]

“Both ministries (the German Interior and Justice Ministries) stressed that, like many other EU countries, they do not support the Danish proposal in its current form,” it said.

Source: Tyskland fejer kontroversielt ‘chatkontrol’-forslag af bordet | Politik | DR

An absolute gutter move by Denmark, freeing them up to try again a 3rd time – and call it a second attempt. Maybe they will try over December, April or July, when the proletariat is on holiday and won’t raise such a stink about being spied on 24/7 by their own governments. There is nothing democratic about the way this is being handled.

Germany slams brakes on EU’s Chat Control snoopfest

Germany has committed to oppose the EU’s controversial “Chat Control” regulations following huge pressure from multiple activists and major organizations.

The draft regs would allow authorities to compel providers of communications services – such as WhatsApp, Signal, etc – to monitor user comms for potential child sexual abuse material. And they wouldn’t exempt encrypted services.

Jens Spahn, a member of the Bundestag for Germany’s Christian Democratic Union (CDU) – part of the ruling coalition in the country – confirmed in a statement on Tuesday that the German government would not allow the proposed regulations, which are commonly referred to as Chat Control, to become law.

“We, the CDU/CSU parliamentary group in the Bundestag, are opposed to the unwarranted monitoring of chats. That would be like opening all letters as a precautionary measure to see if there is anything illegal in them. That is not acceptable, and we will not allow it.”

As The Reg has mentioned previously, to pass the legislation, EU leaders need support from nations representing a majority of the member-state bloc’s population – which is why Germany is a key player.

The news follows speculation last week that Germany would reverse its stance and oppose the Child Sexual Abuse (CSA) Regulation, which EU politicians have tried to pass since it was first tabled in 2022.

Essentially, it’s the EU’s version of the UK’s long-held ambition to force encrypted messaging platforms to break end-to-end encryption (E2EE), packaged under a similar guise.

If passed, the CSA Regulation would require communications platforms to deploy AI-powered content filters to ensure CSA material was blocked, and those possessing and sharing it be brought to justice.

And, of course, it would also undermine E2EE, theoretically allowing the EU to spy on any citizen’s private communications.

So far, Chat Control has naturally drawn the same heated opposition as the UK’s equivalent plans, first through the Investigatory Powers Act and later through the Online Safety Act.

[…]

Source: Germany slams brakes on EU’s Chat Control snoopfest • The Register

Another Day, Another Age Verification Data Breach: Discord’s Third-Party Partner Leaked Government IDs. That didn’t take long, did it?

Once again, we’re reminded why age verification systems are fundamentally broken when it comes to privacy and security. Discord has disclosed that one of its third-party customer service providers was breached, exposing user data, including government-issued photo IDs, from users who had appealed age determinations.

Data potentially accessed by the hack includes things like names, usernames, emails, and the last four digits of credit card numbers. The unauthorized party also accessed a “small number” of images of government IDs from “users who had appealed an age determination.” Full credit card numbers and passwords were not impacted by the breach, Discord says.

Seems pretty bad.

What makes this breach particularly instructive is that it highlights the perverse incentives created by age verification mandates. Discord wasn’t collecting government IDs because they wanted to—they were responding to age determination appeals, likely driven by legal and regulatory pressures to keep underage users away from certain content. The result? A treasure trove of sensitive identity documents sitting in the systems of a third-party customer service provider that had no business being in the identity verification game.

To “protect the children” we end up putting everyone at risk.

This is exactly the kind of incident that privacy advocates have been warning about for years as lawmakers push for increasingly stringent age verification requirements across the internet. Every time these systems are implemented, we’re told they’re secure, that the data will be protected, that sophisticated safeguards are in place. And every time, we eventually get stories like this one.

The pattern reveals a fundamental misunderstanding of how security works in practice versus theory. Age verification proponents consistently treat identity document collection as a simple technical problem with straightforward solutions, ignoring the complex ecosystem these requirements create. Companies like Discord find themselves forced to collect documents they don’t want, storing them with third-party processors they don’t fully control, creating attack surfaces that wouldn’t otherwise exist.

These third parties become attractive targets precisely because they aggregate identity documents from multiple platforms—a single breach can expose IDs collected on behalf of dozens of different services. When the inevitable breach occurs, it’s not just usernames and email addresses at risk—it’s the kind of documentation that can enable identity theft and fraud for years to come, affecting people who may have forgotten they ever uploaded an ID to appeal an automated age determination.

[…]

The fundamental problem remains: we’re creating systems that require the collection and storage of highly sensitive identity documents, often by companies that aren’t primarily in the business of securing such data. This isn’t Discord’s fault specifically—they were dealing with age verification appeals, likely driven by regulatory or legal pressures to prevent underage users from accessing certain content or features.

This breach should serve as yet another data point in the growing pile of evidence that age verification systems create more problems than they solve. The irony is that lawmakers pushing these requirements often claim to be protecting children’s privacy, while simultaneously mandating the creation of vast databases of identity documents that inevitably get breached. We’ve seen similar incidents affect everything from adult websites to social media platforms to online retailers, all because policymakers have decided that collecting copies of driver’s licenses and passports is somehow a reasonable solution to online age verification.

The real tragedy is that this won’t be the last such breach we see. As long as lawmakers continue pushing for more aggressive age verification requirements without considering the privacy and security implications, we’ll keep seeing stories like this one. The question isn’t whether these systems will be breached—it’s when, and how many people’s sensitive documents will be exposed in the process.

[…]

Source: Another Day, Another Age Verification Data Breach: Discord’s Third-Party Partner Leaked Government IDs | Techdirt

If you want to look at previous articles telling you what an insanely bad idea mandatory age verification systems are and how they are insecure, you can just search this blog.

Chat Control Is Back On The Menu In The EU. It Still Must Be Stopped

The European Union Council is once again debating its controversial message scanning proposal, aka “Chat Control,” that would lead to the scanning of private conversations of billions of people.

Chat Control, which EFF has strongly opposed since it was first introduced in 2022, keeps being mildly tweaked and pushed by one Council presidency after another.

Chat Control is a dangerous legislative proposal that would make it mandatory for service providers, including end-to-end encrypted communication and storage services, to scan all communications and files to detect “abusive material.” This would happen through a method called client-side scanning, which checks for specific content on a device before it is sent. In practice, Chat Control is chat surveillance: it requires access to everything on a device and monitors it all indiscriminately. In a memo, the Danish Presidency claimed this does not break end-to-end encryption.

This is absurd.

We have written extensively that client-side scanning fundamentally undermines end-to-end encryption, and obliterates our right to private spaces. If the government has access to one of the “ends” of an end-to-end encrypted communication, that communication is no longer safe and secure. Pursuing this approach is dangerous for everyone, but is especially perilous for journalists, whistleblowers, activists, lawyers, and human rights workers.
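To see why, consider a minimal sketch of hash-based client-side scanning. Everything here is a simplified assumption: real proposals involve perceptual hashes and AI classifiers rather than exact SHA-256 matches, and `encrypt` below is a placeholder, not real cryptography. But the structural problem is visible regardless of those details: the content is inspected, and can be reported, before encryption ever happens.

```python
# Toy illustration (simplified assumption, not any real proposal's design):
# the client checks each message against a supplied list of flagged hashes
# *before* encrypting it. The "end" is inspected pre-encryption, which is
# why end-to-end encryption guarantees no longer hold.
import hashlib

FLAGGED_HASHES = {hashlib.sha256(b"known abusive file").hexdigest()}

def encrypt(data: bytes) -> bytes:
    # Placeholder stand-in for real E2EE; NOT actual cryptography.
    return bytes(b ^ 0x5A for b in data)

def send_message(plaintext: bytes):
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in FLAGGED_HASHES:
        # In a deployed system this match would be reported to authorities
        # before the user's encryption ever protected the content.
        return ("REPORTED", digest)
    return ("SENT", encrypt(plaintext))

print(send_message(b"hello")[0])               # SENT
print(send_message(b"known abusive file")[0])  # REPORTED
```

Whoever controls `FLAGGED_HASHES` controls what gets flagged, and nothing in the architecture limits it to abuse material.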

If passed, Chat Control would undermine the privacy promises of end-to-end encrypted communication tools, like Signal and WhatsApp. The proposal is so dangerous that Signal has stated it would pull its app out of the EU if Chat Control is passed. Proponents even seem to realize how dangerous this is, because state communications are exempt from this scanning in the latest compromise proposal.

This doesn’t just affect people in the EU; it affects everyone around the world, including in the United States. If platforms decide to stay in the EU, they would be forced to scan the conversations of everyone in the EU. If you’re not in the EU but chat with someone who is, then your privacy is compromised too. Passing this proposal would pave the way for authoritarian and tyrannical governments around the world to follow suit with their own demands for access to encrypted communication apps.

Even if you take it in good faith that the government would never do anything wrong with this power, events like Salt Typhoon show there’s no such thing as a system that’s only for the “good guys.”

Despite strong opposition, Denmark is pushing forward and taking its current proposal to the Justice and Home Affairs Council meeting on October 14th.

We urge the Danish Presidency to drop its push for scanning our private communication and consider fundamental rights concerns. Any draft that compromises end-to-end encryption and permits scanning of our private communication should be blocked or voted down.

Phones and laptops must work for the users who own them, not act as “bugs in our pockets” in the service of governments, foreign or domestic. The mass scanning of everything on our devices is invasive, untenable, and must be rejected.

Republished from the EFF’s Deeplinks blog.

Source: Chat Control Is Back On The Menu In The EU. It Still Must Be Stopped | Techdirt

No account? No Windows 11 for you, says Microsoft

Microsoft is closing a popular loophole that allowed users to install Windows 11 without a Microsoft account.

The change has appeared in recent Insider builds of Windows 11, indicating it is likely to be included in the production version soon.

Microsoft refers to these loopholes as “known mechanisms” and in this instance is talking about local commands. You can learn all about these in our piece on installing Windows 11 with a local account, but suffice it to say that start ms-cxh:localonly is no more.

“While these mechanisms were often used to bypass Microsoft account setup, they also inadvertently skip critical setup screens, potentially causing users to exit OOBE with a device that is not fully configured for use,” Microsoft said.

“Users will need to complete OOBE with internet and a Microsoft account, to ensure [the] device is set up correctly.”

As far as Redmond is concerned, this is all for the user’s own good. It is also important to note that managed devices are not directly affected, just hardware that users want to get running with Windows 11 without having to deal with a Microsoft Account during setup.

The change is part of Microsoft’s ongoing game of Whac-A-Mole with users trying to find ways of avoiding its online services. In March, it removed the bypassnro.cmd script that allowed users to get through the Windows 11 setup without needing an internet connection. That time, Microsoft said the change was to “enhance security and user experience of Windows 11.”

There remain a number of ways to avoid the Microsoft account requirement during setup, including setting up an unattended installation, but these are more complicated. It is also clear that Microsoft is determined to continue closing loopholes where it can.

It is getting increasingly difficult to use Windows 11 on an unmanaged device without a Microsoft account. Users who don’t want to sign up should perhaps consider whether it’s time to look at an alternative operating system instead.

Source: No account? No Windows 11 for you, says Microsoft • The Register

UK government says digital ID won’t be compulsory – unless you want a job. Even Palantir steps back from this one.

The British government has finally given more details about the proposed digital ID project, directly responding to the 2.76 million naysayers who signed an online petition calling for it to be ditched.

This came a day after controversial spy-tech biz Palantir said it has no intention of helping the government implement the initiative – announced last week by prime minister Keir Starmer but not included in his political party’s manifesto at last year’s general election.

It is for this reason that Louis Mosley, UK boss at Palantir – the grandson of Sir Oswald Mosley – says his employer is not getting involved, despite being mentioned as a potential bidder.

“Digital ID is not one that was tested at the last election. It wasn’t in the manifesto. So we haven’t had a clear resounding public support at the ballot box for its implementation. So it isn’t one for us,” he told The Times.

[…]

Following in the footsteps of Estonia and other nations, including China, the UK government wants to introduce a “free” digital ID card for people aged 16 and over – though it is consulting on whether this should start at 13 – to let people access public and private services “seamlessly.” It will “build on” GOV.UK One Login and the GOV.UK Wallet, we’re told.

“This system will allow people to access government services – such as benefits or tax records – without needing to remember multiple logins or provide physical documents.

[…]

The card, scheduled to be implemented by the end of the current Parliament, means employers will have to check digital ID when going through right-to-work checks, and despite previously saying the card would be mandatory, the government confirmed: “For clarity, it will not be a criminal offence to not hold a digital ID and police will not be able to demand to see a digital ID as part of a ‘stop and search.’”

[…]

Big Brother Watch says the national ID system is a “serious threat to civil liberties.”

“Digital ID systems can be uniquely harmful to privacy, equality and civil liberties. They would allow the state to amass vast amounts of personal information about the public in centralised government databases. By linking government records through a unique single identifier, digital ID systems would make it very easy to build up a comprehensive picture of an individual’s life.”

[…]

Source: UK government says digital ID won’t be compulsory – honest • The Register

It also creates a single point of entry for anyone willing to hack the database. Centralised databases are incredibly broken ideas.

Also see: New digital ID will be mandatory to work in the UK. Ausweis bitte!

And a quick search for “centralised database”

Outrage That NL Tax and Customs Authorities will give all data to US by switching to MS 365: ‘Insult to Parliament’

‘An insult not only to the House of Representatives, but also to Dutch and European businesses’, says GroenLinks-PvdA MP Barbara Kathmann about the switch of government services to Microsoft. Earlier today, outgoing State Secretary for Taxation Eugène Heijnen (BBB) informed the House of Representatives about the switch of the Tax Authorities, the Allowances department, and Customs to Microsoft 365. This means that these services will become dependent on this American software giant for their daily work.

Outrage over Tax Authorities’ switch to Microsoft: ‘An insult to the House of Representatives’

Over the past year, there have been frequent debates about the digital independence of the Netherlands, and the call to become independent from American companies is growing louder. The fact that the State Secretary is now announcing that three government services will nevertheless switch to Microsoft has angered Kathmann. ‘They are essentially just ushering us into the American cloud during this caretaker period, and that is really not necessary.’ Bert Hubert, former supervisor of the intelligence services, previously stated that Dutch tax data could end up on American servers via email contact.

Cluster of European companies

Kathmann emphasizes that it would be naive to think that we could be independent of Microsoft tomorrow, but that Dutch and European businesses are capable of a lot.

[…]

According to the State Secretary, this is not possible because there are no comparable European alternatives. Kathmann explains that the intention is precisely not to become dependent on one supplier.

[…]

Stimulate development

Last week, caretaker Prime Minister Dick Schoof called on executives of large companies to become independent from non-European suppliers. Schoof also emphasized in the House two days ago that this is a priority.

[…]

The government can play an important role in stimulating the development of European and Dutch technology. ‘The government is the largest IT buyer in the Netherlands. If it becomes the largest buyer of European and Dutch products, then it will really take off.’

[…]

Source: Kagi Translate

It really is amazing how at a time when everyone is talking about digital sovereignty, the Tax people – responsible for handling extremely sensitive data – decide to give it all to an increasingly untrustworthy ally.

Signal threatens to exit Germany over Chat Control vote – on 14 October we will know if Denmark has managed to turn the EU into a Stasi surveillance state.

The Signal Foundation announced on October 3, 2025, that it would withdraw its encrypted messaging service from Germany and potentially all of Europe if the European Union’s Chat Control proposal passes in an upcoming vote. According to Signal President Meredith Whittaker, the messaging platform faces an existential choice between compromising its encryption integrity and leaving European markets entirely.

The German government holds a decisive position in the October 14, 2025 vote on the Chat Control regulation, which aims to combat child sexual abuse material but requires mass scanning of every message, photo, and video on users’ devices.

[…]

The Chat Control proposal mandates that messaging services like Signal, WhatsApp, Telegram, and Threema scan files on smartphones and end devices without suspicion to detect child sexual abuse material. This scanning would occur before encryption, according to technical documentation from the European Commission’s September 2020 draft on detecting such content in end-to-end encrypted communications.

[…]

The Chat Control vote reveals deep divisions among EU member states on digital privacy and surveillance. Fifteen countries support the proposal, eight oppose it, and several remain undecided as the October 14 deadline approaches.

[…]

Germany’s position remains critical and undecided. Despite expressing concerns about breaking end-to-end encryption at a September 12 Law Enforcement Working Party meeting, the government refrained from taking a definitive stance. This indecision makes Germany’s vote potentially decisive for the proposal’s fate.

Belgium, Italy, and Latvia remain undecided as of September 23, 2025. All three support the proposal’s goals and want to reach agreement given the expiring interim regulation, while remaining formally uncommitted. Italy specifically voices doubts concerning inclusion of new child sexual abuse material in the scope of application. Latvia assesses the text positively but faces uncertainty about political support.

Poland and Austria share the desire for solutions but maintain skepticism about the current proposal’s approach. Greece’s position remains unclear, with the government evaluating technical implementation details. Sweden continues examining the compromise text and working on a position. Slovakia appears in both opposition and undecided categories depending on sources, reflecting the fluid nature of negotiations.

The arithmetic suggests that Germany’s decision could determine whether the required majority materializes. With 15 states supporting and 8 opposing, the undecided nations hold the balance.

[…]

Technical experts have warned that client-side scanning fundamentally undermines encryption security. A comprehensive 2021 study titled “Bugs in Our Pockets: The Risks of Client-Side Scanning,” authored by 14 security researchers including cryptography pioneers Whitfield Diffie and Ronald Rivest, concluded that such systems create serious security and privacy risks for all society.

The researchers explained that scanning every message—whether performed before or after encryption—negates the premise of end-to-end encryption. Instead of breaking Signal’s encryption protocol directly, hostile actors would only need to exploit access granted to the scanning system itself. Intelligence agencies have acknowledged this threat would prove catastrophic for national security, according to the technical consensus outlined in the research paper.

[…]

Germany’s historical experience with mass surveillance through the Stasi secret police informs current privacy advocacy. The country maintained principled opposition to Chat Control during the previous coalition government, though this position became uncertain after the current government took office.

[…]

Denmark assumed the EU Council Presidency on July 1, 2025, and immediately reintroduced Chat Control as a legislative priority. Lawmakers targeted the October 14 adoption date if member states reach consensus. France, which previously opposed the measure, shifted to support the proposal by July 28, 2025, creating momentum for the 15 member states now backing the regulation.

[…]

Source: Signal threatens to exit Germany over Chat Control vote

Mesh-Mapper – Drone Remote ID mapping and mesh alerts

Project Overview

The FAA’s Remote ID requirement, which became mandatory for most drones in September 2023, means every compliant drone now broadcasts its location, pilot position, and identification data via WiFi or Bluetooth. While this regulation was designed for safety and accountability (or to violate pilot privacy 😊), it also creates an unprecedented opportunity for personal airspace awareness.

This project harnesses that data stream to create a comprehensive detection and tracking system that puts you in control of knowing what’s flying overhead. Built around the powerful dual-core Xiao ESP32 S3 microcontroller, the system captures Remote ID transmissions on both WiFi and Bluetooth simultaneously, feeding the data into a sophisticated Python Flask web application that provides real-time visualization and logging.

But here’s where it gets really interesting: the system also integrates with Meshtastic networks, allowing multiple detection nodes to share information across a mesh network. This means you can deploy several ESP32 nodes across your property or neighborhood and have them all contribute to a unified picture of drone activity in your area.

Why This Project Matters

Remote ID represents a fundamental shift in airspace transparency. For the first time, civilian drones are required to broadcast their identity and location continuously. This creates opportunities for:

  • Privacy Protection: Know when drones are operating near your property and who is operating them
  • Personal Security: Monitor activity around sensitive locations like your home or business
  • Community Awareness: Share drone activity information with neighbors through mesh networks
  • Research: Understand drone traffic patterns and airspace usage in your area
  • Education: Learn about wireless protocols and modern airspace management
The key difference between this system and commercial drone detection solutions is that it puts the power of airspace awareness directly in your hands, using affordable hardware and open-source software.

While you can build this project using off-the-shelf ESP32 development boards, I’ve designed custom PCBs specifically optimized for Remote ID detection integration with Meshtastic that are available on my Tindie store. Thank you PCBWay for the awesome boards! The combination of their top-tier quality, competitive pricing, fast turnaround times, and stellar customer service makes PCBWay the go-to choice for professional PCB fabrication, whether you’re prototyping innovative mesh detection systems or scaling up for full production runs.

https://www.pcbway.com/

Step 1: Hardware Preparation

If using custom MeshDetect boards from Tindie:

  • Boards come pre-assembled, flashed, and tested
  • Includes stock 915 MHz and 2.4 GHz antennas
  • USB-C programming interface ready to use

If building with standard ESP32 S3:

  • Xiao ESP32 S3 development board recommended
  • USB-C cable for connection and power
  • Optional upgraded 2.4 GHz antenna for better range
  • Optional Heltec LoRa V3 for Meshtastic integration

Step 2: Firmware Installation

To install the firmware onto your device, follow these steps:

1. Clone the repository:

git clone https://github.com/colonelpanichacks/drone-mesh-mapper

2. Open the project in PlatformIO: you can use the PlatformIO IDE (in VS Code) or the PlatformIO CLI.

3. Select the correct environment:

This project uses the remoteid_mesh_dualcore sketch, which enables both BLE and Wi-Fi functionality. Make sure the platformio.ini environment is set to remoteid_mesh_dualcore.
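For reference, here is a minimal sketch of what the relevant platformio.ini entry might look like. The environment name comes from the project, but the board and framework values below are assumptions; the repository's own platformio.ini is authoritative:

```ini
; Hypothetical sketch, not the project's actual file
[platformio]
default_envs = remoteid_mesh_dualcore

[env:remoteid_mesh_dualcore]
platform = espressif32
board = seeed_xiao_esp32s3   ; Xiao ESP32 S3 (assumed board identifier)
framework = arduino
monitor_speed = 115200       ; matches the firmware's serial output rate
```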

4. Connect your device via USB and upload the firmware:

  • In the IDE, select the remoteid_mesh_dualcore environment and click the “Upload” button.

Step 3: Software Installation

Install Python dependencies:

  • flask>=2.0.0
  • flask-socketio>=5.0.0
  • requests>=2.25.0
  • urllib3>=1.26.0
  • pyserial>=3.5

Run the detection system:

python mapper.py

The web interface automatically opens at http://localhost:5000

Step 4: Device Configuration

1. Connect ESP32 via USB-C

2. Select the correct serial port in the web interface

3. Click “Connect” to start receiving data

4. Configure device aliases and settings as needed

How It Works

  • Core 0 handles WiFi monitoring in promiscuous mode, capturing Remote ID data embedded in beacon frames and processing Neighbor Awareness Networking transmissions on channel 6 by default.
  • Core 1 continuously scans for Bluetooth LE advertisements containing Remote ID data, supporting both BT 4.0 and 5.0 protocols with optimized low-power scanning.
  • Both cores feed detected Remote ID data into a unified JSON output stream via USB serial at 115200 baud. The firmware is based on Cemaxacuter’s excellent Remote ID detection work, enhanced with dual-core operation.
  • The Python Flask web application receives this data and provides real-time visualization on an interactive map, automatic logging to CSV and KML files, FAA database integration for aircraft registration lookups, support for up to 3 ESP32 devices simultaneously, live data streaming via WebSocket, and comprehensive export functions.
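The serial side of that JSON stream is straightforward to consume from Python. Below is a minimal, hedged sketch: the 115200-baud rate comes from the description above, but the field names in the sample data are assumptions, not the firmware's documented schema.

```python
import json

def parse_detection(line: str):
    """Parse one line of the ESP32's unified JSON serial stream.

    Returns the decoded dict, or None for blank lines and non-JSON
    debug output that may be interleaved on the port. The keys the
    real firmware emits may differ from the illustrative ones used
    in examples here.
    """
    line = line.strip()
    if not line:
        return None
    try:
        return json.loads(line)
    except json.JSONDecodeError:
        return None  # skip boot messages and other non-JSON noise
```

Fed from pyserial (already in the dependency list), this would be applied to each `readline()` from a `serial.Serial(port, 115200, timeout=1)` connection, decoded with `errors="replace"`.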

One of the most exciting features is Meshtastic integration. The ESP32 firmware can send compact detection messages over UART to a connected Meshtastic device. This enables:

  • Distributed Monitoring: Multiple detection nodes sharing data across your property or neighborhood
  • Extended Range: Mesh networking extends effective coverage area beyond single-device limitations
  • Redundancy: Multiple nodes provide backup coverage if one device fails
  • Low-Power Operation: Meshtastic’s LoRa radios enable remote deployment without constant power
  • Community Networks: Integration with existing Meshtastic mesh networks for broader awareness

Messages sent over the mesh network use a compact format optimized for LoRa bandwidth constraints.

Features in Action

Real-Time Detection and Mapping

The web interface provides a Google Maps-style view with drone markers showing current aircraft positions, pilot markers indicating operator locations, color-coded flight paths derived from device MAC addresses, signal strength indicators showing detection quality, and automatic cleanup removing stale data after 5 minutes.
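Two of those behaviors are simple to sketch. The snippet below shows one plausible way to derive a stable path color from a device MAC address and to expire tracks after 5 minutes; it is illustrative, not the mapper's actual code.

```python
import hashlib

STALE_AFTER_S = 5 * 60  # the interface drops data not seen for 5 minutes

def mac_to_color(mac: str) -> str:
    """Derive a stable hex color from a device MAC address.

    Hashing the MAC means the same drone always gets the same
    flight-path color across sessions.
    """
    digest = hashlib.md5(mac.lower().encode()).hexdigest()
    return "#" + digest[:6]

def prune_stale(last_seen: dict, now: float) -> dict:
    """Keep only tracks whose last detection is within STALE_AFTER_S.

    `last_seen` maps MAC address -> timestamp in seconds.
    """
    return {mac: t for mac, t in last_seen.items()
            if now - t <= STALE_AFTER_S}
```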

Data Export and Analysis

The system continuously generates multiple data formats including timestamped CSV logs perfect for spreadsheet analysis, Google Earth compatible KML files with flight path visualization featuring individual drone paths color-coded by device and timestamped waypoints, and JSON API providing real-time data access for custom integrations with RESTful endpoints and WebSocket streams.
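As an illustration of the KML side, here is a minimal placemark generator for one drone's path. The longitude-first coordinate order is a KML requirement, but the function name and structure are this sketch's own, not the project's export code.

```python
def flight_path_kml(name: str, points) -> str:
    """Render a flight path as a KML LineString placemark.

    `points` is an iterable of (lon, lat) pairs; KML coordinates
    are written longitude,latitude,altitude with longitude first.
    """
    coords = " ".join(f"{lon},{lat},0" for lon, lat in points)
    return (
        "<Placemark>"
        f"<name>{name}</name>"
        "<LineString><coordinates>"
        f"{coords}"
        "</coordinates></LineString>"
        "</Placemark>"
    )
```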

FAA Database Integration

One of the most powerful features is automatic FAA registration lookup that queries the FAA database using detected Remote ID information, caches results to minimize API calls and improve performance, enriches detection data with aircraft registration details, and includes configurable rate limiting to respect API guidelines.
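The caching and rate limiting described above can be sketched like this. The `fetch` callable stands in for the actual FAA query; the class name and interface are assumptions for illustration, not the project's API.

```python
import time

class RegistryLookup:
    """Cache plus rate-limit wrapper around a registration lookup.

    `fetch` performs the real query (an HTTP call in practice).
    Results are cached per Remote ID so repeat detections cost no
    API calls, and uncached calls are spaced at least
    `min_interval_s` apart to respect API guidelines.
    """

    def __init__(self, fetch, min_interval_s: float = 1.0):
        self._fetch = fetch
        self._min_interval = min_interval_s
        self._cache = {}
        self._last_call = None

    def lookup(self, remote_id: str):
        if remote_id in self._cache:
            return self._cache[remote_id]       # cache hit: no API call
        if self._last_call is not None:
            wait = self._min_interval - (time.monotonic() - self._last_call)
            if wait > 0:
                time.sleep(wait)                # enforce the rate limit
        self._last_call = time.monotonic()
        result = self._fetch(remote_id)
        self._cache[remote_id] = result
        return result
```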

Multi-Device Coordination

The system supports up to three ESP32 devices simultaneously with automatic device discovery and connection, individual device health monitoring, load balancing across multiple receivers, and unified data view combining all devices.

Performance and Optimization

Reception Range

Testing has shown effective detection ranges of 5 km in urban environments, 10 to 15 km in open areas with good antennas, overlapping coverage that eliminates dead zones when using multiple devices, and significant improvement with external antennas compared to built-in antennas.

System Resources

The Python application is optimized for continuous operation with efficient memory management for large datasets, automatic log rotation to prevent disk space issues, WebSocket connection pooling for multiple clients, and configurable data retention policies.

For remote deployments, Meshtastic integration enables off-grid operation, webhook retry logic ensures reliable alert delivery, local data storage prevents data loss during network outages, and bandwidth optimization handles limited connections.

Privacy and Security Considerations

This system puts powerful airspace monitoring capabilities in individual hands, but it’s important to use it responsibly. The detection data contains location information about both drones and their operators, so implement appropriate data retention policies and be aware of local privacy regulations.

For network security, remember that the Flask development server is not production-ready, so consider a reverse proxy for production use and implement authentication for sensitive deployments. Use HTTPS for webhook communications and monitor for unauthorized access attempts.
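As one hedged example of that reverse-proxy advice, an nginx server block along these lines would terminate TLS in front of the Flask app; the hostname, certificate paths, and port are placeholders to adapt to your deployment:

```nginx
server {
    listen 443 ssl;
    server_name mapper.example.org;              # placeholder hostname
    ssl_certificate     /etc/ssl/mapper.crt;     # placeholder cert paths
    ssl_certificate_key /etc/ssl/mapper.key;

    location / {
        proxy_pass http://127.0.0.1:5000;        # mapper.py's Flask server
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Allow the WebSocket upgrade used for live data streaming
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```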

The system enables you to know what’s flying over your property while respecting the legitimate privacy expectations of drone operators. It’s about transparency and awareness, not surveillance.

Conclusion

This Remote ID detection system represents a significant step forward in personal airspace awareness. The combination of dual-core ESP32 processing, comprehensive web-based interface, Meshtastic mesh integration, and professional data export features creates a platform that’s both accessible to makers and powerful enough for serious privacy protection applications.

The availability of custom-designed PCBs on Tindie removes the barrier of hardware design, while the open-source firmware and software ensure complete customizability. Whether you’re building a single-node setup for personal property monitoring or deploying a mesh network for neighborhood-wide awareness, this system provides the foundation for comprehensive drone detection and tracking.

As more drones come online with Remote ID compliance, having your own detection system becomes increasingly valuable for maintaining privacy and situational awareness of your local airspace.

Mesh Mapper Github : https://github.com/colonelpanichacks/drone-mesh-mapper

Mesh Detect Github (all firmware for Mesh Detect boards): https://github.com/colonelpanichacks/mesh-detect

Mesh Detect SMA mount clip for the Mesh Detect board, by OrdoOuroboros: https://www.printables.com/model/1294183-mesh-detect-board-sma-mount

Build Your Own

Ready to start monitoring your local airspace? The combination of affordable hardware, open-source software, and comprehensive documentation makes this project accessible to makers of all skill levels. Start with a single ESP32 device to learn the system, then expand to multiple nodes and Meshtastic integration as your privacy protection needs grow.

The future of airspace monitoring is distributed, affordable, and puts control back in the hands of individuals and communities. Join the movement building these next-generation detection systems!

Source: Mesh-Mapper – Drone Remote ID mapping and mesh alerts – Hackster.io

Detecting Surveillance Cameras With The ESP32 from Colonel.Panic

These days, surveillance cameras are all around us, and they’re smarter than ever. In particular, many of them are running advanced algorithms to recognize faces and scan license plates, compiling ever-greater databases on the movements and lives of individuals. Flock You is a project that aims to, at the very least, catalogue this part of the surveillance state, by detecting these cameras out in the wild.

The system is most specifically set up to detect surveillance cameras from Flock Safety, though it’s worth noting a wide range of companies produce plate-reading cameras and associated surveillance systems these days. The device uses an ESP32 microcontroller to detect these devices, relying on the in-built wireless hardware to do the job. The project can be built on a Oui-Spy device from Colonel Panic, or just by using a standard Xiao ESP32 S3 if so desired. By looking at Wi-Fi probe requests and beacon frames, as well as Bluetooth advertisements, it’s possible for the device to pick up telltale transmissions from a range of these cameras, with various pattern-matching techniques and MAC addresses used to filter results in this regard. When the device finds a camera, it sounds a buzzer notifying the user of this fact.

Meanwhile, if you’re interested in just how prevalent plate-reading cameras really are, you might also find deflock.me interesting. It’s a map of ALPR camera locations all over the world, and you can submit your own findings if so desired. The techniques used in the Flock You project are based on learnings from the DeFlock project. And if you want to join the surveillance state on your own terms, you can always build your own license plate reader instead!

Source: Detecting Surveillance Cameras With The ESP32 | Hackaday

EU becomes a little more fascist and starts collecting fingerprints at the border

The new Entry/Exit System (EES) will start operations on 12 October 2025. European countries using the EES will introduce the system gradually at their external borders. This means that data collection will be gradually introduced at border crossing points with full implementation by 10 April 2026.

Source: What is the EES?

You need to provide your personal data each time you reach the external borders of the European countries using the EES. For more information – see What does progressive start of the EES mean? 
The EES collects, records and stores: 

  • data listed in your travel document(s) (e.g. full name, date of birth, etc.)
  • date and place of each entry and exit 
  • facial image and fingerprints (called ‘biometric data’)
  • whether you were refused entry.

On the basis of the collected biometric data, biometric templates will be created and stored in the shared Biometric Matching Service (see footnote).

If you hold a short-stay visa to enter the Schengen area, your fingerprints will already be stored in the Visa Information System (VIS) and will not be stored again in the EES.

Depending on your particular situation, the system also collects your personal information from:

[…]

If you refuse to provide your biometric data, you will be denied entry into the territory of the European countries using the EES.

Who can access your personal data?

  • Border, visa and immigration authorities in the European countries using the EES for the purpose of verifying your identity and understanding whether you should be allowed to enter or stay on the territory.
  • Law enforcement authorities of the countries using the EES and Europol for law enforcement purposes. 
  • Under strict conditions, your data may be transferred to another country (inside or outside the EU) or international organisation (listed in Annex I of Regulation (EU) 2017/2226 – a UN organisation, the International Organisation for Migration, or the International Committee of the Red Cross) for return (Article 41(1) and (2), and Article 42) and/or law enforcement purposes (Article 41(6)).
  • Transport carriers will only be able to verify whether short-stay visa holders have already used the number of entries authorised by their visa and will not be able to access any further personal data.

[…]

Your data cannot be transferred to third parties – whether public or private entities – except in certain cases. See Who can access your personal data

[…]

So lots of data collected, and loads of people who can access this data – exceptions are absolutely everywhere. And for what? To satisfy far right fantasies about migration running rampant.

US, CA and EU Airlines Sell 5 Billion Plane Ticket Records to the Government For Warrantless Searching

A data broker owned by the country’s major airlines, including American Airlines, United, and Delta, [and Air France, Lufthansa, JetBlue] is selling access to five billion plane ticketing records to the government for warrantless searching and monitoring of peoples’ movements, including by the FBI, Secret Service, ICE, and many other agencies, according to a new contract and other records reviewed by 404 Media.

The contract provides new insight into the scale of the sale of passengers’ data by the Airlines Reporting Corporation (ARC), the airlines-owned data broker. The contract shows ARC’s data includes information related to more than 270 carriers and is sourced through more than 12,800 travel agencies. ARC has previously told the government to not reveal to the public where this passenger data came from, which includes peoples’ names, full flight itineraries, and financial details.

“Americans’ privacy rights shouldn’t depend on whether they bought their tickets directly from the airline or via a travel agency. ARC’s sale of data to U.S. government agencies is yet another example of why Congress needs to close the data broker loophole by passing my bipartisan bill, the Fourth Amendment Is Not For Sale Act,” Senator Ron Wyden told 404 Media in a statement.

ARC is owned and operated by at least eight major U.S. airlines, publicly released documents show. Its board of directors includes representatives from American Airlines, Delta, United, Southwest, Alaska Airlines, JetBlue, and European airlines Air France and Lufthansa, and Canada’s Air Canada. ARC acts as a bridge between airlines and travel agencies, in which it helps with fraud prevention and finds trends in travel data. ARC also sells passenger data to the government as part of what it calls the Travel Intelligence Program (TIP).

TIP is updated every day with the previous day’s ticket sales and can show a person’s paid intent to travel. Government agencies can then search this data by name, credit card, airline, and more.

The new contract shows that ARC has access to much more data than previously reported. Earlier coverage found TIP contained more than one billion records spanning more than 3 years of past and future travel. The new contract says ARC provides the government with “5 billion ticketing records for searching capabilities.”
Screenshots of the documents obtained by 404 Media.
404 Media obtained the contract through a Freedom of Information Act (FOIA) request with the Secret Service. The contract indicates the Secret Service plans to pay ARC $885,000 for access to the data stretching into 2028.

[…]

An ARC spokesperson told 404 Media in an email that TIP “was established by ARC after the September 11, 2001, terrorist attacks and has since been used by the U.S. intelligence and law enforcement community to support national security and prevent criminal activity with bipartisan support. Over the years, TIP has likely contributed to the prevention and apprehension of criminals involved in human trafficking, drug trafficking, money laundering, sex trafficking, national security threats, terrorism and other imminent threats of harm to the United States.”

The spokesperson added “Pursuant to ARC’s privacy policy, consumers may ask ARC to refrain from selling their personal data.”

After media coverage and scrutiny from Senator Wyden’s office of the little-known data selling, ARC finally registered as a data broker in the state of California in June. Senator Wyden previously said it appeared ARC had been in violation of Californian law for not registering while selling airline customers’ data for years.

Source: Airlines Sell 5 Billion Plane Ticket Records to the Government For Warrantless Searching

Supposedly you can opt out by emailing them at privacy@arccorp.com

Danish Minister of Justice and chief architect of the current Chat Control proposal, Peter Hummelgaard:


“We must break with the totally erroneous perception that it is everyone’s civil liberty to communicate on encrypted messaging services.”

Share your thoughts via https://fightchatcontrol.eu/, or to jm@jm.dk directly.

Source: https://www.ft.dk/samling/20231/almdel/REU/spm/1426/index.htm

In the answers he cites “but we must protect the children” – as soon as that argument is trotted out have a good look at what they are taking away from you. After all, who can be against the safety of children? But blanket surveillance is bad for children and awful for society. If you know you are being watched, you can’t speak freely, you can’t voice your opinion and democracy cannot function. THAT is bad for the children.

There is something rotten in the state of Denmark. Big Brother, 1984, they were warnings, not manuals.

Source: https://mastodon.social/@chatcontrol/115204439983078498

More discussion: https://www.reddit.com/r/europe/comments/1nhdtoz/danish_minister_of_justice_we_must_break_with_the/

PS I would not buy a used camel from this creep.

Swiss government may disable privacy tech, stoking fears of mass surveillance

The Swiss government could soon require service providers with more than 5,000 users to collect government-issued identification, retain subscriber data for six months and, in many cases, disable encryption.

The proposal, which is not subject to parliamentary approval, has alarmed privacy and digital-freedoms advocates worldwide because of how it will destroy anonymity online, including for people located outside of Switzerland.

A large number of virtual private network (VPN) companies and other privacy-preserving firms are headquartered in the country because it has historically had liberal digital privacy laws alongside its famously discreet banking ecosystem.

Proton, which offers secure and end-to-end encrypted email along with an ultra-private VPN and cloud storage, announced on July 23 that it is moving most of its physical infrastructure out of Switzerland due to the proposed law.

The company is investing more than €100 million in the European Union, the announcement said, and plans to help develop a “sovereign EuroStack for the future of our home continent.” Switzerland is not a member of the EU.

Proton said the decision was prompted by the Swiss government’s attempt to “introduce mass surveillance.”

Proton founder and CEO Andy Yen told Radio Télévision Suisse (RTS) that the suggested regulation would be illegal in the EU and United States.

“The only country in Europe with a roughly equivalent law is Russia,” Yen said.

[…]

Internet users would no longer be able to register for a service with just an email address or anonymously and would instead have to provide their passport, driver’s license or another official ID to subscribe, said Chloé Berthélémy, senior policy adviser at European Digital Rights (eDRI), an association of civil and human rights organizations from across Europe.

The regulation also includes a mass data retention obligation requiring that service providers keep users’ email addresses, phone numbers and names along with IP addresses and device port numbers for six months, Berthélémy said. Port numbers are unique identifiers that send data to a specific application or service on a computer.

All authorities would need to do to obtain the data, Berthélémy said, is make a simple request that would circumvent existing legal control mechanisms such as court orders.

“The right to anonymity is supporting a very wide range of communities and individuals who are seeking safety online,” Berthélémy said.

“In a world where we have increasing attacks from governments on specific minority groups, on human rights defenders, journalists, any kind of watchdogs and anyone who holds those in power accountable, it’s very crucial that we … preserve our privacy online in order to do those very crucial missions.”

Source: Swiss government looks to undercut privacy tech, stoking fears of mass surveillance | The Record from Recorded Future News

Proton Mail Suspended Journalist Accounts at the Request of an Unspecified Cybersecurity Agency, Without Any Process

The company behind the Proton Mail email service, Proton, describes itself as a “neutral and safe haven for your personal data, committed to defending your freedom.”

But last month, Proton disabled email accounts belonging to journalists reporting on security breaches of various South Korean government computer systems following a complaint by an unspecified cybersecurity agency. After a public outcry and multiple weeks of waiting, the journalists’ accounts were eventually reinstated — but the reporters and editors involved still want answers on how and why Proton decided to shut down the accounts in the first place.

Martin Shelton, deputy director of digital security at the Freedom of the Press Foundation, highlighted that numerous newsrooms use Proton’s services as alternatives to something like Gmail “specifically to avoid situations like this,” pointing out that “While it’s good to see that Proton is reconsidering account suspensions, journalists are among the users who need these and similar tools most.” Newsrooms like The Intercept, the Boston Globe, and the Tampa Bay Times all rely on Proton Mail for emailed tip submissions.

Shelton noted that perhaps Proton should “prioritize responding to journalists about account suspensions privately, rather than when they go viral.”

On Reddit, Proton’s official account stated that “Proton did not knowingly block journalists’ email accounts” and that the “situation has unfortunately been blown out of proportion.” Proton did not respond to The Intercept’s request for comment.

The two journalists whose accounts were disabled were working on an article published in the August issue of the long-running hacker zine Phrack. The story described how a sophisticated hacking operation — what’s known in cybersecurity parlance as an APT, or advanced persistent threat — had wormed its way into a number of South Korean computer networks, including those of the Ministry of Foreign Affairs and the military Defense Counterintelligence Command, or DCC.

The journalists, who published their story under the names Saber and cyb0rg, describe the hack as being consistent with the work of Kimsuky, a notorious North Korean state-backed APT sanctioned by the U.S. Treasury Department in 2023.

As they pieced the story together, emails viewed by The Intercept show that the authors followed cybersecurity best practices and conducted what’s known as responsible disclosure: notifying affected parties that a vulnerability has been discovered in their systems prior to publicizing the incident.

Saber and cyb0rg created a dedicated Proton Mail account to coordinate the responsible disclosures, then proceeded to notify the impacted parties, including the Ministry of Foreign Affairs and the DCC, and also notified South Korean cybersecurity organizations like the Korea Internet and Security Agency, and KrCERT/CC, the state-sponsored Computer Emergency Response Team. According to emails viewed by The Intercept, KrCERT wrote back to the authors, thanking them for their disclosure.

A note on cybersecurity jargon: CERTs are agencies made up of cybersecurity experts who specialize in handling and responding to security incidents. CERTs exist in over 70 countries — some countries have multiple CERTs, each specializing in a particular field such as the financial sector — and may be government-sponsored or private organizations. They adhere to a set of formal technical standards, such as being expected to react to reported cybersecurity threats and security incidents. A high-profile example of a CERT agency in the U.S. is the Cybersecurity and Infrastructure Security Agency (CISA), which has recently been gutted by the Trump administration.

A week after the print issue of Phrack came out, and a few days before the digital version was released, Saber and cyb0rg found that the Proton account they had set up for the responsible disclosure notifications had been suspended. A day later, Saber discovered that his personal Proton Mail account had also been suspended. Phrack posted a timeline of the account suspensions at the top of the published article, and later highlighted the timeline in a viral social media post. Both accounts were suspended owing to an unspecified “potential policy violation,” according to screenshots of account login attempts reviewed by The Intercept.

The suspension notice instructed the authors to fill out Proton’s abuse appeals form if they believed the suspension was in error. Saber did so, and received a reply from a member of Proton Mail’s Abuse Team who went by the name Dante.

In an email viewed by The Intercept, Dante told Saber that their account “has been disabled as a result of a direct connection to an account that was taken down due to violations of our terms and conditions while being used in a malicious manner.” Dante also provided a link to Proton’s terms of service, going on to state, “We have clearly indicated that any account used for unauthorized activities, will be sanctioned accordingly.” The response concluded by stating, “We consider that allowing access to your account will cause further damage to our service, therefore we will keep the account suspended.”

On August 22, a Phrack editor reached out to Proton, writing that no hacked data had passed through the suspended email accounts, and asked if the account suspension incident could be deescalated. After receiving no response from Proton, the editor sent a follow-up email on September 6. Proton once again did not reply.

On September 9, the official Phrack X account made a post asking Proton’s official account why Proton was “cancelling journalists and ghosting us,” adding: “need help calibrating your moral compass?” The post quickly went viral, garnering over 150,000 views.

Proton’s official account replied the following day, stating that Proton had been “alerted by a CERT that certain accounts were being misused by hackers in violation of Proton’s Terms of Service. This led to a cluster of accounts being disabled. Our team is now reviewing these cases individually to determine if any can be restored.” Proton then stated that they “stand with journalists” but “cannot see the content of accounts and therefore cannot always know when anti-abuse measures may inadvertently affect legitimate activism.”

Proton did not publicly specify which CERT had alerted them, and didn’t answer The Intercept’s request for the name of the specific CERT which had sent the alert. KrCERT also did not reply to The Intercept’s question about whether they were the CERT that had sent the alert to Proton.

Later in the day, Proton’s founder and CEO Andy Yen posted on X that the two accounts had been reinstated. Neither Yen nor Proton explained why the accounts had been reinstated, whether they had been found not to violate the terms of service after all, why they had been suspended in the first place, or why a member of the Proton Abuse Team had insisted during Saber’s appeal that the accounts violated the terms of service.

Phrack noted that the account suspensions created a “real impact to the author. The author was unable to answer media requests about the article.” The co-authors, Phrack pointed out, were also in the midst of the responsible disclosure process and working together with the various affected South Korean organizations to help fix their systems. “All this was denied and ruined by Proton,” Phrack stated.

Phrack editors said that the incident leaves them “concerned what this means to other whistleblowers or journalists. The community needs assurance that Proton does not disable accounts unless Proton has a court order or the crime (or ToS violation) is apparent.”

Source: Proton Mail Suspended Journalist Accounts at Request of Cybersecurity Agency

If Proton can’t view the content of accounts, how did Proton verify a random CERT’s claims before deciding to close the accounts? And how did Proton review the accounts to see if they could be restored? Is it Proton policy to decide that people are guilty until proven innocent? This attitude justifies people blowing up about this incident – it shows how vulnerable users are to the random whims of Proton instead of any kind of transparent, diligent process.

We beat Chat Control but the fight isn’t over – another surveillance law that mandates companies to save user data for Europol is making its way right now and there is less than 24 hours to give the EU feedback!

Please follow this link to the questionnaire and help save our future – otherwise total surveillance like never seen before will strip you of every bit of privacy and, later, the fundamental rights you have as an EU citizen.

++++++++++++++++++++++++++++

Information

The previous data retention law was declared illegal in 2014 by CJEU (EU’s highest court) for being mass surveillance and violating human rights.

Since most EU states refused to follow the court order and the EU Commission refused to enforce it, the CJEU recently caved to political pressure and changed its stance on mass surveillance, making it legal.

And that instantly spawned this data retention law, which is more far-reaching than the original that was deemed illegal. Here you can read the entire plan the EU is following. Briefly:

- they want to sanction unlicensed messaging apps, hosting services and websites that don’t spy on users (and impose criminal penalties)
- mandatory data retention: all your online activity must be tied to your identity
- the end of privacy-friendly VPNs and other services
- cooperation with hardware manufacturers to ensure lawful access by design (backdoors for phones and computers)
- prison for everybody who doesn’t comply

If you don’t know what the best options for some questions are, privacy-wise, check out this answering guide by EDRi (the European Digital Rights organization).

Source: https://www.reddit.com/r/BuyFromEU/comments/1neecov/we_beat_chat_control_but_the_fight_isnt_over/

18 popular VPNs turn out to belong to 3 different owners – and contain insecurities as well

A new peer-reviewed study alleges that 18 of the 100 most-downloaded virtual private network (VPN) apps on the Google Play Store are secretly connected in three large families, despite claiming to be independent providers. The paper doesn’t indict any of our picks for the best VPN, but the services it investigates are popular, with 700 million collective downloads on Android alone.

The study, published in the journal of the Privacy Enhancing Technologies Symposium (PETS), doesn’t just find that the VPNs in question failed to disclose behind-the-scenes relationships, but also that their shared infrastructures contain serious security flaws. Well-known services like Turbo VPN, VPN Proxy Master and X-VPN were found to be vulnerable to attacks capable of exposing a user’s browsing activity and injecting corrupted data.

Titled “Hidden Links: Analyzing Secret Families of VPN apps,” the paper was inspired by an investigation by VPN Pro, which found that several VPN companies were each selling multiple apps without identifying the connections between them. This spurred the “Hidden Links” researchers to ask whether the relationships between secretly co-owned VPNs could be documented systematically.

[…]

Family A consists of Turbo VPN, Turbo VPN Lite, VPN Monster, VPN Proxy Master, VPN Proxy Master Lite, Snap VPN, Robot VPN and SuperNet VPN. These were found to be shared between three providers — Innovative Connecting, Lemon Clove and Autumn Breeze. All three have been linked to Qihoo 360, a firm based in mainland China and identified as a “Chinese military company” by the US Department of Defense.

Family B consists of Global VPN, XY VPN, Super Z VPN, Touch VPN, VPN ProMaster, 3X VPN, VPN Inf and Melon VPN. These eight services, which are shared between five providers, all use the same IP addresses from the same hosting company.

Family C consists of X-VPN and Fast Potato VPN. Although these two apps each come from a different provider, the researchers found that both used very similar code and included the same custom VPN protocol.

If you’re a VPN user, this study should concern you for two reasons. The first problem is that companies entrusted with your private activities and personal data are not being honest about where they’re based, who owns them or who they might be sharing your sensitive information with. Even if their apps were all perfect, this would be a severe breach of trust.

But their apps are far from perfect, which is the second problem. All 18 VPNs across all three families use the Shadowsocks protocol with a hard-coded password, which makes them susceptible to takeover from both the server side (which can be used for malware attacks) and the client side (which can be used to eavesdrop on web activity).
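To see why a hard-coded password is so dangerous here, consider how classic Shadowsocks turns a password into an encryption key. Below is a minimal sketch of the legacy EVP_BytesToKey-style derivation (MD5, no salt) used by older Shadowsocks implementations — the password string is a made-up placeholder, not taken from any of the apps in the study:

```python
import hashlib

def shadowsocks_key(password: bytes, key_len: int = 32) -> bytes:
    """Legacy Shadowsocks key derivation (OpenSSL EVP_BytesToKey with MD5, no salt).

    With no per-user salt, everyone using the same password derives the
    exact same key.
    """
    buf, prev = b"", b""
    while len(buf) < key_len:
        # Each round hashes the previous digest concatenated with the password.
        prev = hashlib.md5(prev + password).digest()
        buf += prev
    return buf[:key_len]

# Every install of an app that ships one hard-coded password shares one key,
# so anyone who extracts it from the app can decrypt other users' traffic.
key_a = shadowsocks_key(b"hardcoded-example-password")
key_b = shadowsocks_key(b"hardcoded-example-password")
assert key_a == key_b and len(key_a) == 32
```

The point is not the MD5 details but the trust model: the password is the only secret in this setup, so baking a single password into an app distributed to millions of users leaves nothing secret at all.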

[…]

Source: Researchers find alarming overlaps among 18 popular VPNs

So Spotify Public Links Now Show Your Personal Information. You Need to Disable Spotify DMs To Get Rid Of It.

Spotify wants to be yet another messaging platform, but its new DM system has a quirk that makes me hesitant to recommend it. Spotify used to be a non-identity-based platform, but things changed once it added messaging. Now, the Spotify DM system is attaching account information to song links and putting it in front of users’ eyes. That means it can accidentally leak the name and profile picture of whoever shared a link, even if they didn’t intend to give out their account information. Thankfully, there’s a way to make links more private, and to disable Spotify DMs altogether.

How Spotify is accidentally leaking users’ information

It all starts with tracking URLs. Many major companies on the web use these: they embed information at the end of a URL to track where clicks on it came from. Which website, which page, or, in Spotify’s case, which user. If you’ve generated a Share link for a song or playlist in the past, it contained your user identity string at the end. And when someone accessed and acted on that link, by adding the song or playing it, your account information was saved to their account as a connection of sorts. Maybe a little invasive, but because users couldn’t do much with that information, it was mostly just a way for Spotify to track how often people were sharing music with each other.

Before, this happened in the background and no one really cared. But with the new Spotify DM feature, connections made via tracking links are suddenly being put front and center right before users’ eyes. As spotted by Reddit user u/sporoni122, these connections are now showing up in a “Suggested” section when using Spotify DMs, even if you just happened to click on a public link once and never heard of the person who shared it. Alternatively, you might have shared a link in the past, and could be shown account information for people who clicked on it.

Even if an account is public, I could see how this would be annoying. Imagine you share a song in a Discord server where you go by an anonymous name, but someone clicks on it and finds your Spotify account, where you might go by your real name. Bam, they suddenly know who you are.

Reddit user u/Reeceeboii added that Spotify is using this URL tracking behavior to populate a list of songs and playlists shared between two users even if they happened via third-party messaging services like WhatsApp.

So, if you don’t want others to find your Spotify account through your shared songs, what do you do? Well, before posting in anonymous communities like Discord or X, try cleaning up your links first.

My colleagues and I have previously written about how you can remove tracking information from a URL automatically on iPhone, how you can use a Mac app to clean links without any effort, or how you can use an all-in-one extension to get the job done regardless of platform. You can also use a website like Link Cleaner to clean up your links.

Or you can take the manual approach. In your Spotify link, remove everything at the end starting with the question mark.

So this tracked link:

https://open.spotify.com/playlist/74BUi79BzFKW7IVJBShrFD?si=28575ba800324

Becomes this clean link:

https://open.spotify.com/playlist/74BUi79BzFKW7IVJBShrFD

Here, the part starting with “si=” is your identifier. Of course, if it’s a playlist you’re sharing, it will still show your name and your profile picture—that’s how the platform has always worked. So if you want to stay truly anonymous, you’ll want to keep your playlists private.
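If you clean links often, the manual approach is easy to script. A minimal Python sketch that drops the entire query string — which is where Spotify puts the “si” tracking identifier:

```python
from urllib.parse import urlsplit, urlunsplit

def clean_link(url: str) -> str:
    """Return the URL with its query string and fragment removed."""
    parts = urlsplit(url)
    # Rebuild the URL keeping only scheme, host, and path.
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

print(clean_link("https://open.spotify.com/playlist/74BUi79BzFKW7IVJBShrFD?si=28575ba800324"))
# → https://open.spotify.com/playlist/74BUi79BzFKW7IVJBShrFD
```

The same function works on links from other services, though some sites use query parameters for real functionality (like video timestamps), so check that the cleaned link still goes where you expect.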

How to disable Spotify DMs

If you don’t see yourself using Spotify DMs, it might also be a good idea to just get rid of them entirely. You’ll probably still want to remove tracking information from your URLs before sharing, just for due diligence. But if you don’t want to worry about getting DMs on Spotify or having your account show up as a Suggested contact to strangers, you should also go to Settings > Privacy and social > Social features and disable Messages. That’ll opt you out of the DM feature altogether.

Disable Spotify DM.
Credit: Michelle Ehrhardt

Source: If You’ve Ever Shared a Spotify Link Publicly, You Need to Disable Spotify DMs

Age verification legislation is tanking traffic to sites that comply, and rewarding those that don’t

A new report suggests that the UK’s age verification measures may be having unforeseen knock-on effects on web traffic, with the real winners being sites that flout the law entirely.

[…]

Sure, there are ways around this if you’d rather not feed your personal data to a platform’s third-party age verification vendor. However, sites are seeing more significant consequences beyond just locking you out of your DMs. For a start, The Washington Post reports that web traffic to pornography sites implementing age verification has taken a totally predictable hit—but those flouting the new age check requirements have seen traffic as much as triple compared to the same time last year.

The Washington Post looked at the 90 most visited porn sites based on UK visitor data from Similarweb. Of the 90 total sites, 14 hadn’t yet deployed ‘scan your face’ age checks. The publication found that while traffic from British IP addresses to sites requiring age verification had cratered, the 14 sites without age checks “have been rewarded with a flood of traffic” from UK-based users.

It’s worth noting that VPN usage might distort the location data of users. Still, such a surge of traffic likely brings with it a surge in income in the form of ad revenue. Ofcom, the UK’s government-approved communications regulator overseeing everything from TV to the internet, may have something to say about that, though. Meanwhile, sites that comply with the rules are not only losing out on ad revenue, but are also expected to pay for the legally required age verification services on top.

[…]

Alright, stop snickering about the mental image of someone perusing porn sites professionally, and let me tell you why this is important. You may have already read that while a lot of Brits support the age verification measures broadly speaking, a sizable portion feels they’ve been implemented poorly. Indeed, a lot of the aforementioned sites that complied with the law also criticised it by linking to a petition seeking its repeal. The UK government has responded to this petition by saying it has “no plans to repeal the Online Safety Act” despite, at time of writing, over 500,000 signatures urging it to do just that.

[…]

Source: Age verification legislation is tanking traffic to sites that comply, and rewarding those that don’t | PC Gamer

Of course age verification isn’t just hitting porn sites. It is also hitting LGBTQ+ sites, public health forums, conflict reporting and global journalism and more.

And there is no way to do Age Verification privately.

Europol wants to keep all data forever for law enforcement, says unnamed(!) official. E.U. Court of Human Rights backed encryption as basic to privacy rights in 2024 and now Big Brother Chat Control is on the agenda again (EU consultation feedback link at end)

While some American officials continue to attack strong encryption as an enabler of child abuse and other crimes, a key European court has upheld it as fundamental to the basic right to privacy.

[…]

In the Russian case, the users relied on Telegram’s optional “secret chat” functions, which are also end-to-end encrypted. Telegram had refused to break into chats of a handful of users, telling a Moscow court that it would have to install a back door that would work against everyone. It lost in Russian courts but did not comply, leaving it subject to a ban that has yet to be enforced.

The European court backed the Russian users, finding that law enforcement having such blanket access “impairs the very essence of the right to respect for private life” and therefore would violate Article 8 of the European Convention, which enshrines the right to privacy except when it conflicts with laws established “in the interests of national security, public safety or the economic well-being of the country.”

The court praised end-to-end encryption generally, noting that it “appears to help citizens and businesses to defend themselves against abuses of information technologies, such as hacking, identity and personal data theft, fraud and the improper disclosure of confidential information.”

In addition to prior cases, the judges cited work by the U.N. human rights commissioner, who came out strongly against encryption bans in 2022, saying that “the impact of most encryption restrictions on the right to privacy and associated rights are disproportionate, often affecting not only the targeted individuals but the general population.”

High Commissioner Volker Türk said he welcomed the ruling, which he promoted during a recent visit to tech companies in Silicon Valley. Türk told The Washington Post that “encryption is a key enabler of privacy and security online and is essential for safeguarding rights, including the rights to freedom of opinion and expression, freedom of association and peaceful assembly, security, health and nondiscrimination.”

[…]

Even as the fight over encryption continues in Europe, police officials there have talked about overriding end-to-end encryption to collect evidence of crimes other than child sexual abuse — or any crime at all, according to an investigative report by the Balkan Investigative Reporting Network, a consortium of journalists in Southern and Eastern Europe.

“All data is useful and should be passed on to law enforcement, there should be no filtering … because even an innocent image might contain information that could at some point be useful to law enforcement,” an unnamed Europol police official said in 2022 meeting minutes released under a freedom of information request by the consortium.

Source: E.U. Court of Human Rights backs encryption as basic to privacy rights – The Washington Post

An ‘unnamed’ Europol police official is peak irony in this context.

Remember to leave your feedback where you can, in this case: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/14680-Impact-assessment-on-retention-of-data-by-service-providers-for-criminal-proceedings-/public-consultation_en