India demands smartphone makers install government app

India’s government has issued a directive that requires all smartphone manufacturers to install a government app on every handset in the country and has given them 90 days to get the job done – and to ensure users can’t remove the code.

The app is called “Sanchar Saathi” and is a product of India’s Department of Telecommunications (DoT).

On Google Play and Apple’s App Store, the Department describes the app as “a citizen centric initiative … to empower mobile subscribers, strengthen their security and increase awareness about citizen centric initiatives.”

The app does those jobs by allowing users to report incoming calls or messages – even on WhatsApp – they suspect are attempts at fraud. Users can also report incoming calls for which caller ID reveals the +91 country code, as India’s government thinks that’s an indicator of a possible illegal telecoms operator.

Users can also block their device if they lose it or suspect it was stolen, an act that will prevent it from working on any mobile network in India.

Another function allows lookup of IMEI numbers so users can verify if their handset is genuine.
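The lookup itself runs against the DoT’s central equipment register, which is server-side. IMEIs do, however, carry a built-in format check anyone can compute: the fifteenth digit is a Luhn check digit over the first fourteen. As background, here is a minimal sketch of that format-level check only – passing it says nothing about whether a handset is cloned or blocked, which is what the app’s registry lookup determines.

```python
def imei_checksum_ok(imei: str) -> bool:
    """Format-level IMEI check: 15 digits, the last a Luhn check digit.

    Passing this says nothing about an IMEI being genuine or unblocked;
    that requires the registry lookup Sanchar Saathi performs.
    """
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    for i, ch in enumerate(imei):
        d = int(ch)
        if i % 2 == 1:          # double every second digit, per Luhn
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(imei_checksum_ok("490154203237518"))  # True: a well-known example IMEI
print(imei_checksum_ok("490154203237519"))  # False: corrupted check digit
```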

Spam and scams delivered by calls or TXTs are pervasive around the world, and researchers last year found that most Indian netizens receive three or more dodgy communiqués every day. This app has obvious potential to help reduce such attacks.

An announcement from India’s government states that cybersecurity at telcos is another reason for the requirement to install the app.

“Spoofed/ Tampered IMEIs in telecom network leads to situation where same IMEI is working in different devices at different places simultaneously and pose challenges in action against such IMEIs,” according to the announcement. “India has [a] big second-hand mobile device market. Cases have also been observed where stolen or blacklisted devices are being re-sold. It makes the purchaser abettor in crime and causes financial loss to them. The blocked/blacklisted IMEIs can be checked using Sanchar Saathi App.”

That motive is likely the reason India has required handset-makers to install Sanchar Saathi on existing handsets with a software update.

The directive also requires the app to be pre-installed, “visible, functional, and enabled for users at first setup.” Manufacturers may not disable or restrict its features and “must ensure the App is easily accessible during device setup.”

Those functions mean India’s government will soon have a means of accessing personal info on hundreds of millions of devices.

Apar Gupta, founder and director of India’s Internet Freedom Foundation, has criticized the directive on grounds that Sanchar Saathi isn’t fit for purpose. “Rather than resorting to coercion and mandating it to be installed the focus should be on improving it,” he wrote.

[…]

Source: India demands smartphone makers install government app • The Register

Canadian data order risks blowing a hole in EU sovereignty

A Canadian court has ordered French cloud provider OVHcloud to hand over customer data stored in Europe, potentially undermining the provider’s claims about digital sovereignty protections.

According to documents seen by The Register, the Royal Canadian Mounted Police (RCMP) issued a Production Order in April 2024 demanding subscriber and account data linked to four IP addresses on OVH servers in France, the UK, and Australia as part of a criminal investigation.

OVH has a Canadian arm, which was the jumping-off point for the courts, but OVH Group is a French company, so the data in France should be protected from prying eyes. Or perhaps not.

Rather than using established Mutual Legal Assistance Treaties (MLAT) between Canada and France, the RCMP sought direct disclosure through OVH’s Canadian subsidiary.

This puts OVH in an impossible position. French law prohibits such data sharing outside official treaties, with penalties up to €90,000 and six months imprisonment. But refusing the Canadian order risks contempt of court charges.

[…]

Under Trump 2.0, economic and geopolitical relations between Europe and the US have become increasingly volatile, something Microsoft acknowledged in April.

Against this backdrop, concerns about the US CLOUD Act are growing. Through the legislation, US authorities can request – via warrant or subpoena – access to data hosted by US corporations regardless of where in the world that data is stored. Hyperscalers claim they have received no such requests with respect to European customers, but the risk remains and European cloud providers have used this as a sales tactic by insisting digital information they hold is protected.

In the OVH case, if Canadian authorities are able to force access to data held on European servers rather than navigate official channels (for example, international treaties), the implications could be severe.

[…]

Earlier this week, GrapheneOS announced it no longer had active servers in France and was in the process of leaving OVH.

The privacy-focused mobile outfit said, “France isn’t a safe country for open source privacy projects. They expect backdoors in encryption and for device access too. Secure devices and services are not going to be allowed. We don’t feel safe using OVH for even a static website with servers in Canada/US via their Canada/US subsidiaries.”

In August, an OVH legal representative crowed over the admission by Microsoft that it could not guarantee data sovereignty.

It would be deeply ironic if OVH were unable to guarantee the same thing because the company has a subsidiary in Canada.

[…]

Source: Canadian data order risks blowing a hole in EU sovereignty • The Register

That didn’t take long: A few days after Chat Control, European Parliament calls for Age Verification on Social Media, 16+

On Wednesday, MEPs adopted a non-legislative report by 483 votes in favour, 92 against and with 86 abstentions, expressing deep concern over the physical and mental health risks minors face online and calling for stronger protection against the manipulative strategies that can increase addiction and that are detrimental to children’s ability to concentrate and engage healthily with online content.


Minimum age for social media platforms

To help parents manage their children’s digital presence and ensure age-appropriate online engagement, Parliament proposes a harmonised EU digital minimum age of 16 for access to social media, video-sharing platforms and AI companions, while allowing 13- to 16-year-olds access with parental consent.

Expressing support for the Commission’s work to develop an EU age verification app and the European digital identity (eID) wallet, MEPs insist that age assurance systems must be accurate and preserve minors’ privacy. Such systems do not relieve platforms of their responsibility to ensure their products are safe and age-appropriate by design, they add.

To incentivise better compliance with the EU’s Digital Services Act (DSA) and other relevant laws, MEPs suggest senior managers could be made personally liable in cases of serious and persistent non-compliance, with particular respect to protection of minors and age verification.

[…]

According to the 2025 Eurobarometer, over 90% of Europeans believe action to protect children online is a matter of urgency, not least in relation to social media’s negative impact on mental health (93%), cyberbullying (92%) and the need for effective ways to restrict access to age-inappropriate content (92%).

Member states are starting to take action and responding with measures such as age limits and verification systems.

Source: Children should be at least 16 to access social media, say MEPs | News | European Parliament

Expect to see mandatory surveillance on social media (whatever they define that to be) soon, as it is clearly “risky”.

The problem is real, but age verification is not the way to solve the problem. Rather, it will make it much, much worse as well as adding new problems entirely.

See also: https://www.linkielist.com/?s=age+verification&submit=Search

See also: European Council decides to implement Mass Surveillance and Age Verification through law protecting children from online abuse

Welcome to a new fascist, thought-controlled Europe, heralded by Denmark.

Chat Control: EU lawmakers finally agree on the “voluntary” scanning of your private chats

[…] The EU Council has finally reached an agreement on the controversial Child Sexual Abuse Regulation (CSAR) after more than three years of failed attempts.

Nicknamed Chat Control by its critics, the proposal has kept cryptographers, technologists, encrypted service providers, and privacy experts alike in turmoil since its inception.

Presidency after presidency, the bill has taken many shapes. But its most controversial feature is an obligation for all messaging service providers operating in the EU – including those using end-to-end encryption – to scan their users’ private chats on the lookout for child sexual abuse material (CSAM).

At the beginning of the month, the Danish Presidency decided to change its approach with a new compromise text that makes chat scanning voluntary instead. That turned out to be a winning move, with the proposal managing to reach an agreement in the Council on Wednesday, November 26, 2025.

Privacy experts are unlikely to celebrate, though. The decision came a few days after a group of scientists wrote yet another open letter warning that the latest text still “brings high risks to society.” That’s after other privacy experts deemed the new proposal a “political deception” rather than an actual fix.

The EU Council is now preparing to start negotiations with the European Parliament, hoping to agree on the final terms of the regulation.

What we know about the Council agreement

As per the EU Council announcement, the new law imposes a series of obligations on digital companies. Under the new rules, online service providers will be required to assess how their platforms could be misused and, based on the results, may need to “implement mitigating measures to counter that risk,” the Council notes.

Source: Chat Control: EU lawmakers finally agree on the voluntary scanning of your private chats | TechRadar

A “risk mitigation obligation” can be stretched to justify almost anything, and to oblige spying in whatever services the EU says pose a “risk”.

Considering the whole proposal was shot down several times in past years and even last month, using a back-door rush to push this through is not how a democracy is supposed to function at all. And this is how fascism grips its iron claws. What is going on in Denmark?

European Council decides to implement Mass Surveillance and Age Verification through law protecting children from online abuse

[…]

Under the new rules, online service providers will be required to assess the risk that their services could be misused for the dissemination of child sexual abuse material or for the solicitation of children. On the basis of this assessment, they will have to implement mitigating measures to counter that risk. Such measures could include making available tools that enable users to report online child sexual abuse, to control what content about them is shared with others and to put in place default privacy settings for children.

Member states will designate national authorities (‘coordinating and other competent authorities’) responsible for assessing these risk assessments and mitigating measures, with the possibility of obliging providers to carry out mitigating measures.

[…]

The Council also wants to make permanent a currently temporary measure that allows companies to – voluntarily – scan their services for child sexual abuse. At present, providers of messaging services, for instance, may voluntarily check content shared on their platforms for online child sexual abuse material.

[Note here: if it is deemed “risky” then the voluntary part is scrubbed and it becomes mandatory. Anything can be called “risky” very easily (just look at the data slurping that goes on in Terms of Service under the phrase “improving our product”).]

The new law provides for the setting up of a new EU agency, the EU Centre on Child Sexual Abuse, to support the implementation of the regulation.

The EU Centre will assess and process the information supplied by the online providers about child sexual abuse material identified on services, and will create, maintain and operate a database for reports submitted to it by providers. It will further support the national authorities in assessing the risk that services could be used for spreading child sexual abuse material.

The Centre is also responsible for sharing companies’ information with Europol and national law enforcement bodies. Furthermore, it will establish a database of child sexual abuse indicators, which companies can use for their voluntary activities.

Source: Child sexual abuse: Council reaches position on law protecting children from online abuse – Consilium

The article does not mention how you can find out whether someone is a child: that is age verification. Which comes with huge rafts of problems, such as censorship (there goes the LGBTQ crowd!), hacks (see Discord) that steal all the government IDs used to verify ages, and of course the ways people find to circumvent age verification (VPNs, which increase internet traffic; meme pictures of Donald Trump), which make people behave in more unpredictable ways, thus harming the kids this is supposed to protect.

Of course, this law has been shot down several times in the past 3 years by the EU, but that didn’t stop Denmark from finding a way to implement it nonetheless in a back-door, shotgun kind of way.

Age Verification, Estimation, Assurance, Oh My! A Guide To The Terminology

If you’ve been following the wave of age-gating laws sweeping across the country and the globe, you’ve probably noticed that lawmakers, tech companies, and advocates all seem to be using different terms for what sounds like the same thing. Age verification, age assurance, age estimation, age gating—they get thrown around interchangeably, but they technically mean different things. And those differences matter a lot when we’re talking about your rights, your privacy, your data, and who gets to access information online.

[click the source link below to read the different definitions – ed]

Why This Confusion Matters

Politicians and tech companies love using these terms interchangeably because it obscures what they’re actually proposing. A law that requires “age assurance” sounds reasonable and moderate. But if that law defines age assurance as requiring government ID verification, it’s not moderate at all—it’s mass surveillance. Similarly, when Instagram says it’s using “age estimation” to protect teens, that sounds privacy-friendly. But when their estimation fails and forces you to upload your driver’s license instead, the privacy promise evaporates.

Here’s the uncomfortable truth: most lawmakers writing these bills have no idea how any of this technology actually works. They don’t know that age estimation systems routinely fail for people of color, trans individuals, and people with disabilities. They don’t know that verification systems have error rates. They don’t even seem to understand that the terms they’re using mean different things. The fact that their terminology is all over the place—using “age assurance,” “age verification,” and “age estimation” interchangeably—makes this ignorance painfully clear, and leaves the onus on platforms to choose whichever option best insulates them from liability.

Language matters because it shapes how we think about these systems. “Assurance” sounds gentle. “Verification” sounds official. “Estimation” sounds technical and impersonal, and also admits its inherent imprecision. But they all involve collecting your data and create a metaphorical age gate to the internet. The terminology is deliberately confusing, but the stakes are clear: it’s your privacy, your data, and your ability to access the internet without constant identity checks. Don’t let fuzzy language disguise what these systems really do.

Republished from EFF’s Deeplinks blog.

Source: Age Verification, Estimation, Assurance, Oh My! A Guide To The Terminology | Techdirt

The Danish manage to bypass democracy to implement mass EU surveillance, say it is “voluntary”

The EU states agree on a common position on chat control. Internet services are to be allowed to scan communications voluntarily, but will not be obliged [*cough – see bold and the end of the document: Ed*] to do so. We publish the classified negotiating minutes and the bill. After the formal decision, the trilogue negotiations begin.

18.11.2025 at 14:03 – Andre Meister – in Surveillance

[Image – Presidency of the Council: Danish Minister of Justice Hummelgaard at a lectern, flags behind him. CC-BY-NC-ND 4.0 Danish Presidency]

The EU states have agreed on a common position on chat control. We publish the bill.

Last week, the Council working group discussed the law. We are once again publishing the classified minutes of the meeting.

Tomorrow, the Permanent Representatives want to officially decide on the position.

Update 19.11.: A Council spokesperson tells us, “The agenda item has been postponed until next week.”

Three years of dispute

For three and a half years, the EU institutions have been arguing over chat control. The Commission wants to oblige Internet services to search their users’ content, without cause, for indications of criminal offences and to forward suspected material to the authorities.

Parliament calls this mass surveillance and demands that only unencrypted content from suspects be scanned.

A majority of EU countries want mandatory chat control. However, a blocking minority rejects this. Now the Council has agreed on a compromise: Internet services are not required to carry out chat control, but may do so voluntarily.

Absolute red lines

The Danish Presidency wants to bring the draft law through the Council “as soon as possible” so that the trilogue negotiations can be started in a timely manner. The feedback from the states should be limited to “absolute red lines”.

The majority of states “supported the compromise proposal.” At least 15 spoke out in favour, including Germany and France.

Germany “welcomed both the deletion of the mandatory measures and the permanent anchoring of voluntary measures.”

Italy is also skeptical of voluntary chat control. “We fear that the instrument could also be extended to other crimes, so we have difficulty supporting the proposal.” Politicians have already called for chat control to be extended to other content.

Absolute minimum consensus

Other states called the compromise “an absolute minimum consensus.” They “actually wanted more – especially in the sense of obligations.” Some states “showed themselves clearly disappointed by the deletions made.”

Spain, in particular, “still considered mandatory measures to be necessary; unfortunately, a comprehensive agreement on this was not possible.” Hungary, too, “saw voluntariness alone as too little.”

Spain, Hungary and Bulgaria proposed “an obligation for providers to carry out detection at least in open areas” of their services. The Danish Presidency “described the proposal as ambitious, but did not take it up to avoid further discussion.”

Denmark explicitly pointed to the review clause. Thus, “the possibility of detection orders is kept open at a later date.” Hungary stressed that “this possibility must also be used.”

No obligation

The Danish Presidency had publicly announced that the chat control should not be mandatory, but voluntary.

However, the formulated compromise proposal was contradictory. It had deleted the article on mandatory chat control, yet another article said services should also carry out voluntary measures.

Several states asked whether these formulations “could lead to a de facto obligation.” The Legal Service agreed: “The wording can be interpreted in both directions.” The Council Presidency “clarified that the text contained only a risk mitigation obligation, but no detection obligation.”

The day after the meeting, the Council Presidency sent out the likely final draft law of the Council. It states explicitly: “No provision of this Regulation shall be interpreted as imposing detection obligations on providers.”

Damage and abuse

Mandatory chat control is not the only contentious issue in the planned law. Voluntary chat control is also legally contested: the European Commission cannot prove its proportionality. Many oppose voluntary chat control, including the EU Commission, the European Data Protection Supervisor and the German Federal Data Protection Commissioner.

A number of scientists are critical of the compromise proposal. They do not consider voluntary chat control measures appropriate: “Their benefit is not proven, while the potential for harm and abuse is enormous.”

The law also calls for mandatory age checks. The scientists criticize that age checks “bring with it an inherent and disproportionate risk of serious data breaches and discrimination without guaranteeing their effectiveness.” The Federal Data Protection Officer also fears a “large-scale abolition of anonymity on the Internet.”

Now comes the trilogue

The EU countries will not discuss these points further. The Danish Presidency “reaffirmed its commitment to the compromise proposal without the Spanish proposals.”

The Permanent Representatives of the EU States will meet next week. In December, the justice and interior ministers meet. These two bodies are to adopt the bill as the official position of the Council.

This is followed by the trilogue, in which the Commission, Parliament and the Council negotiate a compromise between their three separate drafts.

[…]

A “risk mitigation obligation” can be stretched to justify almost anything, and to oblige spying in whatever services the EU says pose a “risk”.

Source: Translated from EU states agree on voluntary chat control

Considering the whole proposal was shot down several times in past years and even last month, using a back-door rush to push this through is not how a democracy is supposed to function at all. And this is how fascism grips its iron claws. What is going on in Denmark?

For more information on the history of Chat Control click here

EU proposes doing away with constant cookies requests by setting the “No” in your browser settings

People will no longer be bombarded by constant requests to accept or reject “cookies” when browsing the internet, under proposed changes to the European Union’s strict data privacy laws.

The pop-up prompts asking internet users to consent to cookies when they visit a website are widely seen as a nuisance, undermining the original privacy intentions of the digital rules.

[I don’t think this undermines anything – cookie consent got rid of a LOT of spying, and everyone now just automatically clicks NO or uses add-ons to do it (well, if you are using Firefox as a browser). The original purpose – stopping companies from spying – has been achieved.]

Brussels officials have now tabled changes that would allow people to accept or reject cookies for a six-month period, and potentially set their internet browser to automatically opt-in or out, to avoid being repeatedly asked whether they consent to websites remembering information about their past visits.

Cookies allow websites to keep track of a user’s previous activity, allowing sites to pull up items added to an online shopping cart that were not purchased, or remember whether someone had logged in to an account on the site before, as well as target advertisements.
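The Commission has not said how a browser would transmit that standing choice. An existing analogue is the Global Privacy Control signal, which participating browsers send as a `Sec-GPC: 1` request header; the sketch below shows a server honouring such a signal instead of asking via a banner. The header is real, but using it as a stand-in for the proposed EU mechanism is this sketch’s assumption.

```python
# Sketch: honouring a browser-level "no" signal server-side. The EU proposal
# does not specify a wire format; the Sec-GPC header (Global Privacy Control)
# is used here as a stand-in for whatever the final mechanism turns out to be.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        opted_out = self.headers.get("Sec-GPC") == "1"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        if not opted_out:
            # Only set a tracking cookie when the browser has not said "no".
            self.send_header("Set-Cookie", "analytics_id=abc123; Max-Age=15552000")
        self.end_headers()
        self.wfile.write(b"tracking off\n" if opted_out else b"tracking on\n")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```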

[…]

Source: EU proposes doing away with constant internet ‘cookies’ requests – The Irish Times

Switzerland plans surveillance worse than US

In Switzerland, a country known for its love for secrecy, particularly when it comes to banking, the tides have turned: An update to the VÜPF surveillance law directly targets privacy and anonymity services such as VPNs as well as encrypted chat apps and email providers. Right now the law is still under discussion in the Swiss Bundesrat.

[…]

While Swiss privacy has been overhyped, legislative rules in Switzerland are currently decent and comparable to German data protection laws. This update to the VÜPF, which could come into force by 2026, would change data protection legislation in Switzerland dramatically.

Why the update is dangerous

If the law passes in its current form,

  • Swiss email and VPN providers with as few as 5,000 users would be forced to log IP addresses and retain the data for six months – while data retention by email providers is illegal in Germany.
  • An ID or driver’s license, and possibly a phone number, would be required for registration with various services – rendering anonymous usage impossible.
  • Data would have to be delivered upon request in plain text, meaning providers must be able to decrypt user data on their end (except for end-to-end encrypted messages exchanged between users).

What is more, the law is not being introduced by or via the Parliament: instead the Swiss government – the Federal Council and the Federal Department of Justice and Police (FDJP) – wants to massively expand internet surveillance by updating the VÜPF, without Parliament having a say. This comes as a shock in a country proud of its direct democracy, with regular popular votes on all kinds of laws. However, in 2016 the Swiss actually voted for more surveillance, so direct democracy might not help here.

History of surveillance in Switzerland

In 2016, the Swiss Parliament updated its data retention law BÜPF to enforce data retention for all communication data (post, email, phone, text messages, IP addresses). In 2018, the revision of the VÜPF translated this into administrative obligations for ISPs, email providers, and others, with exceptions depending on the size of the provider and whether it was classified as a telecommunications service provider or a communications service.

This led to the fact that services such as Threema and ProtonMail were exempt from some of the obligations that providers such as Swisscom, Salt, and Sunrise had to comply with – even though the Swiss government would have liked to classify them as quasi network operators and telecommunications providers as well. The currently discussed update of the VÜPF seems to directly target smaller providers as well as providers of anonymous services and VPNs.

The Swiss surveillance state has always sought a lot of power, and had to be called back by the Federal Supreme Court in the past to put surveillance on a sound legal basis.

But now, article 50a of the VÜPF reform mandates that providers must be able to remove “the encryption provided by them or on their behalf”, basically asking for backdoor access to encryption. However, end-to-end encrypted messages exchanged between users do not fall under this decryption obligation. Yet, even Swiss email provider Proton Mail says to Der Bund that “Swiss surveillance would be much stricter than in the USA and the EU, and Switzerland would lose its competitiveness as a business location.”
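The distinction the draft draws comes down to who holds the keys. A conceptual sketch of the two cases article 50a separates – this illustrates the principle only, not any provider’s actual architecture (requires the `cryptography` package):

```python
from cryptography.fernet import Fernet

# Case 1: encryption "provided by the provider". The key lives on the
# provider's servers, so the provider CAN hand over plaintext on demand.
provider_key = Fernet.generate_key()              # stored server-side
blob = Fernet(provider_key).encrypt(b"meet at 9")
print(Fernet(provider_key).decrypt(blob))         # provider complies: b'meet at 9'

# Case 2: end-to-end encryption. The key is generated on the user's device
# and never leaves it; the provider stores only ciphertext and cannot
# decrypt, which is why E2EE messages between users are exempt.
user_key = Fernet.generate_key()                  # exists only on the device
ciphertext_is_all_the_provider_sees = Fernet(user_key).encrypt(b"meet at 9")
# Without user_key there is no feasible way to recover the plaintext.
```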

Because of this upcoming legal change in Switzerland, Proton has started to move its servers from Switzerland to the EU.

Source: Switzerland plans surveillance worse than US | Tuta

Roblox begins asking tens of millions of children to send it a selfie, for “age verification”.

Roblox is starting to roll out the mandatory age checks that will require all of its users to submit an ID or scan their face in order to access the platform’s chat features. The updated policy, which the company announced earlier this year, will be enforced first in Australia, New Zealand and the Netherlands and will expand to all other markets by early next year.

The company also detailed a new “age-based chat” system, which will limit users’ ability to interact with people outside of their age group. After verifying or estimating a user’s age, Roblox will assign them to an age group ranging from 9 years and younger to 21 years and older (there are six total age groups). Teens and children will then be prevented from connecting, in in-game chats, with people who aren’t in or close to their estimated age group.
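Roblox has not published its exact pairing rules, so the following is a hypothetical sketch of an age-banded chat policy matching the article’s description: six bands, with “close” assumed here to mean an adjacent band.

```python
import bisect

# Hypothetical bands spanning "9 and younger" to "21 and older" (six total).
BAND_UPPER = [9, 12, 15, 17, 20]   # upper bounds of bands 0..4; band 5 = 21+

def band(age: int) -> int:
    """Map a verified or estimated age to a band index 0..5."""
    return bisect.bisect_left(BAND_UPPER, age)

def may_chat(age_a: int, age_b: int) -> bool:
    """Assumption: chat is allowed within the same or an adjacent band."""
    return abs(band(age_a) - band(age_b)) <= 1

print(may_chat(9, 11))   # True: bands 0 and 1 are adjacent
print(may_chat(9, 16))   # False: bands 0 and 3 are too far apart
```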

Unlike most social media apps which have a minimum age of 13, Roblox permits much younger children to use its platform. Since most children and many teens don’t have IDs, the company uses “age estimation” tech provided by identity company Persona. The checks, which use video selfies, are conducted within Roblox’s app and the company says that images of users’ faces are immediately deleted after completing the process.

[…]

Source: Roblox begins asking tens of millions of children to verify their age with a selfie

Deleted by Roblox itself, but also by Persona? Pretty scary: 1. having a database of all these kiddies’ faces and their online personas, ways of talking and typing, and 2. even if the data is deleted, it could be intercepted as it is sent to Roblox and on to the verifier.

Google is collecting troves of data from downgraded Nest thermostats

Google officially turned off remote control functionality for early Nest Learning Thermostats last month, but it hasn’t stopped collecting a stream of data from these downgraded devices. After digging into the backend, security researcher Cody Kociemba found that the first- and second-generation Nest Learning Thermostats are still sending Google information about manual temperature changes, whether a person is present in the room, if sunlight is hitting the device, and more.

[…]

After cloning Google’s API to create this custom software, he started receiving a trove of logs from customer devices – a stream he subsequently turned off. “On these devices, while they [Google] turned off access to remotely control them, they did leave in the ability for the devices to upload logs. And the logs are pretty extensive,” Kociemba tells The Verge.

[…]

Google is still getting all the information collected by Nest Learning Thermostats, including data measured by their sensors, such as temperature, humidity, ambient light, and motion. “I was under the impression that the Google connection would be severed along with the remote functionality, however that connection is not severed, and instead is a one-way street,” Kociemba says.
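Kociemba’s actual endpoints aren’t public, but the general technique described – standing in for the vendor’s API and recording whatever the devices upload – can be sketched generically. Everything below (port, paths, payload handling) is hypothetical, and real devices typically also require DNS redirection and acceptable TLS certificates before they will talk to you.

```python
# Generic catch-all telemetry sink: point a device's traffic here and log
# whatever it POSTs. Hypothetical stand-in, not Google's real log API.
import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer

class CatchAll(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        stamp = datetime.datetime.now().isoformat()
        with open("device_logs.txt", "ab") as f:
            f.write(f"{stamp} {self.path}\n".encode() + body + b"\n\n")
        self.send_response(200)   # acknowledge so the device keeps uploading
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8443), CatchAll).serve_forever()
```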

[…]

Source: Google is collecting troves of data from downgraded Nest thermostats | The Verge

Unremovable Spyware on Samsung Devices Comes Pre-installed on Galaxy Series Devices

The software in question, AppCloud, developed by the mobile analytics firm ironSource, has been embedded in devices sold primarily in the Middle East and North Africa (MENA) region.

Security researchers and privacy advocates warn that it quietly collects sensitive user data, fueling fears of surveillance in politically volatile areas.

AppCloud tracks users’ locations, app usage patterns, and device information without seeking ongoing consent after initial setup. Even more concerning, attempts to uninstall it often fail due to its deep integration into Samsung’s One UI operating system.

Reports indicate the app reactivates automatically following software updates or factory resets, making it virtually unremovable for average users. This has sparked outrage among consumers in countries such as Egypt, Saudi Arabia, and the UAE, where affordable Galaxy models are popular entry points into Android.

The issue came to light through investigations by SMEX, a Lebanon-based digital rights group focused on MENA privacy. In a recent report, SMEX highlighted how AppCloud’s persistence could enable unauthorized third-party data harvesting, posing significant risks in regions with histories of government overreach.

“This isn’t just bloatware, it’s a surveillance enabler baked into the hardware,” said a SMEX spokesperson. The group called on Samsung to issue a global patch and disclose the full scope of data shared with ironSource.

[…]

Source: Unremovable Spyware on Samsung Devices Comes Pre-installed on Galaxy Series Devices

Denmark rises again, finds another way to try to introduce 100% surveillance state in EU after public backlash stopped the last attempt at chat control. Send emails to your MEPs easily!

Thanks to public pressure, the Danish Presidency has been forced to revise its text, explicitly stating that any detection measures are voluntary. While much better, the text continues to both (a) effectively outlaw anonymous communication through mandatory age verification; and (b) include planned voluntary mass scanning. The Council is expected to formally adopt its position on Chat Control on the 18th or 19th of November. Trilogue with the European Parliament will commence soon after.

The EU (still) wants to scan your private messages and photos

The “Chat Control” proposal would mandate scanning of all private digital communications, including encrypted messages and photos. This threatens fundamental privacy rights and digital security for all EU citizens.

You Will Be Impacted

Every photo, every message, every file you send will be automatically scanned—without your consent or suspicion. This is not about catching criminals. It is mass surveillance imposed on all 450 million citizens of the European Union.

Source: Fight Chat Control – Protect Digital Privacy in the EU

The site linked will allow you to very easily send an email to your representatives by clicking a few times. Take the time to ensure they understand that people have a voice!

“This is a political deception” − Denmark gives New Chat Control another shot. Mass surveillance for all from behind closed doors.

It’s official, a revised version of the CSAM scanning proposal is back on the EU lawmakers’ table − and is keeping privacy experts worried.

The Law Enforcement Working Party met again this morning (November 12) in the EU Council to discuss what’s been deemed by critics the Chat Control bill.

This follows a meeting the group held on November 5, and comes as the Danish Presidency put forward a new compromise after withdrawing mandatory chat scanning.

As reported by Netzpolitik, the latest Child Sexual Abuse Regulation (CSAR) proposal was received with broad support during the November 5 meeting, “without any dissenting votes” nor further changes needed.

The new text, which removes all provisions on detection obligations included in the bill and makes CSAM scanning voluntary, seems to be the winning path to finally find an agreement after over three years of trying.

Privacy experts and technologists aren’t quite on board, though, with long-standing Chat Control critic and digital rights jurist, Patrick Breyer, deeming the proposal “a political deception of the highest order.”

Chat Control − what’s changing and what are the risks

As per the latest version of the text, messaging service providers won’t be forced to scan all URLs, pictures, and videos shared by users, but may instead choose to perform voluntary CSAM scanning.

There’s a catch, though. Article 4 will include a possible “mitigation measure” that could be applied to high-risk services to require them to take “all appropriate risk mitigation measures.”

According to Breyer, such a loophole could make the removal of detection obligations “worthless” by negating their voluntary nature. He said: “Even client-side scanning (CSS) on our smartphones could soon become mandatory – the end of secure encryption.”

Breaking encryption, the tech that security software like the best VPNs, Signal, and WhatsApp use to secure our private communications, has been the strongest argument against the proposal so far.

Breyer also warns that the new compromise goes further than the discarded proposal, passing from AI-powered monitoring targeting shared multimedia to the scanning of private chat texts and metadata, too.

“The public is being played for fools,” warns Breyer. “Following loud public protests, several member states, including Germany, the Netherlands, Poland, and Austria, said ‘No’ to indiscriminate Chat Control. Now it’s coming back through the back door.”

Breyer is far from being the only one expressing concerns. German-based encrypted email provider, Tuta, is also raising the alarm.

“Hummelgaard doesn’t understand that no means no,” the provider writes on X.

To understand the next steps, we now need to wait and see what the outcomes from today’s meeting look like.

Source: “This is a political deception” − New Chat Control convinces lawmakers, but not privacy experts yet | TechRadar

Mozilla fellow Esra’a Al Shafei watches the spies through SurveillanceWatch

Digital rights activist Esra’a Al Shafei found FinFisher spyware on her device more than a decade ago. Now she’s made it her mission to surveil the companies providing surveillanceware, their customers, and their funders.

“You cannot resist what you do not know, and the more you know, the better you can protect yourself and resist against the normalization of mass surveillance today,” she told The Register.

To this end, the Mozilla fellow founded Surveillance Watch last year. It’s an interactive map that documents the growing number of surveillance software providers, which regions use the various products, and the investors funding them. Since its launch, the project has grown from mapping connections between 220 spyware and surveillance entities to 695 today.

These include very well-known spyware like NSO Group’s Pegasus and Cytrox’s Predator, both famously used to monitor politicians, journalists and activists in the US, UK, and around the world.

They also include companies with US and UK government contracts, like Palantir, which recently inked a $10 billion deal with the US Army and pledged a £1.5 billion ($2 billion) investment in the UK after winning a new Ministry of Defence contract. Then there’s Paragon, an Israeli company with a $2 million Immigration and Customs Enforcement (ICE) contract for its Graphite spyware, which lets law enforcement hack smartphones to access content from encrypted messaging apps once the device is compromised.

Even LexisNexis made the list. “People think of LexisNexis and academia,” Al Shafei said. “They don’t immediately draw the connection to their product called Accurint, which collects data from both public and non-public sources and offers them for sale, primarily to government agencies and law enforcement.”

Accurint compiles information from government databases, utility bills, phone records, license plate tracking, and other sources, and it also integrates analytics tools to create detailed location mapping and pattern recognition.

“And they’re also an ICE contractor, so that’s another company that you wouldn’t typically associate with surveillance, but they are one of the biggest surveillance agencies out there,” Al Shafei said.

It also tracks funders. Paragon’s spyware is backed by AE Industrial Partners, a Florida-based investment group specializing in “national security” portfolios. Other major backers of surveillance technologies include CIA-affiliated VC firm In-Q-Tel, Andreessen Horowitz (also known as a16z), and mega investment firm BlackRock.

This illustrates another trend: It’s not just authoritarian countries using and investing in these snooping tools. In fact, America now leads the world in surveillance investment, with the Atlantic Council think tank identifying 20 new US investors in the past year.

[…]

‘They know who you are’

The Surveillance Watch homepage announces: “They know who you are. It’s time to uncover who they are.”

It’s creepy and accurate, and portrays all of the feelings that Al Shafei has around her spyware encounters. Her Majal team has “faced persistent targeting by sophisticated spyware technologies, firsthand, for a very long time, and this direct exposure to surveillance threats really led us to launch Surveillance Watch,” she said. “We think it’s very important for people to understand exactly how they’re being surveilled, regardless of the why.”

The reality is, everybody – not just activists and politicians – is subject to surveillance, whether it’s from smart-city technologies, Ring doorbell cameras, or connected cars. Users will always choose simplicity over security, and the same can be said for data privacy.

“We want to show that when surveillance goes not just unnoticed, but when we start normalizing it in our everyday habits, we look at a new, shiny AI tool, and we say, ‘Yes, of course, take access to all my data,'” Al Shafei said. “There’s a convenience that comes with using all of these apps, tracking all these transactions, and people don’t realize that this data can and does get weaponized against you, and not just against you, but also your loved ones.”

Source: Mozilla fellow Esra’a Al Shafei watches the watchers • The Register

Critics call proposed changes to landmark EU privacy law ‘death by a thousand cuts’ – “legitimate interest” would allow personal data exfiltration

Privacy activists say proposed changes to Europe’s landmark privacy law, including making it easier for Big Tech to harvest Europeans’ personal data for AI training, would flout EU case law and gut the legislation.

The changes proposed by the European Commission are part of a drive to simplify a slew of laws adopted in recent years on technology, environmental and financial issues, which have in turn faced pushback from companies and the U.S. government.

EU tech chief Henna Virkkunen will present the Digital Omnibus – in effect, proposals to cut red tape and overlapping legislation such as the General Data Protection Regulation, the Artificial Intelligence Act, the e-Privacy Directive and the Data Act – on November 19.

According to the plans, Google (GOOGL.O), Meta Platforms (META.O), OpenAI and other tech companies may be allowed to use Europeans’ personal data to train their AI models based on legitimate interest.

In addition, companies may be exempted from the ban on processing special categories of personal data “in order not to disproportionately hinder the development and operation of AI and taking into account the capabilities of the controller to identify and remove special categories of personal data”.

“The draft Digital Omnibus proposes countless changes to many different articles of the GDPR. In combination this amounts to a death by a thousand cuts,” Austrian privacy group noyb said in a statement.

Noyb is known for filing complaints against American companies such as Apple (AAPL.O), Alphabet and Meta that have triggered several investigations and resulted in billions of dollars in fines.

“This would be a massive downgrading of Europeans’ privacy 10 years after the GDPR was adopted,” noyb’s Max Schrems said.

European Digital Rights, an association of civil and human rights organisations across Europe, slammed a proposal to merge the ePrivacy Directive – known as the cookie law that resulted in the proliferation of cookie consent pop-ups – into the GDPR.

“These proposals would change how the EU protects what happens inside your phone, computer and connected devices,” EDRi policy advisor Itxaso Dominguez de Olazabal wrote in a LinkedIn post.

“That means access to your device could rely on legitimate interest or broad exemptions like security, fraud detection or audience measurement,” she said.

The proposals would need to be thrashed out with EU countries and the European Parliament in the coming months before they can be implemented.

Source: Critics call proposed changes to landmark EU privacy law ‘death by a thousand cuts’ | Reuters

Anyone can claim anything as a “legitimate interest”. It is what terms and conditions have been using for decades to pass any and all data on to third parties. At least the GDPR kind of stood in the way of it going to countries like the USA and China.

DHS wants more biometric data from more people – even from citizens

If you’re filing an immigration form – or helping someone who is – the Feds may soon want to look in your eyes, swab your cheek, and scan your face. The US Department of Homeland Security wants to greatly expand biometric data collection for immigration applications, covering immigrants and even some US citizens tied to those cases.

DHS, through its component agency US Citizenship and Immigration Services, on Monday proposed a sweeping expansion of the agency’s collection of biometric data. While ostensibly about verifying identities and preventing fraud in immigration benefit applications, the proposed rule goes much further than simply ensuring applicants are who they claim to be.

First off, the rule proposes expanding when DHS can collect biometric data from immigration benefit applicants, as “submission of biometrics is currently only mandatory for certain benefit requests and enforcement actions.” DHS wants to change that, including by requiring practically everyone an immigrant is associated with to submit their biometric data.

“DHS proposes in this rule that any applicant, petitioner, sponsor, supporter, derivative, dependent, beneficiary, or individual filing or associated with a benefit request or other request or collection of information, including U.S. citizens, U.S. nationals and lawful permanent residents, and without regard to age, must submit biometrics unless DHS otherwise exempts the requirement,” the rule proposal said.

DHS also wants to require the collection of biometric data from “any alien apprehended, arrested or encountered by DHS.”

It’s not explicitly stated in the rule proposal why US citizens associated with immigrants who are applying for benefits would have to have their biometric data collected. DHS didn’t answer questions to that end, though the rule stated that US citizens would also be required to submit biometric data “when they submit a family-based visa petition.”

Give me your voice, your eye print, your DNA samples

In addition to expanded collection, the proposed rule also changes the definition of what DHS considers to be valid biometric data.

“Government agencies have grouped together identifying features and actions, such as fingerprints, photographs, and signatures under the broad term, biometrics,” the proposal states. “DHS proposes to define the term ‘biometrics’ to mean ‘measurable biological (anatomical, physiological or molecular structure) or behavioral characteristics of an individual,'” thus giving DHS broad leeway to begin collecting new types of biometric data as new technologies are developed.

The proposal mentions several new biometric technologies DHS wants the option to use, including ocular imagery, voice prints and DNA, all on the table per the new rule.

[…]

Source: DHS wants more biometric data – even from citizens • The Register

Music festivals to collect data with RFID wristbands. Also, randomly, fascinating information about data Flitsmeister collects.

This summer, Dutch music festivals will use RFID wristbands to collect visitor data. The technology has been around for a while, but the innovation lies in its application. The wristbands are anonymous by default, but users can activate them to participate in loyalty programs or unlock on-site experiences. Visitor privacy is paramount; overly invasive tracking is avoided.
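One plausible way to build “anonymous by default” – an assumption about the design, not Superstruct’s documented implementation – is to store only a salted pseudonym of the wristband’s chip UID and attach identity solely when the visitor opts in:

```python
import hashlib, os

EVENT_SALT = os.urandom(16)   # per-event salt: UIDs can't be linked across events

def pseudonym(chip_uid: str) -> str:
    """Derive a per-event pseudonym; the raw chip UID is never stored."""
    return hashlib.sha256(EVENT_SALT + chip_uid.encode()).hexdigest()[:16]

taps: dict[str, list[str]] = {}    # pseudonym -> scan locations
identities: dict[str, str] = {}    # filled only on explicit opt-in

def record_tap(chip_uid: str, location: str) -> None:
    taps.setdefault(pseudonym(chip_uid), []).append(location)

def activate(chip_uid: str, email: str) -> None:
    """Visitor joins the loyalty programme: only now is identity linked."""
    identities[pseudonym(chip_uid)] = email

record_tap("04A224B1C35E80", "main-stage-bar")   # anonymous by default
activate("04A224B1C35E80", "fan@example.com")    # taps become attributable
```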

This is according to Michael Guntenaar, Managing Director at Superstruct Digital Services, in the Emerce TV video ‘Data is the new headliner at dance festivals’. Superstruct is a network of approximately 80 large festivals (focused on experience and brand identity) spread across Europe and Australia. ID&T, known for events such as Sensation, Mysteryland, and Defqon.1, joined Superstruct in September 2021. Tula Daans, Data Analyst Brand Partnerships at ID&T, also took part in the conversation on behalf of ID&T.

Festivals use various data sources, primarily ticket data (age, location, gender/gender identity), but also marketing data (social media), consumption data (food and drinks), and post-event surveys.

For brand partnerships, surveys are sent to visitors after the event to gauge whether they saw brands, what they thought of them, and thus gain insight into brand perception. Deliberately, no detailed feedback is requested during the festival to avoid disturbing the visitor experience, says Guntenaar.

The Netherlands is a global leader in data collection. Defqon.1 is mentioned as a breeding ground for experiments with data and technology, due to its technically advanced team and highly engaged target group.

[…]

In a second video, ‘Real-time mobility info in a complex data landscape’, Jorn de Vries, managing director at Flitsmeister, talks about mobility data and the challenges and opportunities within this market. The market for mobility data, which ranges from traffic flows to speed camera notifications, is busy with players like Garmin, Google, Waze, and TomTom.

Nevertheless, Flitsmeister still sees room for growth, because mobility is timeless and brings challenges, such as the desire to get from A to B quickly, efficiently, green, and cheaply. Innovation is essential to maintain a place in this market, says De Vries.

Flitsmeister has a large online community of almost 3 million monthly active users. This community has grown significantly over the years, even after introducing paid propositions. What distinguishes Flitsmeister from global players such as Google and Waze, according to De Vries, is their local embeddedness, with marketing and content that aligns with the language and use cases of users in the Benelux. They also collaborate with governments through partnerships, allowing them to offer specific local services, such as warnings for emergency services. Technically, competitors might be able to do this, says De Vries, but it probably isn’t a high priority because it’s local; Flitsmeister, however, believes that you have to dare to go all the way to properly serve a market, even if this requires investments that are only relevant for the Netherlands. Another example of local embeddedness is their presence on almost every radio station.

The Flitsmeister app now has eight main functions. In addition to the well-known speed camera and average-speed check warnings, it warns about emergency services (ambulance, fire brigade, Rijkswaterstaat vehicles), informing drivers early when such a vehicle approaches with its blue lights on. The app also provides traffic jam information and warnings for incidents, stationary vehicles, and roadworks. Flitsmeister tries to warn of the start of traffic jams earlier than the flashing signs above the road, because it is not bound to the gantries where those signs are located.

Navigation is an added feature. In addition, there is paid parking at the end of the journey. Flitsmeister also has links with so-called smart traffic lights, where they receive data about the status of the light and share data with the intersection to optimize it. This can, for example, lead to a green light if you approach an intersection at night and there is no other traffic. More than 1500 smart intersections in the Netherlands are already equipped. Flitsmeister also receives data from matrix signs, including red crosses, arrows, and adjusted maximum speeds.

Privacy is a crucial topic when bringing consumers and data together. Flitsmeister has seen privacy from the start as a Unique Selling Point (USP) if handled correctly. Especially in countries like Germany, this is more active than in the Benelux, and privacy-friendly companies have a plus in the eyes of the consumer. Large players such as Google and Waze have the same legal playing field as Flitsmeister, but differ in what they want, can, and do.

Flitsmeister does collect live GPS data that provides a lot of insight into traffic movements. They are working with Rijkswaterstaat and their parent company Be-Mobile on pilots, including on the A9, where they combine loop data in the asphalt with their real-time data. This provides a more accurate and cost-efficient picture than road loops alone, which are expensive to maintain and offer only limited measurements. The combination allows them to provide relevant information even between the road loops.

Flitsmeister also works with data that detects real-time situations and provides early advice. They are doing pilots with ‘trigger based rerouting’, where users are proactively rerouted if a reported incident on their route is likely to affect their travel time, even if the travel time has not yet changed at that moment. The challenge here is that people must be receptive to this and understand the rationale behind the rerouting.
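As a rough sketch of that trigger logic – the segment identifiers, delay estimate, and threshold below are invented for illustration, not Flitsmeister’s actual model:

```python
def should_reroute(route_segments: set[str], incidents: list[dict],
                   delay_threshold_s: int = 120) -> bool:
    """Proactively reroute when reported incidents on the route are predicted
    to cost more than the threshold - before measured travel times change."""
    predicted = sum(inc["predicted_delay_s"] for inc in incidents
                    if inc["segment"] in route_segments)
    return predicted >= delay_threshold_s

incidents = [{"segment": "A9-exit-34", "predicted_delay_s": 300}]
print(should_reroute({"A2-km-17", "A9-exit-34"}, incidents))  # True: reroute now
```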

Although there is a lot of talk about connected vehicle data, Flitsmeister’s focus is more on strengthening the relationship with the driver than with the vehicle itself. Jorn de Vries believes that the driver will ultimately lead, as the need for mobility comes from the individual and the vehicle facilitates this.

The video Data is the new headliner at dance festivals can be watched for free. The collection Customer data: trends, innovation and future will be supplemented in the coming months and can be viewed for free after registration.

Source: Kagi Translate | (Emerce TV): music festivals want to collect data with RFID wristbands

Clearview AI faces criminal heat for ignoring EU data fines – wait: these creeps still exist?

Privacy advocates at Noyb filed a criminal complaint against Clearview AI for scraping social media users’ faces without consent to train its AI algorithms.

Austria-based Noyb (None of Your Business) is targeting the US company and its executives, arguing that if successful, individuals who authorized the data collection could face criminal penalties, including imprisonment.

The complaint focuses largely on Clearview’s apparent disregard for fines from France, Greece, Italy, the Netherlands, and the UK. Aside from the UK — where Clearview recently lost its appeal of a $10 million fine from the Information Commissioner’s Office — the company has yet to pay other fines totaling more than $100 million, Noyb claims.

“EU data protection authorities did not come up with a way to enforce its fines and bans against the US company, allowing Clearview AI to effectively dodge the law,” said Noyb in its announcement today.

Max Schrems, privacy lawyer and founder of Noyb, said: “Clearview AI seems to simply ignore EU fundamental rights and just spits in the face of EU authorities.”

The criminal complaint, filed with Austrian public prosecutors, hinges on Article 84 of the GDPR, which allows EU member states to seek proportionate punishments for data protection violations, including through criminal proceedings.

Clearview AI claims it has collected more than 60 billion images to help law enforcement agencies improve facial recognition tech.

Scraping data is not inherently illegal; however, Clearview’s sweeping collection of social media photos for commercial gain has repeatedly violated GDPR regulations across Europe.

Austria ruled the company’s practices illegal in 2023, though it imposed no fine.

Noyb is using a provision in Austria’s own implementation of the GDPR that allows criminal proceedings to be brought against managers of organizations that flout data protection laws.

“We even run cross-border criminal procedures for stolen bikes, so we hope that the public prosecutor also takes action when the personal data of billions of people was stolen – as has been confirmed by multiple authorities,” said Schrems.

Source: Clearview AI faces criminal heat for ignoring EU data fines • The Register

CBP will photograph non-citizens entering and exiting the US for its facial recognition database

The US Customs and Border Protection (CBP) submitted a new measure that allows it to photograph any non-US citizen who enters or exits the country for facial recognition purposes. According to a filing with the government’s Federal Register, CBP and the Department of Homeland Security are looking to crack down on threats of terrorism, fraudulent use of travel documents and anyone who overstays their authorized stay.

The filing detailed that CBP will “implement an integrated, automated entry and exit data system to match records, including biographic data and biometrics, of aliens entering and departing the United States.” The government agency already has the ability to request photos and fingerprints from anyone entering the country, but this new rule change would allow for requiring photos of anyone exiting as well. These photos would “create galleries of images associated with individuals, including photos taken by border agents, and from passports or other travel documents,” according to the filing, adding that these galleries would be compared to live photos at entry and exit points.
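The filing doesn’t describe the matcher itself, but face recognition systems of this kind typically reduce each photo to a fixed-length embedding and compare a live capture against the gallery by cosine similarity against a threshold. A generic sketch, with random vectors standing in for a real embedding model’s output:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

# "Gallery": one embedding per enrolled document photo (stand-in vectors).
gallery = {doc_id: normalize(rng.standard_normal(512))
           for doc_id in ["passport-123", "visa-456", "photo-789"]}

def best_match(live: np.ndarray, threshold: float = 0.6):
    """Return the closest gallery entry if it clears the similarity threshold."""
    live = normalize(live)
    doc_id, score = max(((d, float(v @ live)) for d, v in gallery.items()),
                        key=lambda t: t[1])
    return (doc_id, score) if score >= threshold else (None, score)

# A live capture resembling an enrolled photo: enrolled vector plus small noise.
live_probe = gallery["visa-456"] + 0.02 * rng.standard_normal(512)
print(best_match(live_probe))   # expected: ('visa-456', score near 1.0)
```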

These new requirements are scheduled to go into effect on December 26, but CBP will need some time to implement a system to handle the extra demand. According to the filing, the agency said “a biometric entry-exit system can be fully implemented at all commercial airports and sea ports for both entry and exit within the next three to five years.”

Source: CBP will photograph non-citizens entering and exiting the US for its facial recognition database

Microsoft illegally tracked students via 365 Education, must now say what it did with the data

An Austrian digital privacy group has claimed victory over Microsoft after the country’s data protection regulator ruled the software giant “illegally” tracked students via its 365 Education platform and used their data.

noyb said the ruling [PDF] by the Austrian Data Protection Authority also confirmed that Microsoft had tried to shift responsibility for access requests to local schools, and that the software and cloud giant would have to explain how it used user data.

The ruling could have far-reaching effects for Microsoft and its obligations to inform Microsoft 365 users across Europe about what it is doing with their data, noyb argues.

The complaint dates back to the COVID-19 pandemic, when schools rapidly shifted to online learning, using the likes of 365 Education.

The privacy group said: “Microsoft shifted all responsibility to comply with privacy laws onto schools and national authorities – that have little to no actual control over the use of student data.”

When the complainant filed an access request to see what information was being processed, “this led to massive finger pointing: Microsoft simply referred the complainant to its local school.”

But the school and education authorities could only provide minimal information. The school, for example, could not access information that rested with Microsoft. “No one felt able to comply with GDPR rights.”

This prompted a complaint against the school, national and local education authorities, and Microsoft.

The ruling, machine translated, said: “It is determined that Microsoft, as a controller, violated the complainant’s right of access (Art. 15 GDPR) by failing to provide complete information about the data processed when using Microsoft Education 365.”

Microsoft was ordered to provide complete information about the data transmitted, and to provide clear explanations of terms such as “internal reporting,” “business modelling” and “improvement of core functionality.” It must also disclose if information was transferred to third parties.

[…]

Source: Microsoft ‘illegally’ tracked students via 365 Education • The Register

Germany against ChatControl: Denmark takes it off the table so the EU can’t vote against it NOW, but will retry later (third time lucky) when the people aren’t looking.

Germany does not support the Danish proposal on the so-called CSA regulation, which is called ‘chat control’ by critics.

The proposal was due for a vote in the EU Council of Ministers on Tuesday, but it has now been taken off the table.

The Danish government, which currently holds the EU Presidency, has chosen to withdraw the proposal from the vote, according to a press release from the German parliament.

[…]

Among other things, 500 researchers from 34 countries worldwide, including 25 from Danish universities, have signed a letter criticizing the CSA regulation, arguing that the method will be ineffective while carrying a high risk that the information will be misused.

And leading encryption experts have compared the proposal to placing a spy microphone in everyone’s pocket.

[…]

The Danish Minister of Justice, Peter Hummelgaard (S), confirms in a written reply to DR News that the proposal will not be discussed at the Council meeting next week.

“It’s no secret that this is a difficult case with many considerations that need to be balanced. The intense public debate of recent weeks shows this as well.

“Since the necessary support for the current compromise proposal has not been established ahead of next week’s Council meeting, the proposal will not be discussed by the ministers there,” he said.

Although the government has not succeeded in securing the necessary support, the Minister of Justice is not giving up.

“However, the Danish EU Presidency will continue to work with the Member States to find a solution, and negotiations on the technical details of the proposal will therefore continue.”

[…]

“Both ministries (the German Ministry of the Interior and the Ministry of Justice) stressed that, like many other EU countries, they do not support the Danish proposal in its current form,” it said.

Source: Tyskland fejer kontroversielt ‘chatkontrol’-forslag af bordet (“Germany sweeps controversial ‘chat control’ proposal off the table”) | Politik | DR

An absolute gutter move by Denmark, freeing them up to try again a 3rd time – and call it a second attempt. Maybe they will try over December, April or July, when the proletariat is on holiday and won’t raise such a stink about being spied on 24/7 by their own governments. There is nothing democratic about the way this is being handled.

Germany slams brakes on EU’s Chat Control snoopfest

Germany has committed to oppose the EU’s controversial “Chat Control” regulations following huge pressure from multiple activists and major organizations.

The draft regs would allow authorities to compel providers of communications services – such as WhatsApp, Signal, etc – to monitor user comms for potential child sexual abuse material. And they wouldn’t exempt encrypted services.

Jens Spahn, a member of the Bundestag for Germany’s Christian Democratic Union (CDU) – part of the ruling coalition in the country – confirmed in a statement on Tuesday that the German government would not allow the proposed regulations, which are commonly referred to as Chat Control, to become law.

“We, the CDU/CSU parliamentary group in the Bundestag, are opposed to the unwarranted monitoring of chats. That would be like opening all letters as a precautionary measure to see if there is anything illegal in them. That is not acceptable, and we will not allow it.”

As The Reg has mentioned previously, to pass the legislation, EU leaders need support from nations representing the majority of the bloc’s population – which is why Germany, the EU’s most populous member state, is a key player.

The news follows speculation last week that Germany would reverse its stance and oppose the Child Sexual Abuse (CSA) Regulation, which EU politicians have tried to pass since it was first tabled in 2022.

Essentially, it’s the EU’s version of the UK’s long-held ambition to force encrypted messaging platforms to break end-to-end encryption (E2EE), packaged under a similar guise.

If passed, the CSA Regulation would require communications platforms to deploy AI-powered content filters to ensure CSA material is blocked and that those possessing and sharing it are brought to justice.

And, of course, it would also undermine E2EE, theoretically allowing the EU to spy on any citizen’s private communications.

So far, Chat Control has naturally drawn opposition as heated as that faced by the UK’s equivalent plans, pursued first through the Investigatory Powers Act and later through the Online Safety Act.

[…]

Source: Germany slams brakes on EU’s Chat Control snoopfest • The Register

Another Day, Another Age Verification Data Breach: Discord’s Third-Party Partner Leaked Government IDs. That didn’t take long, did it?

Once again, we’re reminded why age verification systems are fundamentally broken when it comes to privacy and security. Discord has disclosed that one of its third-party customer service providers was breached, exposing user data, including government-issued photo IDs, from users who had appealed age determinations.

Data potentially accessed by the hack includes things like names, usernames, emails, and the last four digits of credit card numbers. The unauthorized party also accessed a “small number” of images of government IDs from “users who had appealed an age determination.” Full credit card numbers and passwords were not impacted by the breach, Discord says.

Seems pretty bad.

What makes this breach particularly instructive is that it highlights the perverse incentives created by age verification mandates. Discord wasn’t collecting government IDs because they wanted to—they were responding to age determination appeals, likely driven by legal and regulatory pressures to keep underage users away from certain content. The result? A treasure trove of sensitive identity documents sitting in the systems of a third-party customer service provider that had no business being in the identity verification game.

To “protect the children” we end up putting everyone at risk.

This is exactly the kind of incident that privacy advocates have been warning about for years as lawmakers push for increasingly stringent age verification requirements across the internet. Every time these systems are implemented, we’re told they’re secure, that the data will be protected, that sophisticated safeguards are in place. And every time, we eventually get stories like this one.

The pattern reveals a fundamental misunderstanding of how security works in practice versus theory. Age verification proponents consistently treat identity document collection as a simple technical problem with straightforward solutions, ignoring the complex ecosystem these requirements create. Companies like Discord find themselves forced to collect documents they don’t want, storing them with third-party processors they don’t fully control, creating attack surfaces that wouldn’t otherwise exist.

These third parties become attractive targets precisely because they aggregate identity documents from multiple platforms—a single breach can expose IDs collected on behalf of dozens of different services. When the inevitable breach occurs, it’s not just usernames and email addresses at risk—it’s the kind of documentation that can enable identity theft and fraud for years to come, affecting people who may have forgotten they ever uploaded an ID to appeal an automated age determination.

[…]

the fundamental problem remains: we’re creating systems that require the collection and storage of highly sensitive identity documents, often by companies that aren’t primarily in the business of securing such data. This isn’t Discord’s fault specifically—they were dealing with age verification appeals, likely driven by regulatory or legal pressures to prevent underage users from accessing certain content or features.

This breach should serve as yet another data point in the growing pile of evidence that age verification systems create more problems than they solve. The irony is that lawmakers pushing these requirements often claim to be protecting children’s privacy, while simultaneously mandating the creation of vast databases of identity documents that inevitably get breached. We’ve seen similar incidents affect everything from adult websites to social media platforms to online retailers, all because policymakers have decided that collecting copies of driver’s licenses and passports is somehow a reasonable solution to online age verification.

The real tragedy is that this won’t be the last such breach we see. As long as lawmakers continue pushing for more aggressive age verification requirements without considering the privacy and security implications, we’ll keep seeing stories like this one. The question isn’t whether these systems will be breached—it’s when, and how many people’s sensitive documents will be exposed in the process.

[…]

Source: Another Day, Another Age Verification Data Breach: Discord’s Third-Party Partner Leaked Government IDs | Techdirt

If you want to look at previous articles telling you what an insanely bad idea mandatory age verification systems are and how they are insecure, you can just search this blog.

Chat Control Is Back On The Menu In The EU. It Still Must Be Stopped

The European Union Council is once again debating its controversial message scanning proposal, aka “Chat Control,” that would lead to the scanning of private conversations of billions of people.

Chat Control, which EFF has strongly opposed since it was first introduced in 2022, keeps being mildly tweaked and pushed by one Council presidency after another.

Chat Control is a dangerous legislative proposal that would make it mandatory for service providers, including end-to-end encrypted communication and storage services, to scan all communications and files to detect “abusive material.” This would happen through a method called client-side scanning, which checks content on a device before it is sent. In practice, Chat Control is chat surveillance: it works by accessing everything on a device and monitoring it indiscriminately. In a memo, the Danish Presidency claimed this does not break end-to-end encryption.

This is absurd.

We have written extensively that client-side scanning fundamentally undermines end-to-end encryption, and obliterates our right to private spaces. If the government has access to one of the “ends” of an end-to-end encrypted communication, that communication is no longer safe and secure. Pursuing this approach is dangerous for everyone, but is especially perilous for journalists, whistleblowers, activists, lawyers, and human rights workers.
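To make the “access to one of the ends” problem concrete, here is a minimal sketch of where a client-side scanner would sit in a messaging app’s send path. The hash-based matching, function names, and reporting hook are illustrative assumptions (real proposals contemplate perceptual hashes and ML classifiers); the point is the placement of the check: the content is read in plaintext before any encryption happens, which is why the claim that encryption is untouched misses the point.

```python
import hashlib
from typing import Optional

# Hypothetical blocklist of content fingerprints pushed to every device.
# (This entry is the SHA-256 of an empty message, used as a stand-in.)
FORBIDDEN_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def report_to_authority(digest: str) -> None:
    """Hypothetical reporting hook a scanning mandate would require."""
    print(f"match reported: {digest}")

def encrypt_for_recipient(plaintext: bytes) -> bytes:
    """Stand-in for the app's real end-to-end encryption step."""
    return plaintext[::-1]  # placeholder transformation, not real crypto

def send_message(plaintext: bytes) -> Optional[bytes]:
    # The scan happens HERE, on the plaintext, before any encryption.
    # The encryption code below is untouched -- which is what lets
    # proponents say E2EE "isn't broken" -- but the content has already
    # been read on the user's own device.
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in FORBIDDEN_HASHES:
        report_to_authority(digest)
        return None  # message blocked before it is ever sent
    return encrypt_for_recipient(plaintext)
```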

If passed, Chat Control would undermine the privacy promises of end-to-end encrypted communication tools, like Signal and WhatsApp. The proposal is so dangerous that Signal has stated it would pull its app out of the EU if Chat Control is passed. Proponents even seem to realize how dangerous this is, because state communications are exempt from this scanning in the latest compromise proposal.

This doesn’t just affect people in the EU; it affects everyone around the world, including in the United States. If platforms decide to stay in the EU, they would be forced to scan the conversations of everyone in the EU. If you’re not in the EU but you chat with someone who is, then your privacy is compromised too. Passing this proposal would pave the way for authoritarian and tyrannical governments around the world to follow suit with their own demands for access to encrypted communication apps.

Even if you take it in good faith that the government would never do anything wrong with this power, events like Salt Typhoon show there’s no such thing as a system that’s only for the “good guys.”

Despite strong opposition, Denmark is pushing forward and taking its current proposal to the Justice and Home Affairs Council meeting on October 14th.

We urge the Danish Presidency to drop its push for scanning our private communication and consider fundamental rights concerns. Any draft that compromises end-to-end encryption and permits scanning of our private communication should be blocked or voted down.

Phones and laptops must work for the users who own them, not act as “bugs in our pockets” in the service of governments, foreign or domestic. The mass scanning of everything on our devices is invasive, untenable, and must be rejected.

Republished from the EFF’s Deeplinks blog.

Source: Chat Control Is Back On The Menu In The EU. It Still Must Be Stopped | Techdirt