That didn’t take long: a few days after Chat Control, the European Parliament calls for Age Verification on Social Media, 16+

On Wednesday, MEPs adopted a non-legislative report by 483 votes in favour, 92 against and with 86 abstentions, expressing deep concern over the physical and mental health risks minors face online and calling for stronger protection against the manipulative strategies that can increase addiction and that are detrimental to children’s ability to concentrate and engage healthily with online content.


Minimum age for social media platforms

To help parents manage their children’s digital presence and ensure age-appropriate online engagement, Parliament proposes a harmonised EU digital minimum age of 16 for access to social media, video-sharing platforms and AI companions, while allowing 13- to 16-year-olds access with parental consent.

Expressing support for the Commission’s work to develop an EU age verification app and the European digital identity (eID) wallet, MEPs insist that age assurance systems must be accurate and preserve minors’ privacy. Such systems do not relieve platforms of their responsibility to ensure their products are safe and age-appropriate by design, they add.

To incentivise better compliance with the EU’s Digital Services Act (DSA) and other relevant laws, MEPs suggest senior managers could be made personally liable in cases of serious and persistent non-compliance, with particular respect to protection of minors and age verification.

[…]

According to the 2025 Eurobarometer, over 90% of Europeans believe action to protect children online is a matter of urgency, not least in relation to social media’s negative impact on mental health (93%), cyberbullying (92%) and the need for effective ways to restrict access to age-inappropriate content (92%).

Member states are starting to take action and responding with measures such as age limits and verification systems.

Source: Children should be at least 16 to access social media, say MEPs | News | European Parliament

Expect to see mandatory surveillance on social media (whatever they define that to be) soon, as it is clearly “risky”.

The problem is real, but age verification is not the way to solve the problem. Rather, it will make it much, much worse as well as adding new problems entirely.

See also: https://www.linkielist.com/?s=age+verification&submit=Search

See also: European Council decides to implement Mass Surveillance and Age Verification through law protecting children from online abuse

Welcome to a new fascist, thought-controlled Europe, heralded by Denmark.

Chat Control: EU lawmakers finally agree on the “voluntary” scanning of your private chats

[…] The EU Council has finally reached an agreement on the controversial Child Sexual Abuse Regulation (CSAR) after more than three years of failed attempts.

Nicknamed Chat Control by its critics, the agreement has kept cryptographers, technologists, encrypted service providers, and privacy experts alike in turmoil since its inception.

Presidency after presidency, the bill has taken many shapes. But its most controversial feature is an obligation for all messaging service providers operating in the EU – including those using end-to-end encryption – to scan their users’ private chats on the lookout for child sexual abuse material (CSAM).

At the beginning of the month, the Danish Presidency decided to change its approach with a new compromise text that makes the chat scanning voluntary, instead. That turned out to be a winning move, with the proposal managing to reach an agreement in the Council on Wednesday, November 26, 2025.

Privacy experts are unlikely to celebrate, though. The decision came a few days after a group of scientists wrote yet another open letter warning that the latest text still “brings high risks to society.” That’s after other privacy experts deemed the new proposal a “political deception” rather than an actual fix.

The EU Council is now preparing to start negotiations with the European Parliament, hoping to agree on the final terms of the regulation.

What we know about the Council agreement

As per the EU Council announcement, the new law imposes a series of obligations on digital companies. Under the new rules, online service providers will be required to assess how their platforms could be misused and, based on the results, may need to “implement mitigating measures to counter that risk,” the Council notes.

Source: Chat Control: EU lawmakers finally agree on the voluntary scanning of your private chats | TechRadar

A “risk mitigation obligation” can be used to justify anything and to oblige spying through whatever services the EU says pose a “risk”.

Considering the whole proposal was shot down several times in the past years and even in the past month, using a back-door rush to push this through is not how a democracy is supposed to function at all. And this is how fascism grips its iron claws. What is going on in Denmark?

European Council decides to implement Mass Surveillance and Age Verification through law protecting children from online abuse

[…]

Under the new rules, online service providers will be required to assess the risk that their services could be misused for the dissemination of child sexual abuse material or for the solicitation of children. On the basis of this assessment, they will have to implement mitigating measures to counter that risk. Such measures could include making available tools that enable users to report online child sexual abuse, to control what content about them is shared with others and to put in place default privacy settings for children.

Member states will designate national authorities (‘coordinating and other competent authorities’) responsible for assessing these risk assessments and mitigating measures, with the possibility of obliging providers to carry out mitigating measures.

[…]

The Council also wants to make permanent a currently temporary measure that allows companies to – voluntarily – scan their services for child sexual abuse. At present, providers of messaging services, for instance, may voluntarily check content shared on their platforms for online child sexual abuse material.

[Note here: if it is deemed “risky” then the voluntary part is scrubbed and it becomes mandatory. Anything can be called “risky” very easily (just look at the data slurping that goes on in Terms of Services through the text “improving our product”).]

The new law provides for the setting up of a new EU agency, the EU Centre on Child Sexual Abuse, to support the implementation of the regulation.

The EU Centre will assess and process the information supplied by the online providers about child sexual abuse material identified on services, and will create, maintain and operate a database for reports submitted to it by providers. It will further support the national authorities in assessing the risk that services could be used for spreading child sexual abuse material.

The Centre is also responsible for sharing companies’ information with Europol and national law enforcement bodies. Furthermore, it will establish a database of child sexual abuse indicators, which companies can use for their voluntary activities.
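The Council text doesn’t say how such an indicator database would be used, but in existing voluntary-scanning schemes it is essentially a list of content hashes that providers match uploads against. A minimal sketch, assuming plain exact hashing for simplicity (real deployments use perceptual hashes such as PhotoDNA, which also match near-duplicates; the function names here are hypothetical):

```python
import hashlib

def build_indicator_db(known_samples: list[bytes]) -> set[str]:
    """Build the 'indicator database': a set of hex digests of known material."""
    return {hashlib.sha256(s).hexdigest() for s in known_samples}

def matches_indicator(upload: bytes, indicators: set[str]) -> bool:
    """Flag an upload whose digest appears in the indicator list."""
    return hashlib.sha256(upload).hexdigest() in indicators

indicators = build_indicator_db([b"known-bad-sample"])
print(matches_indicator(b"known-bad-sample", indicators))   # True: exact copy matches
print(matches_indicator(b"known-bad-sample!", indicators))  # False: any change evades an exact hash
```

The sketch also makes the core objection obvious: matching can only happen where the provider can see the content, which is why extending such schemes to end-to-end encrypted services implies client-side scanning.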

Source: Child sexual abuse: Council reaches position on law protecting children from online abuse – Consilium

The article does not mention how you can find out whether someone is a child: that is age verification. Which comes with huge rafts of problems, such as censorship (there goes the LGBTQ crowd!), hacks (like the Discord breach) in which the government IDs used to verify ages are stolen, and of course the ways people find to circumvent age verification (VPNs, which increase internet traffic, or meme pictures of Donald Trump), which make them behave in a more unpredictable way, thus harming the kids this is supposed to protect.

Of course, this law has been shot down several times in the past 3 years by the EU, but that didn’t stop Denmark from finding a way to implement it nonetheless in a back door shotgun kind of way.

Age Verification, Estimation, Assurance, Oh My! A Guide To The Terminology

If you’ve been following the wave of age-gating laws sweeping across the country and the globe, you’ve probably noticed that lawmakers, tech companies, and advocates all seem to be using different terms for what sounds like the same thing. Age verification, age assurance, age estimation, age gating—they get thrown around interchangeably, but they technically mean different things. And those differences matter a lot when we’re talking about your rights, your privacy, your data, and who gets to access information online.

[click the source link below to read the different definitions – ed]

Why This Confusion Matters

Politicians and tech companies love using these terms interchangeably because it obscures what they’re actually proposing. A law that requires “age assurance” sounds reasonable and moderate. But if that law defines age assurance as requiring government ID verification, it’s not moderate at all—it’s mass surveillance. Similarly, when Instagram says it’s using “age estimation” to protect teens, that sounds privacy-friendly. But when their estimation fails and forces you to upload your driver’s license instead, the privacy promise evaporates.

Here’s the uncomfortable truth: most lawmakers writing these bills have no idea how any of this technology actually works. They don’t know that age estimation systems routinely fail for people of color, trans individuals, and people with disabilities. They don’t know that verification systems have error rates. They don’t even seem to understand that the terms they’re using mean different things. The fact that their terminology is all over the place—using “age assurance,” “age verification,” and “age estimation” interchangeably—makes this ignorance painfully clear, and leaves the onus on platforms to choose whichever option best insulates them from liability.

Language matters because it shapes how we think about these systems. “Assurance” sounds gentle. “Verification” sounds official. “Estimation” sounds technical and impersonal, and also admits its inherent imprecision. But they all involve collecting your data and create a metaphysical age gate to the internet. The terminology is deliberately confusing, but the stakes are clear: it’s your privacy, your data, and your ability to access the internet without constant identity checks. Don’t let fuzzy language disguise what these systems really do.

Republished from EFF’s Deeplinks blog.

Source: Age Verification, Estimation, Assurance, Oh My! A Guide To The Terminology | Techdirt

The Danes manage to bypass democracy to implement mass EU surveillance, say it is “voluntary”

The EU states agree on a common position on chat control. Internet services should be allowed to read communication voluntarily, but will not be obliged [*cough – see bold and end of document: Ed*] to do so. We publish the classified negotiating protocol and bill. After the formal decision, the trilogue negotiations begin.

18.11.2025 at 14:03 – Andre Meister – in Surveillance

[Image: Danish Minister of Justice Hummelgaard at a lectern, flags behind him. – CC-BY-NC-ND 4.0 Danish Presidency]

The EU states have agreed on a common position on chat control. We publish the bill.

Last week, the Council working group discussed the law. We are once again publishing the classified minutes of the meeting.

Tomorrow, the Permanent Representatives want to officially decide on the position.

Update 19.10.: A Council spokesperson tells us, “The agenda item has been postponed until next week.”

Three years of dispute

For three and a half years, the EU institutions have been arguing over chat control. The Commission intends to oblige internet services to search their users’ content, without cause, for evidence of criminal offences and to report suspicions to the authorities.

Parliament calls this mass surveillance and demands that only unencrypted content from suspects be scanned.

A majority of EU countries want mandatory chat control. However, a blocking minority rejects this. Now the Council has agreed on a compromise: internet services are not required to carry out chat control, but may do so voluntarily.

Absolute red lines

The Danish Presidency wants to bring the draft law through the Council “as soon as possible” so that the trilogue negotiations can be started in a timely manner. The feedback from the states should be limited to “absolute red lines”.

The majority of states “supported the compromise proposal.” At least 15 spoke out in favour, including Germany and France.

Germany “welcomed both the deletion of the mandatory measures and the permanent anchoring of voluntary measures.”

Italy is also skeptical of voluntary chat control. “We fear that the instrument could also be extended to other crimes, so we have difficulty supporting the proposal.” Politicians have already called for chat control to be extended to other content.

Absolute minimum consensus

Other states called the compromise “an absolute minimum consensus.” They “actually wanted more – especially in the sense of commitments.” Some states “showed themselves clearly disappointed by the cancellations made.”

Spain, in particular, “still considered mandatory measures to be necessary, unfortunately, a comprehensive agreement on this was not possible.” Hungary, too, “saw volunteerism as the sole concept as too little.”

Spain, Hungary and Bulgaria proposed “an obligation for providers to scan at least in unencrypted areas.” The Danish Presidency “described the proposal as ambitious, but did not take it up to avoid further discussion.”

Denmark explicitly pointed to the review clause. Thus, “the possibility of detection orders is kept open at a later date.” Hungary stressed that “this possibility must also be used.”

No obligation

The Danish Presidency had publicly announced that the chat control should not be mandatory, but voluntary.

However, the formulated compromise proposal was contradictory. The Presidency had deleted the article on mandatory chat control, yet another article said services should also carry out voluntary measures.

Several states have asked whether these formulations “could lead to a de facto obligation.” The Legal Services agreed: “The wording can be interpreted in both directions.” The Presidency of the Council “clarified that the text only had a risk mitigation obligation, but not a commitment to detection.”

The day after the meeting, the Presidency of the Council sent out the likely final draft law of the Council. It states explicitly: “No provision of this Regulation shall be interpreted as imposing detection obligations on providers.”

Damage and abuse

Mandatory chat control is not the only contested issue in the planned law. Voluntary chat control is also controversial; the European Commission has been unable to prove its proportionality. Many oppose voluntary chat control, including the EU Commission, the European Data Protection Supervisor and the German Data Protection Supervisor.

A number of scientists are critical of the compromise proposal. They do not consider voluntary chat control proportionate: “Their benefit is not proven, while the potential for harm and abuse is enormous.”

The law also calls for mandatory age checks. The scientists criticize that age checks “bring with it an inherent and disproportionate risk of serious data breaches and discrimination without guaranteeing their effectiveness.” The Federal Data Protection Officer also fears a “large-scale abolition of anonymity on the Internet.”

Now comes the trilogue

The EU countries will not discuss these points further. The Danish Presidency “reaffirmed its commitment to the compromise proposal without the Spanish proposals.”

The Permanent Representatives of the EU States will meet next week. In December, the justice and interior ministers meet. These two bodies are to adopt the bill as the official position of the Council.

This is followed by the trilogue, in which the Commission, Parliament and the Council negotiate a compromise between their three separate positions.

[…]

A “risk mitigation obligation” can be used to justify anything and to oblige spying through whatever services the EU says pose a “risk”.

Source: Translated from EU states agree on voluntary chat control

Considering the whole proposal was shot down several times in the past years and even in the past month, using a back-door rush to push this through is not how a democracy is supposed to function at all. And this is how fascism grips its iron claws. What is going on in Denmark?

For more information on the history of Chat Control click here

DOGE Is Officially Dead, all government data still in Musk’s hands though

After months of controversy, Elon Musk and Donald Trump’s failed passion project to cut costs across the federal government is officially dead, ahead of schedule.

Earlier this month, Office of Personnel Management director Scott Kupor told Reuters that the Department of Government Efficiency “doesn’t exist.”

Even though there are eight more months left on its mandate, DOGE is no longer a “centralized entity,” according to Kupor. Instead, the Office of Personnel Management, an existing independent agency that has been overseeing the federal workforce for decades, will be taking over most of DOGE’s functions.

[…]

DOGE had a short but eventful life. Trump announced the creation of the “agency” immediately after his election last year. The cuts began shortly after Trump took office, with Musk taking a figurative and literal chainsaw to the federal government. With DOGE, Musk completely gutted the Department of Education, laid off a good chunk of the government’s cybersecurity officials, caused an estimated 638,000 deaths around the world through funding cuts to USAID, and stripped more than a quarter of the Internal Revenue Service’s workforce (most of these positions are now reportedly being filled by AI agents). Several DOGE staffers have also since ended up practically taking over other federal agencies like the Department of Health and Human Services and the Social Security Administration.

All that carnage ended up being for practically nothing. A Politico analysis from earlier this year claimed that even though DOGE purported to have saved Americans billions of dollars, only a fraction of that has been realized. Another report, this time by the Senate Permanent Subcommittee on Investigations, said that DOGE ended up spending more money than it saved while trying to downsize the government. Musk Watch, a tracker set up by veteran independent journalists, has been able to verify $16.3 billion in federal cuts, significantly less than the $165 billion that DOGE has claimed in the past, and a drop in the bucket compared to DOGE’s original claim that it would eliminate $2 trillion in spending.

[…]

Source: DOGE Is Officially Dead

Why is nobody talking about the data grab that Musk has performed?

European Commission, in the pocket of US Big Tech, moves to massively roll back digital protections

The European Commission has been accused of “a massive rollback” of the EU’s digital rules after announcing proposals to delay central parts of the Artificial Intelligence Act and water down its landmark data protection regulation.

If agreed, the changes would make it easier for tech firms to use personal data to train AI models without asking for consent, and try to end “cookie banner fatigue” by reducing the number of times internet users have to give their permission to be tracked on the internet.

The commission also confirmed the intention to delay the introduction of central parts of the AI Act, which came into force in August 2024 and does not yet fully apply to companies.

Companies making high-risk AI systems, namely those posing risks to health, safety or fundamental rights, such as those used in exam scoring or surgery, would get up to 18 months longer to comply with the rules.

The plans were part of the commission’s “digital omnibus”, which tries to streamline tech rules including GDPR, the AI Act, the ePrivacy directive and the Data Act.

After a long period of rule-making, the EU agenda has shifted since the former Italian prime minister Mario Draghi warned in a report last autumn that Europe had fallen behind the US and China in innovation and was weak in the emerging technologies that would drive future growth, such as AI. The EU has also come under heavy pressure from the Trump administration to rein in digital laws.

[…]

They are part of the bloc’s wider drive for “simplification”, with plans under way to scale back regulation on the environment, company reporting on supply chains and agriculture. Like these other proposals, the digital omnibus will need to be approved by EU ministers and the European parliament.

European Digital Rights (EDRi), a pan-European network of NGOs, described the plans as “a major rollback of EU digital protections” that risked dismantling “the very foundations of human rights and tech policy in the EU”.

In particular, it said that changes to GDPR would allow “the unchecked use of people’s most intimate data for training AI systems” and that a wide range of exemptions proposed to online privacy rules would mean businesses would be able to read data on phones and browsers without asking.

European business groups welcomed the proposals but said they did not go far enough. A representative from the Computer and Communications Industry Association, whose members include Amazon, Apple, Google and Meta, said: “Efforts to simplify digital and tech rules cannot stop here.” The CCIA urged “a more ambitious, all-encompassing review of the EU’s entire digital rulebook”.

Critics of the shake-up included the EU’s former commissioner for enterprise, Thierry Breton, who wrote in the Guardian that Europe should resist attempts to unravel its digital rulebook “under the pretext of simplification or remedying an alleged ‘anti-innovation’ bias. No one is fooled over the transatlantic origin of these attempts.”

[…]

Source: European Commission accused of ‘massive rollback’ of digital protections | European Commission | The Guardian

Yes, the simplification change allowing cookie consent to be stored in the browser is a good one. But allowing AI systems to run amok without proper oversight, especially in high-risk domains, and allowing large companies to do so without rules only benefits the players that can afford to operate in these domains: the far right, by introducing more mass-surveillance tools, and big (US) tech.

EU proposes doing away with constant cookie requests by setting the “No” in your browser settings

People will no longer be bombarded by constant requests to accept or reject “cookies” when browsing the internet, under proposed changes to the European Union’s strict data privacy laws.

The pop-up prompts asking internet users to consent to cookies when they visit a website are widely seen as a nuisance, undermining the original privacy intentions of the digital rules.

[I don’t think this undermines anything – cookie consent got rid of a LOT of spying, and everyone now just automatically clicks “No” or uses add-ons to do it (well, if you are using Firefox as a browser). The original purpose – stopping companies from spying – has been achieved.]

Brussels officials have now tabled changes that would allow people to accept or reject cookies for a six-month period, and potentially set their internet browser to automatically opt in or out, to avoid being repeatedly asked whether they consent to websites remembering information about their past visits.
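The proposal doesn’t specify a mechanism, but a browser-stored “No” would plausibly work like the existing Global Privacy Control signal, where the browser attaches a `Sec-GPC: 1` header to every request and sites are expected to honor it. A minimal sketch (only the header name comes from the GPC proposal; the function and policy logic are hypothetical):

```python
def tracking_allowed(headers: dict[str, str]) -> bool:
    """Treat an explicit opt-out signal stored in the browser as binding."""
    # Sec-GPC: 1 is the Global Privacy Control opt-out header.
    return headers.get("Sec-GPC") != "1"

print(tracking_allowed({"Sec-GPC": "1"}))  # False: browser-level "No" replaces per-site banners
print(tracking_allowed({}))                # True: no signal sent, site must still ask
```

The point of such a design is that the choice is made once, in the browser, instead of once per website visit.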

Cookies allow websites to keep track of a user’s previous activity, allowing sites to pull up items added to an online shopping cart that were not purchased, or remember whether someone had logged in to an account on the site before, as well as target advertisements.
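For reference, the mechanism being regulated is tiny: a site sets a cookie in a response header, and the browser echoes it back on every later request, which is how a cart or login is remembered. A sketch using Python’s standard `http.cookies` module (the cart example is illustrative):

```python
from http.cookies import SimpleCookie

# Server side: remember a cart id via a Set-Cookie header.
cookie = SimpleCookie()
cookie["cart_id"] = "abc123"
cookie["cart_id"]["max-age"] = 60 * 60 * 24  # keep for one day
header = cookie["cart_id"].OutputString()     # e.g. "cart_id=abc123; Max-Age=86400"

# On the next request the browser sends the cookie back; the site restores the cart.
returned = SimpleCookie()
returned.load("cart_id=abc123")
print(returned["cart_id"].value)  # abc123
```

The same echo-back mechanism is what lets third-party cookies follow a user across sites for ad targeting, which is why consent was required in the first place.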

[…]

Source: EU proposes doing away with constant internet ‘cookies’ requests – The Irish Times

Tokyo Court Finds Cloudflare Liable for All Content It Enables Access to, Expects Verification of All Users of the Service, and Says It Should Follow Lawyers’ Requests without Court Verdicts in Manga Piracy Lawsuit

Japanese manga publishers have declared victory over Cloudflare in a long-running copyright infringement liability dispute. Kadokawa, Kodansha, Shueisha and Shogakukan say that Cloudflare’s refusal to stop manga piracy sites meant they were left with no other choice but to take legal action. The Tokyo District Court rendered its decision this morning, finding Cloudflare liable for damages after it failed to sufficiently prevent piracy.

[…]

After a wait of more than three and a half years, the Tokyo District Court rendered its decision this morning. In a statement provided to TorrentFreak by the publishers, they declare “Victory Against Cloudflare” after the Court determined that Cloudflare is indeed liable for the pirate sites’ activities.

In a statement provided to TorrentFreak, the publishers explain that they alerted Cloudflare to the massive scale of the infringement, involving over 4,000 works and 300 million monthly visits, but their requests to stop distribution were ignored.

“We requested that the company take measures such as stopping the distribution of pirated content from servers under its management. However, Cloudflare continued to provide services to the manga piracy sites even after receiving notices from the plaintiffs,” the group says.

The publishers add that Cloudflare continued to provide services even after receiving information disclosure orders from U.S. courts, leaving them with “no choice but to file this lawsuit.”

Factors Considered in Determining Liability

Decisions in favor of Cloudflare in the United States have proven valuable over the past several years. Yet while the Tokyo District Court considered many of the same key issues, various factors led to a finding of liability instead, the publishers note.

“The judgment recognized that Cloudflare’s failure to take timely and appropriate action despite receiving infringement notices from the plaintiffs, and its negligent continuation of pirated content distribution, constituted aiding and abetting copyright infringement, and that Cloudflare bears liability for damages to the plaintiffs,” they write.

“The judgment, in that regard, attached importance to the fact that Cloudflare, without conducting any identity verification procedures, had enabled a massive manga piracy site to operate ‘under circumstances where strong anonymity was secured,’ as a basis for recognizing the company’s liability.”

[…]

According to Japanese media, Cloudflare plans to appeal the verdict, which was expected. In comments to the USTR last month, Cloudflare referred to a long-running dispute in Japan with the potential to negatively affect future business.

“One particular dispute reflects years of effort by Japan’s government and its publishing industry to impose additional obligations on intermediaries like CDNs,” the company’s submission reads (pdf).

“A fully adjudicated ruling that finds CDNs liable for monetary damages for infringing material would set a dangerous global precedent and necessitate U.S. CDN providers to limit the provision of global services to avoid liability, severely restricting market growth and expansion into Asian Pacific markets.”

Whether that heralds Cloudflare’s exit from the region is unclear.

[…]

Source: Tokyo Court Finds Cloudflare Liable For Manga Piracy in Long-Running Lawsuit * TorrentFreak

How Trademark Ruined Colorado-Style Pizza

You’ve heard of New York style, Chicago deep dish, Detroit square pans. But Colorado-style pizza? Probably not. And there’s a perfectly ridiculous reason why this regional style never spread beyond a handful of restaurants in the Rocky Mountains: one guy trademarked it and scared everyone else away from making it.

This story comes via a fascinating Sporkful podcast episode where reporter Paul Karolyi spent years investigating why Colorado-style pizza remains trapped in obscurity while other regional styles became national phenomena.

The whole episode is worth listening to for the detective work alone, but the trademark angle reveals something important about how intellectual property thinking can strangle cultural movements in their cradle.

Here’s the thing about pizza “styles”: they become styles precisely because they spread. New York, Chicago, Detroit, New Haven—these aren’t just individual restaurant concepts, they’re cultural phenomena adopted and adapted by hundreds of restaurants. That widespread adoption creates the network effects that make a “style” valuable: customers seek it out, restaurants compete to perfect it, food writers chronicle its evolution.

Colorado-style pizza never got that chance. When Karolyi dug into why, he discovered that Beau Jo’s—the restaurant credited with inventing the style—had locked it up legally. When he asked the owner’s daughter if other restaurants were making Colorado-style pizza, her response was telling:

We’re um a trademark, so they cannot.

Really?

Yes.

Beau owns a trademark for Colorado style pizza.

Yep.

When Karolyi finally tracked down the actual owner, Chip (after years of trying, which is its own fascinating subplot), he expected to hear about some grand strategic vision behind the trademark. Instead, he got a masterclass in reflexive IP hoarding:

Cuz it’s different and nobody else is doing that. So, why not do it Colorado style? I mean, there’s Chicago style and there’s Pittsburgh style and Detroit and everything else. Um, and we were doing something that was what was definitely different and um um licensing attorney said, “Yeah, we can do it” and we were able to.

That’s it. No business plan. No licensing strategy. Just “some lawyer said we can do it” so they did. This is the IP-industrial complex in microcosm: lawyers selling trademark applications because they can, not because they should.

I pressed my case to Chip that abandoning the trademark so others could also use it could actually be good for his business.

“If more places made Colorado style pizza, the style itself would become more famous, which would make more people come to Beau Jo’s to try the original. If imitation is the highest form of flattery, like everyone would know that Beau Jo was the originator. Like, do you ever worry or maybe do you think that the trademark has possibly hindered the spread of this style of pizza that you created that you should be getting credit for?”

“Never thought about it.”

“Well, what do you think about it now?”

“I don’t know. I have to think about that. It’s an interesting thought. I’ve never thought about it. I’m going to look into it. I’m going to look into it. I’m going to talk to some people and um I’m not totally opposed to it. I don’t know that it would be a good idea for us, but I’m willing to look at it.”

A few weeks later, Karolyi followed up with Chip. Predictably, the business advisors had circled the wagons. They “unanimously” told him not to give up the trademark—because of course they did. These are the same people who profit from maintaining artificial scarcity, even when it demonstrably hurts the very thing they’re supposedly protecting.

And so Colorado-style pizza remains trapped in its legal cage, known only to a handful of tourists who stumble across Beau Jo’s locations. A culinary innovation that could have sparked a movement instead became a cautionary tale about how IP maximalism kills the things it claims to protect.

This case perfectly illustrates the perverse incentives of modern IP thinking. We’ve created an entire industry of lawyers and consultants whose job is to convince business owners to “protect everything” on the off chance they might license it later. Never mind that this protection often destroys the very value they’re trying to capture.

The trademark didn’t just fail to help Beau Jo’s—it actively harmed them. As Karolyi documents in the podcast, the legal lockup has demonstrably scared off other restaurateurs from experimenting with Colorado-style pizza, ensuring the “style” remains a curiosity rather than a movement. Fewer competitors means less innovation, less media attention, and fewer customers seeking out “the original.” It’s a masterclass in how to turn potential network effects into network defects.

Compare this to the sriracha success story. David Tran of Huy Fong Foods deliberately avoided trademarking “sriracha” early on, allowing dozens of competitors to enter the market. The result? Sriracha became a cultural phenomenon, and Huy Fong’s distinctive rooster bottle became the most recognizable brand in a category they helped create. Even as IP lawyers kept circling, Tran understood what Chip apparently doesn’t:

“Everyone wants to jump in now,” said Tran, 70. “We have lawyers come and say ‘I can represent you and sue’ and I say ‘No. Let them do it.’” Tran is so proud of the condiment’s popularity that he maintains a daily ritual of searching the Internet for the latest Sriracha spinoff.

Sometimes the best way to protect your creation is to let it go. But decades of IP maximalist indoctrination have made this counterintuitive wisdom almost impossible to hear. Even when presented with a clear roadmap for how abandoning the trademark could grow his business, Chip couldn’t break free from the sunk-cost fallacy and his advisors’ self-interested counsel.

The real tragedy isn’t just that Colorado-style pizza remains obscure. It’s that this story plays out thousands of times across industries, with creators choosing artificial scarcity over organic growth, protection over proliferation. Every time someone trademarks a taco style or patents an obvious business method, they’re making the same mistake Chip made: confusing ownership with value creation.

Source: How Trademark Ruined Colorado-Style Pizza | Techdirt

Summarising a Book is now Potentially Copyright Infringing

A federal judge just ruled that computer-generated summaries of novels are “very likely infringing,” which would effectively outlaw many book reports. That seems like a problem.

The Authors Guild has one of the many lawsuits against OpenAI, and law professor Matthew Sag has the details on a ruling in that case that, if left in place, could mean that any attempt to merely summarize any copyright-covered work is now possibly infringing. You can read the ruling itself here.

This isn’t just about AI—it’s about fundamentally redefining what copyright protects. And once again, something that should be perfectly fine is being treated as an evil that must be punished, all because some new machine did it.

But, I guess elementary school kids can rejoice that they now have an excuse not to do a book report.

[…]

Sag highlights how it could have a much more dangerous impact beyond getting kids out of their homework: making much of Wikipedia infringing.

A new ruling in Authors Guild v. OpenAI has major implications for copyright law, well beyond artificial intelligence. On October 27, 2025, Judge Sidney Stein of the Southern District of New York denied OpenAI’s motion to dismiss claims that ChatGPT outputs infringed the rights of authors such as George R.R. Martin and David Baldacci. The opinion suggests that short summaries of popular works of fiction are very likely infringing (unless fair use comes to the rescue).

This is a fundamental assault on the idea/expression distinction as applied to works of fiction. It places thousands of Wikipedia entries in the copyright crosshairs and suggests that any kind of summary or analysis of a work of fiction is presumptively infringing.

Short summaries of copyright-covered works should not impact copyright in any way. Yes, as Sag points out, “fair use” can come to the rescue in some cases, but the old saw remains that “fair use is just the right to hire a lawyer.” And when the process is the punishment, saying that fair use will save you in these cases is of little comfort. Getting a ruling on fair use will run you hundreds of thousands of dollars at least.

Copyright is supposed to stop the outright copying of the copyright-protected expression. A summary is not that. It should not implicate the copyright in any form, and it shouldn’t require fair use to come to the rescue.

Sag lays out the details of what happened in this case:

Judge Stein then went on to evaluate one of the more detailed ChatGPT-generated summaries relating to A Game of Thrones, the 694-page novel by George R. R. Martin which eventually became the famous HBO series of the same name. Even though this was only a motion to dismiss, where the cards are stacked against the defendant, I was surprised by how easily the judge could conclude that:

“A more discerning observer could easily conclude that this detailed summary is substantially similar to Martin’s original work, including because the summary conveys the overall tone and feel of the original work by parroting the plot, characters, and themes of the original.”

The judge described the ChatGPT summaries as:

“most certainly attempts at abridgment or condensation of some of the central copyrightable elements of the original works such as setting, plot, and characters”

He saw them as:

“conceptually similar to—although admittedly less detailed than—the plot summaries in Twin Peaks and in Penguin Random House LLC v. Colting, where the district court found that works that summarized in detail the plot, characters, and themes of original works were substantially similar to the original works.” (emphasis added).

To say that the less than 580-word GPT summary of A Game of Thrones is “less detailed” than the 128-page Welcome to Twin Peaks Guide in the Twin Peaks case, or the various children’s books based on famous works of literature in the Colting case, is a bit of an understatement.

[…]

As Sag makes clear, there are few people out there who would legitimately think that the Wikipedia summary should be deemed infringing, which is why this ruling is notable. It again highlights how lots of people, including the media, lawmakers, and now (apparently) judges, get so distracted by the “but this new machine is bad!” in looking at LLM technology that they seem to completely lose the plot.

And that’s dangerous for the future of speech in general. We shouldn’t be tossing out fundamental key concepts in speech (“you can summarize a work of art without fear”) just because some new kind of summarization tool exists.

Source: Book Reports Potentially Copyright Infringing, Thanks To Court Attacks On LLMs | Techdirt

Switzerland plans surveillance worse than US

In Switzerland, a country known for its love for secrecy, particularly when it comes to banking, the tides have turned: An update to the VÜPF surveillance law directly targets privacy and anonymity services such as VPNs as well as encrypted chat apps and email providers. Right now the law is still under discussion in the Swiss Bundesrat.

[…]

While Swiss privacy has been overhyped, legislative rules in Switzerland are currently decent and comparable to German data protection laws. This update to the VÜPF, which could come into force by 2026, would change data protection legislation in Switzerland dramatically.

Why the update is dangerous

If the law passes in its current form,

  • Swiss email and VPN providers with just 5,000 users would be forced to log IP addresses and retain the data for six months – while data retention in Germany is illegal for email providers.
  • An ID or driver’s license, and possibly a phone number, would be required to register for various services – rendering anonymous usage impossible.
  • Data would have to be delivered upon request in plain text, meaning providers must be able to decrypt user data on their end (except for end-to-end encrypted messages exchanged between users).

What is more, the law is not being introduced by or via Parliament. Instead, the Swiss government – the Federal Council and the Federal Department of Justice and Police (FDJP) – wants to massively expand internet surveillance by updating the VÜPF, without Parliament having a say. This comes as a shock in a country proud of its direct democracy, with regular popular votes on all kinds of laws. However, in 2016 the Swiss actually voted for more surveillance, so direct democracy might not help here.

History of surveillance in Switzerland

In 2016, the Swiss Parliament updated its data retention law, the BÜPF, to enforce data retention for all communication data (post, email, phone, text messages, IP addresses). In 2018, the revision of the VÜPF translated this into administrative obligations for ISPs, email providers, and others, with exceptions depending on the size of the provider and whether it was classified as a telecommunications service provider or a communications service.

As a result, services such as Threema and ProtonMail were exempt from some of the obligations that providers such as Swisscom, Salt, and Sunrise had to comply with – even though the Swiss government would have liked to classify them as quasi network operators and telecommunications providers as well. The currently discussed update to the VÜPF seems to directly target smaller providers as well as providers of anonymous services and VPNs.

The Swiss surveillance state has always sought a lot of power, and has had to be reined in by the Federal Supreme Court in the past to put surveillance on a sound legal basis.

But now, article 50a of the VÜPF reform mandates that providers must be able to remove “the encryption provided by them or on their behalf” – basically asking for backdoor access to encrypted data. However, end-to-end encrypted messages exchanged between users do not fall under this decryption obligation. Yet even Swiss email provider Proton Mail told Der Bund that “Swiss surveillance would be much stricter than in the USA and the EU, and Switzerland would lose its competitiveness as a business location.”

Because of this upcoming legal change in Switzerland, Proton has started to move its servers from Switzerland to the EU.

Source: Switzerland plans surveillance worse than US | Tuta

Roblox begins asking tens of millions of children to send it a selfie, for “age verification”.

Roblox is starting to roll out the mandatory age checks that will require all of its users to submit an ID or scan their face in order to access the platform’s chat features. The updated policy, which the company announced earlier this year, will be enforced first in Australia, New Zealand and the Netherlands and will expand to all other markets by early next year.

The company also detailed a new “age-based chat” system, which will limit users’ ability to interact with people outside of their age group. After verifying or estimating a user’s age, Roblox will assign them to an age group ranging from 9 years and younger to 21 years and older (there are six total age groups). Teens and children will then be prevented from connecting with people who aren’t in or close to their estimated age group in in-game chats.
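The bucketing described above can be sketched as a simple group-and-adjacency check. This is purely a hypothetical illustration: the exact age cutoffs, group labels, and the "same or adjacent group" rule below are my assumptions, not Roblox's published implementation.

```python
# Illustrative sketch of age-bucketed chat gating. Cutoffs and the
# adjacency rule are invented for this example, not Roblox's actual logic.
AGE_GROUPS = ["9 and under", "10-12", "13-15", "16-17", "18-20", "21+"]

def age_group(age: int) -> int:
    """Map a verified or estimated age to one of six group indices."""
    if age <= 9:
        return 0
    if age <= 12:
        return 1
    if age <= 15:
        return 2
    if age <= 17:
        return 3
    if age <= 20:
        return 4
    return 5

def can_chat(age_a: int, age_b: int) -> bool:
    """Assumed rule: allow chat only within the same or an adjacent group."""
    return abs(age_group(age_a) - age_group(age_b)) <= 1

print(can_chat(11, 13))  # adjacent groups
print(can_chat(9, 21))   # far-apart groups
```

However the real cutoffs are drawn, the point of such a scheme is that a misestimated age silently moves a child into the wrong bucket, which is why the accuracy of the underlying "age estimation" matters so much.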

Unlike most social media apps, which have a minimum age of 13, Roblox permits much younger children to use its platform. Since most children and many teens don’t have IDs, the company uses “age estimation” tech provided by identity company Persona. The checks, which use video selfies, are conducted within Roblox’s app, and the company says that images of users’ faces are immediately deleted after completing the process.

[…]

Source: Roblox begins asking tens of millions of children to verify their age with a selfie

Deleted by Roblox itself, but also by Persona? Pretty scary: 1. having a database of all these kiddies’ faces and their online personas, ways of talking and typing; and 2. even if the data is deleted, it could be intercepted as it is sent to Roblox and on to the verifier.

Google is collecting troves of data from downgraded Nest thermostats

Google officially turned off remote control functionality for early Nest Learning Thermostats last month, but it hasn’t stopped collecting a stream of data from these downgraded devices. After digging into the backend, security researcher Cody Kociemba found that the first- and second-generation Nest Learning Thermostats are still sending Google information about manual temperature changes, whether a person is present in the room, if sunlight is hitting the device, and more.

[…]

After cloning Google’s API to create this custom software, he started receiving a trove of logs from customer devices, which he turned off. “On these devices, while they [Google] turned off access to remotely control them, they did leave in the ability for the devices to upload logs. And the logs are pretty extensive,” Kociemba tells The Verge.
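The technique Kociemba describes, standing up a look-alike endpoint and recording what the devices upload to it, can be illustrated with a minimal sketch. Everything here is invented for illustration (the endpoint path, the payload fields); it is not Google's actual Nest log API.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

received_logs = []  # everything the fake "cloud" endpoint has captured

class LogUploadHandler(BaseHTTPRequestHandler):
    """Accepts POSTed device logs, mimicking a cloud log-upload endpoint."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        received_logs.append(json.loads(self.rfile.read(length)))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request console noise

server = HTTPServer(("127.0.0.1", 0), LogUploadHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A simulated "device" uploads a sensor log, the way the downgraded
# thermostats reportedly keep doing. Field names are hypothetical.
payload = json.dumps({
    "device_id": "thermostat-01",
    "events": [
        {"type": "manual_setpoint_change", "value_c": 21.5},
        {"type": "occupancy", "present": True},
    ],
}).encode()
req = Request(f"http://127.0.0.1:{port}/upload", data=payload,
              headers={"Content-Type": "application/json"})
with urlopen(req) as resp:
    assert resp.status == 200

server.shutdown()
print(received_logs[0]["device_id"])
```

The one-way nature of the traffic is the key point: the device keeps POSTing its logs regardless of whether the server ever sends anything useful back.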

[…]

Google is still getting all the information collected by Nest Learning Thermostats, including data measured by their sensors, such as temperature, humidity, ambient light, and motion. “I was under the impression that the Google connection would be severed along with the remote functionality, however that connection is not severed, and instead is a one-way street,” Kociemba says.

[…]

Source: Google is collecting troves of data from downgraded Nest thermostats | The Verge

Unremovable Spyware on Samsung Devices Comes Pre-installed on Galaxy Series Devices

The software in question, AppCloud, developed by the mobile analytics firm ironSource, has been embedded in devices sold primarily in the Middle East and North Africa (MENA) region.

Security researchers and privacy advocates warn that it quietly collects sensitive user data, fueling fears of surveillance in politically volatile areas.

AppCloud tracks users’ locations, app usage patterns, and device information without seeking ongoing consent after initial setup. Even more concerning, attempts to uninstall it often fail due to its deep integration into Samsung’s One UI operating system.

Reports indicate the app reactivates automatically following software updates or factory resets, making it virtually unremovable for average users. This has sparked outrage among consumers in countries such as Egypt, Saudi Arabia, and the UAE, where affordable Galaxy models are popular entry points into Android.

The issue came to light through investigations by SMEX, a Lebanon-based digital rights group focused on MENA privacy. In a recent report, SMEX highlighted how AppCloud’s persistence could enable unauthorized third-party data harvesting, posing significant risks in regions with histories of government overreach.

“This isn’t just bloatware, it’s a surveillance enabler baked into the hardware,” said a SMEX spokesperson. The group called on Samsung to issue a global patch and disclose the full scope of data shared with ironSource.

[…]

Source: Unremovable Spyware on Samsung Devices Comes Pre-installed on Galaxy Series Devices

Russia imposes 24-hour mobile internet blackout for travelers returning home to “guard against drones”. Which don’t need SIM cards. Also just blacks out SIM coverage in certain areas.

Russian telecom operators have begun cutting mobile internet access for 24 hours for citizens returning to the country from abroad, in what officials say is an effort to prevent Ukrainian drones from using domestic SIM cards for navigation.

“When a SIM card enters Russia from abroad, the user has to confirm that it’s being used by a person — not installed in a drone,” the Digital Development Ministry said in a statement earlier this week. Users can restore access sooner by solving a captcha or calling their operator for identification.

Authorities said the temporary blackout is meant to “ensure the safety of Russian citizens” and prevent SIM cards from being embedded in “enemy drones.”

The new rule has led to unexpected outages for residents in border regions, whose phones can automatically connect to foreign carriers. Officials advised users to switch to manual network selection to avoid being cut off.

The so-called “cooling-off period” comes a month after Moscow imposed a similar 24-hour blackout for people entering Russia with foreign SIM cards, again citing the threat of Ukrainian drone warfare.

At the same time, the Kremlin is seeking to expand the powers of its domestic intelligence service, the FSB, allowing it to order shutdowns of mobile and internet access over loosely defined “emerging threats.” The proposed legal changes would give the FSB direct authority over local telecoms.

In several regions, including the western city of Ulyanovsk, officials said mobile internet restrictions would remain in place until the end of the war in Ukraine. Access will be limited “around critical facilities of special importance, not across entire regions.”

[…]

Digital rights groups say many of the blackouts appear arbitrary and politically motivated. They noted that most drones used in the war do not rely on mobile internet connections to operate, suggesting that local officials may be imposing restrictions to signal loyalty to the Kremlin rather than address real security threats.

Source: Russia imposes 24-hour mobile internet blackout for travelers returning home | The Record from Recorded Future News

Denmark rises again, finds another way to try to introduce 100% surveillance state in EU after public backlash stopped the last attempt at chat control. Send emails to your MEPs easily!

Thanks to public pressure, the Danish Presidency has been forced to revise its text, explicitly stating that any detection obligations are voluntary. While much better, the text continues to both (a) effectively outlaw anonymous communication through mandatory age verification; and (b) include planned voluntary mass scanning. The Council is expected to formally adopt its position on Chat Control on the 18th or 19th of November. Trilogue with the European Parliament will commence soon after.

The EU (still) wants to scan your private messages and photos

The “Chat Control” proposal would mandate scanning of all private digital communications, including encrypted messages and photos. This threatens fundamental privacy rights and digital security for all EU citizens.

You Will Be Impacted

Every photo, every message, every file you send will be automatically scanned—without your consent or suspicion. This is not about catching criminals. It is mass surveillance imposed on all 450 million citizens of the European Union.

Source: Fight Chat Control – Protect Digital Privacy in the EU

The site linked will allow you to very easily send an email to your representatives by clicking a few times. Take the time to ensure they understand that people have a voice!

“This is a political deception” − Denmark gives New Chat Control another shot. Mass surveillance for all from behind closed doors.

It’s official, a revised version of the CSAM scanning proposal is back on the EU lawmakers’ table − and is keeping privacy experts worried.

The Law Enforcement Working Party met again this morning (November 12) in the EU Council to discuss what critics have dubbed the Chat Control bill.

This follows a meeting the group held on November 5, and comes as the Danish Presidency put forward a new compromise after withdrawing mandatory chat scanning.

As reported by Netzpolitik, the latest Child Sexual Abuse Regulation (CSAR) proposal was received with broad support during the November 5 meeting, “without any dissenting votes” and with no further changes needed.

The new text, which removes all provisions on detection obligations from the bill and makes CSAM scanning voluntary, seems to be the winning path to finally reaching an agreement after over three years of trying.

Privacy experts and technologists aren’t quite on board, though, with long-standing Chat Control critic and digital rights jurist Patrick Breyer deeming the proposal “a political deception of the highest order.”

Chat Control − what’s changing and what are the risks

As per the latest version of the text, messaging service providers won’t be forced to scan all URLs, pictures, and videos shared by users, but may instead choose to perform voluntary CSAM scanning.

There’s a catch, though. Article 4 will include a possible “mitigation measure” that could be applied to high-risk services to require them to take “all appropriate risk mitigation measures.”

According to Breyer, such a loophole could make the removal of detection obligations “worthless” by negating their voluntary nature. He said: “Even client-side scanning (CSS) on our smartphones could soon become mandatory – the end of secure encryption.”

Breaking encryption – the technology that security software such as VPNs, Signal, and WhatsApp use to secure our private communications – has been the strongest argument against the proposal so far.

Breyer also warns that the new compromise goes further than the discarded proposal, moving from AI-powered monitoring of shared multimedia to the scanning of private chat texts and metadata, too.

“The public is being played for fools,” warns Breyer. “Following loud public protests, several member states, including Germany, the Netherlands, Poland, and Austria, said ‘No’ to indiscriminate Chat Control. Now it’s coming back through the back door.”

Breyer is far from the only one expressing concerns. German encrypted email provider Tuta is also raising the alarm.

“Hummelgaard doesn’t understand that no means no,” the provider writes on X.

To understand the next steps, we now need to wait and see what comes out of today’s meeting.

Source: “This is a political deception” − New Chat Control convinces lawmakers, but not privacy experts yet | TechRadar

Ryanair tries forcing spyware app downloads by eliminating paper boarding passes

Ryanair is trying to force users to download its mobile app by eliminating paper boarding passes, starting on November 12.

As announced in February and subsequently delayed from earlier start dates, Europe’s biggest airline is moving to digital-only boarding passes, meaning customers will no longer be able to print physical ones. In order to access their boarding passes, Ryanair flyers will have to download Ryanair’s app.

“Almost 100 percent of passengers have smartphones, and we want to move everybody onto that smartphone technology,” Ryanair CEO Michael O’Leary said recently on The Independent’s daily travel podcast.

Customers are encouraged to check in online via Ryanair’s website or app before getting to the airport. Those who don’t will have to pay an airport check-in fee.

[…]

The policy change is also meant to get people to do more with Ryanair’s app, like order food and drinks, view real-time flight information, and receive notifications during delays.

[…]

Eliminating paper boarding passes may create numerous inconveniences. To start, not everyone wants Ryanair’s app on their personal device. And many future customers, especially those who don’t fly with Ryanair frequently or who don’t fly much at all, may be unaware of the change, creating confusion during travel, which can already be inherently stressful.

Also, there are places where Ryanair flies that don’t accept digital boarding passes, including some airports in Albania and Morocco.

[…]

People who are less technically savvy or who don’t have a smart device or whose device has died won’t be completely out of luck. Ryanair says it will accommodate people without access to a smartphone with “a free of charge boarding pass at the airport” if they’ve checked in online “before arriving at the airport.”

[…]

Source: Ryanair tries forcing app downloads by eliminating paper boarding passes – Ars Technica

And of course, because apps run under different regulations and restrictions than websites, Ryanair can collect “lifestyle” information, such as location, what other apps are running, and who knows what else. Apps are pretty scary stuff, which is why so many companies push them on you in lieu of their websites.

Mozilla fellow Esra’a Al Shafei watches the spies through SurveillanceWatch

Digital rights activist Esra’a Al Shafei found FinFisher spyware on her device more than a decade ago. Now she’s made it her mission to surveil the companies providing surveillanceware, their customers, and their funders.

“You cannot resist what you do not know, and the more you know, the better you can protect yourself and resist against the normalization of mass surveillance today,” she told The Register.

To this end, the Mozilla fellow founded Surveillance Watch last year. It’s an interactive map that documents the growing number of surveillance software providers, which regions use the various products, and the investors funding them. Since its launch, the project has grown from mapping connections between 220 spyware and surveillance entities to 695 today.

These include very well-known spy tech like NSO Group’s Pegasus and Cytrox’s Predator, both famously used to monitor politicians, journalists, and activists in the US, UK, and around the world.

They also include companies with US and UK government contracts, like Palantir, which recently inked a $10 billion deal with the US Army and pledged a £1.5 billion ($2 billion) investment in the UK after winning a new Ministry of Defence contract. Then there’s Paragon, an Israeli company with a $2 million Immigration and Customs Enforcement (ICE) contract for its Graphite spyware, which lets law enforcement hack smartphones to access content from encrypted messaging apps once the device is compromised.

Even LexisNexis made the list. “People think of LexisNexis and academia,” Al Shafei said. “They don’t immediately draw the connection to their product called Accurint, which collects data from both public and non-public sources and offers them for sale, primarily to government agencies and law enforcement.”

Accurint compiles information from government databases, utility bills, phone records, license plate tracking, and other sources, and it also integrates analytics tools to create detailed location mapping and pattern recognition.

“And they’re also an ICE contractor, so that’s another company that you wouldn’t typically associate with surveillance, but they are one of the biggest surveillance agencies out there,” Al Shafei said.

It also tracks funders. Paragon’s spyware is boosted by AE Industrial Partners, a Florida-based investment group specializing in “national security” portfolios. Other major backers of surveillance technologies include CIA-affiliated VC firm In-Q-Tel, Andreessen Horowitz (also known as a16z), and mega investment firm BlackRock.

This illustrates another trend: It’s not just authoritarian countries using and investing in these snooping tools. In fact, America now leads the world in surveillance investment, with the Atlantic Council think tank identifying 20 new US investors in the past year.

[…]

‘They know who you are’

The Surveillance Watch homepage announces: “They know who you are. It’s time to uncover who they are.”

It’s creepy and accurate, and portrays all of the feelings that Al Shafei has around her spyware encounters. Her Majal team has “faced persistent targeting by sophisticated spyware technologies, firsthand, for a very long time, and this direct exposure to surveillance threats really led us to launch Surveillance Watch,” she said. “We think it’s very important for people to understand exactly how they’re being surveilled, regardless of the why.”

The reality is, everybody – not just activists and politicians – is subject to surveillance, whether it’s from smart-city technologies, Ring doorbell cameras, or connected cars. Users will always choose simplicity over security, and the same can be said for data privacy.

“We want to show that when surveillance goes not just unnoticed, but when we start normalizing it in our everyday habits, we look at a new, shiny AI tool, and we say, ‘Yes, of course, take access to all my data,'” Al Shafei said. “There’s a convenience that comes with using all of these apps, tracking all these transactions, and people don’t realize that this data can and does get weaponized against you, and not just against you, but also your loved ones.”

Source: Mozilla fellow Esra’a Al Shafei watches the watchers • The Register

Critics call proposed changes to landmark EU privacy law ‘death by a thousand cuts’ – “legitimate interest” would allow personal data exfiltration

Privacy activists say proposed changes to Europe’s landmark privacy law, including making it easier for Big Tech to harvest Europeans’ personal data for AI training, would flout EU case law and gut the legislation.

The changes proposed by the European Commission are part of a drive to simplify a slew of laws adopted in recent years on technology, environmental and financial issues, which have in turn faced pushback from companies and the U.S. government.

EU tech chief Henna Virkkunen will present the Digital Omnibus, in effect proposals to cut red tape and overlapping legislation such as the General Data Protection Regulation, the Artificial Intelligence Act, the e-Privacy Directive and the Data Act, on November 19.

According to the plans, Google, Meta Platforms, OpenAI and other tech companies may be allowed to use Europeans’ personal data to train their AI models based on legitimate interest.

In addition, companies may be exempted from the ban on processing special categories of personal data “in order not to disproportionately hinder the development and operation of AI and taking into account the capabilities of the controller to identify and remove special categories of personal data”.

“The draft Digital Omnibus proposes countless changes to many different articles of the GDPR. In combination this amounts to a death by a thousand cuts,” Austrian privacy group noyb said in a statement.

Noyb is known for filing complaints against American companies such as Apple, Alphabet and Meta that have triggered several investigations and resulted in billions of dollars in fines.

“This would be a massive downgrading of Europeans’ privacy 10 years after the GDPR was adopted,” noyb’s Max Schrems said.

European Digital Rights, an association of civil and human rights organisations across Europe, slammed a proposal to merge the ePrivacy Directive – known as the cookie law that resulted in the proliferation of cookie consent pop-ups – into the GDPR.

“These proposals would change how the EU protects what happens inside your phone, computer and connected devices,” EDRi policy advisor Itxaso Dominguez de Olazabal wrote in a LinkedIn post.

“That means access to your device could rely on legitimate interest or broad exemptions like security, fraud detection or audience measurement,” she said.

The proposals would need to be thrashed out with EU countries and the European Parliament in the coming months before they can be implemented.

Source: Critics call proposed changes to landmark EU privacy law ‘death by a thousand cuts’ | Reuters

Anyone can claim anything as a “legitimate interest”. It is what terms and conditions have used for decades to pass any and all data on to third parties. At least the GDPR kind of stood in the way of it going to countries like the USA and China.

The FBI Is Trying to Unmask the Registrar Behind Archive.Today

The FBI is looking to ascertain the identity of the creator of a long-running archiving site that is used by millions of people all over the world.

Archive.Today is a popular archiving website—similar in many ways to the Internet Archive’s Wayback Machine—that keeps copies of news articles and government websites that users have submitted. The site can be used for skirting paywalls, but it is also useful for documenting government websites that may be subject to change. The big difference is that the Internet Archive is a transparent and legitimate non-profit that gives websites the option to opt out of having their content stored on its platform.

If you haven’t heard of Archive.Today, you may have run into mirror sites hosted at Archive.is or Archive.ph.

About a week ago, the X account belonging to Archive.Today posted a link to a federal subpoena dated October 30th. The subpoena, originally spotted by a German news site, is addressed to a Canadian web registration company called Tucows, and demands that the company turn over “customer or subscriber name, address of service, and billing address” as well as an extensive list of other information related to the “customer behind archive.today.”

404 Media notes that Archive.Today has hundreds of millions of webpages saved. The outlet further notes that “very little is known about the person or people who work on archive.today.” There is a modest FAQ page on the site, but it doesn’t offer anything in the way of identifying information about the creator of the site.

The subpoena states:

The information sought through this subpoena relates to a federal criminal investigation being conducted by the FBI. Your company is required to furnish this information. You are requested not to disclose the existence of this subpoena indefinitely as any such disclosure could interfere with an ongoing investigation and enforcement of the law.

Well, I guess that ship has sailed.

Source: The FBI Is Trying to Unmask the Registrar Behind Archive.Today

DHS wants more biometric data from more people – even from citizens

If you’re filing an immigration form – or helping someone who is – the Feds may soon want to look in your eyes, swab your cheek, and scan your face. The US Department of Homeland Security wants to greatly expand biometric data collection for immigration applications, covering immigrants and even some US citizens tied to those cases.

DHS, through its component agency US Citizenship and Immigration Services, on Monday proposed a sweeping expansion of the agency’s collection of biometric data. While ostensibly about verifying identities and preventing fraud in immigration benefit applications, the proposed rule goes much further than simply ensuring applicants are who they claim to be.

First off, the rule proposes expanding when DHS can collect biometric data from immigration benefit applicants, as “submission of biometrics is currently only mandatory for certain benefit requests and enforcement actions.” DHS wants to change that, including by requiring practically everyone an immigrant is associated with to submit their biometric data.

“DHS proposes in this rule that any applicant, petitioner, sponsor, supporter, derivative, dependent, beneficiary, or individual filing or associated with a benefit request or other request or collection of information, including U.S. citizens, U.S. nationals and lawful permanent residents, and without regard to age, must submit biometrics unless DHS otherwise exempts the requirement,” the rule proposal said.

DHS also wants to require the collection of biometric data from “any alien apprehended, arrested or encountered by DHS.”

It’s not explicitly stated in the rule proposal why US citizens associated with immigrants who are applying for benefits would have to have their biometric data collected. DHS didn’t answer questions to that end, though the rule stated that US citizens would also be required to submit biometric data “when they submit a family-based visa petition.”

Give me your voice, your eye print, your DNA samples

In addition to expanded collection, the proposed rule also changes the definition of what DHS considers to be valid biometric data.

“Government agencies have grouped together identifying features and actions, such as fingerprints, photographs, and signatures under the broad term, biometrics,” the proposal states. “DHS proposes to define the term ‘biometrics’ to mean ‘measurable biological (anatomical, physiological or molecular structure) or behavioral characteristics of an individual,'” thus giving DHS broad leeway to begin collecting new types of biometric data as new technologies are developed.

The proposal mentions several new biometric technologies DHS wants the option to use, including ocular imagery, voice prints and DNA, all on the table per the new rule.

[…]

Source: DHS wants more biometric data – even from citizens • The Register

Music festivals to collect data with RFID wristbands. Also, randomly, fascinating information about data Flitsmeister collects.

This summer, Dutch music festivals will use RFID wristbands to collect visitor data. The technology has been around for a while, but the innovation lies in its application. The wristbands are anonymous by default, but users can activate them to participate in loyalty programs or unlock on-site experiences. Visitor privacy is paramount, and overly invasive tracking is avoided.
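The “anonymous by default” model described here boils down to a simple lookup: a wristband’s RFID UID is just an opaque token, and only an explicit activation step links it to a profile. A toy sketch of that idea (class and field names are invented for illustration, not Superstruct’s actual system):

```python
# Toy model of anonymous-by-default RFID wristbands: a UID maps to no
# personal data until the visitor explicitly activates the band.
class WristbandRegistry:
    def __init__(self):
        self._profiles = {}  # uid -> profile dict, only for activated bands

    def activate(self, uid: str, email: str, loyalty_opt_in: bool = False):
        """Visitor links their band to a profile (explicit opt-in)."""
        self._profiles[uid] = {"email": email, "loyalty": loyalty_opt_in}

    def lookup(self, uid: str):
        """Scanning an unactivated band reveals nothing about the wearer."""
        return self._profiles.get(uid)  # None for anonymous bands

registry = WristbandRegistry()
print(registry.lookup("04:A2:9F"))  # anonymous band -> None
registry.activate("04:A2:9F", "fan@example.com", loyalty_opt_in=True)
print(registry.lookup("04:A2:9F")["loyalty"])  # True after opt-in
```

The design choice worth noting is that the scanner side never needs identity data: an unactivated band still works for access control, and opting in is a one-way step the visitor takes, not the operator.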

This is according to Michael Guntenaar, Managing Director at Superstruct Digital Services, in the Emerce TV video ‘Data is the new headliner at dance festivals’. Superstruct is a network of approximately 80 large festivals (focused on experience and brand identity) spread across Europe and Australia. ID&T, known for events such as Sensation, Mysteryland, and Defqon.1, joined Superstruct in September 2021. Tula Daans, Data Analyst Brand Partnerships at ID&T, also took part in the video on behalf of ID&T.

Festivals use various data sources, primarily ticket data (age, location, gender/gender identity), but also marketing data (social media), consumption data (food and drinks), and post-event surveys.

For brand partnerships, surveys are sent to visitors after the event to gauge whether they saw brands, what they thought of them, and thus gain insight into brand perception. Deliberately, no detailed feedback is requested during the festival to avoid disturbing the visitor experience, says Guntenaar.

The Netherlands is a global leader in data collection. Defqon.1 is mentioned as a breeding ground for experiments with data and technology, due to its technically advanced team and highly engaged target group.

[…]

In a second video, ‘Real-time mobility info in a complex data landscape’, Jorn de Vries, managing director at Flitsmeister, talks about mobility data and the challenges and opportunities within this market. The market for mobility data, which ranges from traffic flows to speed camera notifications, is crowded, with players like Garmin, Google, Waze, and TomTom.

Nevertheless, Flitsmeister still sees room for growth, because mobility is timeless and brings enduring challenges, such as the desire to get from A to B quickly, efficiently, sustainably, and cheaply. Innovation is essential to maintain a place in this market, says De Vries.

Flitsmeister has a large online community of almost 3 million monthly active users. This community has grown significantly over the years, even after introducing paid propositions. What distinguishes Flitsmeister from global players such as Google and Waze, according to De Vries, is their local embeddedness, with marketing and content that aligns with the language and use cases of users in the Benelux. They also collaborate with governments through partnerships, allowing them to offer specific local services, such as warnings for emergency services. Technically, competitors might be able to do this, says De Vries, but it probably isn’t a high priority because it’s local; Flitsmeister, however, believes that you have to dare to go all the way to properly serve a market, even if this requires investments that are only relevant for the Netherlands. Another example of local embeddedness is their presence on almost every radio station.

The Flitsmeister app now has eight main uses. In addition to the well-known warnings for speed cameras and average-speed checks (trajectcontrole), it alerts drivers early when an emergency vehicle (ambulance, fire brigade, or Rijkswaterstaat vehicle) approaches with its blue lights on. The app also provides traffic jam information and warnings for incidents, stationary vehicles, and roadworks. Flitsmeister tries to warn of the start of a traffic jam earlier than the flashing signs above the road can, because it is not bound to the gantries where those signs are mounted.
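Detecting jam onset from floating-car data rather than fixed gantries can be as simple as watching per-segment probe speeds fall below a fraction of the free-flow speed. A minimal sketch of that idea (the window size, threshold ratio, and data shapes are illustrative assumptions, not Flitsmeister’s actual logic):

```python
# Flag jam onset on a road segment when the rolling mean of recent GPS
# probe speeds drops below a fraction of the free-flow speed.
from collections import deque

def make_jam_detector(free_flow_kmh: float, window: int = 5, ratio: float = 0.5):
    speeds = deque(maxlen=window)  # keeps only the last `window` samples

    def observe(probe_speed_kmh: float) -> bool:
        """Feed one probe speed; return True once jam onset is detected."""
        speeds.append(probe_speed_kmh)
        if len(speeds) < window:
            return False  # not enough samples yet
        avg = sum(speeds) / len(speeds)
        return avg < ratio * free_flow_kmh

    return observe

detect = make_jam_detector(free_flow_kmh=100)
readings = [95, 92, 88, 40, 35, 30, 28]  # speeds collapse mid-series
flags = [detect(v) for v in readings]
print(flags)  # last reading tips the rolling average below 50 km/h
```

Because probes report continuously along the road, such a detector can fire wherever the slowdown starts, not only at the induction loops feeding the matrix signs.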

Navigation is an added feature, and paid parking can be handled at the end of a journey. Flitsmeister also has links with so-called smart traffic lights: it receives data about the status of a light and shares data with the intersection to optimize it. This can, for example, lead to a green light if you approach an intersection at night and there is no other traffic. More than 1,500 smart intersections in the Netherlands are already equipped. Flitsmeister also receives data from matrix signs, including red crosses, arrows, and adjusted maximum speeds.
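The smart-intersection exchange described here, where an approaching app announces itself and the light can go green when no conflicting traffic is present, can be sketched as a toy controller. This is a deliberate simplification of what real connected intersections do, and the interface is invented for illustration:

```python
# Toy smart-intersection controller: an approaching vehicle announces
# itself; the light grants green if no conflicting approach has traffic.
from dataclasses import dataclass, field

@dataclass
class SmartIntersection:
    waiting: dict = field(default_factory=dict)  # approach -> vehicle count

    def announce(self, approach: str):
        """App shares that a vehicle is approaching from a given direction."""
        self.waiting[approach] = self.waiting.get(approach, 0) + 1

    def request_green(self, approach: str) -> bool:
        """Grant green only if no *other* approach has waiting traffic."""
        others = sum(n for a, n in self.waiting.items() if a != approach)
        return others == 0

light = SmartIntersection()
light.announce("north")
print(light.request_green("north"))  # lone night-time vehicle -> True
light.announce("east")
print(light.request_green("north"))  # conflicting traffic now -> False
```

Real deployments exchange richer signal-phase and position messages in both directions, but the core idea is the same: the intersection’s decision improves because it knows about approaching vehicles before they reach the stop line.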

Privacy is a crucial topic when bringing consumers and data together. Flitsmeister has seen privacy from the start as a unique selling point, if handled correctly. In countries like Germany especially, privacy awareness is stronger than in the Benelux, and privacy-friendly companies earn a plus in the eyes of the consumer. Large players such as Google and Waze operate on the same legal playing field as Flitsmeister, but differ in what they want, can, and do.

Flitsmeister does collect live GPS data, which provides a lot of insight into traffic movements. It is working with Rijkswaterstaat and its parent company Be-Mobile on pilots, including on the A9, combining data from induction loops in the asphalt with its own real-time data. Road loops are expensive to maintain and only measure at fixed points; the combination lets Flitsmeister provide relevant information even between the loops, leading to more accurate and cost-efficient traffic information.
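Combining fixed loop measurements with floating-car GPS speeds is, at its simplest, a weighted-average fusion problem: each source contributes to a segment’s speed estimate in proportion to how many vehicles it observed. A minimal sketch under invented data shapes (not the actual A9 pilot pipeline):

```python
# Fuse a loop-detector speed estimate with GPS probe speeds for one road
# segment, weighting each source by its number of observations.
def fuse_speed(loop_kmh: float, loop_vehicle_count: int, probe_speeds_kmh: list):
    if loop_vehicle_count == 0 and not probe_speeds_kmh:
        return None  # no data at all for this segment
    total = loop_kmh * loop_vehicle_count + sum(probe_speeds_kmh)
    return total / (loop_vehicle_count + len(probe_speeds_kmh))

# A loop saw 8 vehicles at ~90 km/h; two app users driving *between* the
# loops report 60 and 70 km/h, pulling the estimate down:
fused = fuse_speed(90, 8, [60, 70])
print(round(fused, 1))  # 85.0
```

The value of the probes is spatial: the loop only speaks for its own location, while the GPS samples reveal what is happening in the stretches the loops cannot see.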

Flitsmeister also works with data that detects real-time situations and provides early advice. They are doing pilots with ‘trigger based rerouting’, where users are proactively rerouted if a reported incident on their route is likely to affect their travel time, even if the travel time has not yet changed at that moment. The challenge here is that people must be receptive to this and understand the rationale behind the rerouting.
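Trigger-based rerouting, as described, means acting on a reported incident before measured travel times actually degrade: compare the *predicted* delay against the cost of the detour. A hedged sketch (the function signature and the flat delay model are assumptions for illustration):

```python
# Decide whether to proactively reroute: an incident on the route triggers
# a reroute if its predicted delay exceeds the detour cost, even though
# the currently measured travel time is still unchanged.
def should_reroute(incident_on_route: bool,
                   predicted_delay_min: float,
                   detour_cost_min: float) -> bool:
    if not incident_on_route:
        return False  # nothing reported ahead; stay on route
    return predicted_delay_min > detour_cost_min

# Accident just reported ahead; live data still shows free flow, but the
# incident is predicted to cost 12 minutes versus a 4-minute detour:
print(should_reroute(True, predicted_delay_min=12, detour_cost_min=4))   # True
print(should_reroute(True, predicted_delay_min=3, detour_cost_min=4))    # False
```

This also illustrates the adoption challenge the article mentions: the app is asking drivers to accept a detour while the road ahead still *looks* clear, so the rationale has to be communicated, not just the new route.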

Although there is a lot of talk about connected vehicle data, Flitsmeister’s focus is more on strengthening the relationship with the driver than with the vehicle itself. Jorn de Vries believes that the driver will ultimately lead, as the need for mobility comes from the individual and the vehicle facilitates this.

The video ‘Data is the new headliner at dance festivals’ can be watched for free. The collection ‘Customer data: trends, innovation and future’ will be supplemented in the coming months and can be viewed for free after registration.

Source: Kagi Translate |(Emerce TV): music festivals want to collect data with RFID wristbands

Clearview AI faces criminal heat for ignoring EU data fines – wait: these creeps still exist?

Privacy advocates at Noyb filed a criminal complaint against Clearview AI for scraping social media users’ faces without consent to train its AI algorithms.

Austria-based Noyb (None of Your Business) is targeting the US company and its executives, arguing that if successful, individuals who authorized the data collection could face criminal penalties, including imprisonment.

The complaint focuses largely on Clearview’s apparent disregard for fines from France, Greece, Italy, the Netherlands, and the UK. Aside from the UK — where Clearview recently lost its appeal of a $10 million fine from the Information Commissioner’s Office — the company has yet to pay other fines totaling more than $100 million, Noyb claims.

“EU data protection authorities did not come up with a way to enforce its fines and bans against the US company, allowing Clearview AI to effectively dodge the law,” said Noyb in its announcement today.

Max Schrems, privacy lawyer and founder of Noyb, said: “Clearview AI seems to simply ignore EU fundamental rights and just spits in the face of EU authorities.”

The criminal complaint, filed with Austrian public prosecutors, hinges on Article 84 of the GDPR, which allows EU member states to seek proportionate punishments for data protection violations, including through criminal proceedings.

Clearview AI claims it has collected more than 60 billion images to help law enforcement agencies improve facial recognition tech.

Scraping data is not inherently illegal; however, Clearview’s sweeping collection of social media photos for commercial gain has repeatedly violated the GDPR across Europe.

Austria ruled the company’s practices illegal in 2023, though it imposed no fine.

Noyb is using a provision in Austria’s own implementation of the GDPR that allows criminal proceedings to be brought against managers of organizations that flout data protection laws.

“We even run cross-border criminal procedures for stolen bikes, so we hope that the public prosecutor also takes action when the personal data of billions of people was stolen – as has been confirmed by multiple authorities,” said Schrems.

Source: Clearview AI faces criminal heat for ignoring EU data fines • The Register