How Cops Are Using Flock’s License Plate Camera Network To Surveil Protesters And Activists

It’s no secret that 2025 has given Americans plenty to protest about. But as news cameras showed protesters filling streets of cities across the country, law enforcement officers—including U.S. Border Patrol agents—were quietly watching those same streets through different lenses: Flock Safety automated license plate readers (ALPRs) that tracked every passing car.

Through an analysis of 10 months of nationwide searches on Flock Safety’s servers, we discovered that more than 50 federal, state, and local agencies ran hundreds of searches through Flock’s national network of surveillance data in connection with protest activity. In some cases, law enforcement specifically targeted known activist groups, demonstrating how mass surveillance technology increasingly threatens our freedom to demonstrate.

Flock Safety provides ALPR technology to thousands of law enforcement agencies. The company installs cameras throughout those agencies’ jurisdictions, and the cameras photograph every car that passes, documenting the license plate, color, make, model, and other distinguishing characteristics. This data is paired with a time and location and uploaded to a massive searchable database. Flock Safety encourages agencies to share the data they collect broadly with other agencies across the country, and it is common for an agency to search thousands of networks nationwide even when it has no reason to believe a targeted vehicle ever left the region.
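To make the shape of that data concrete, here is a minimal sketch of what a single ALPR detection record plausibly contains, based only on the fields described above. The field names are illustrative assumptions, not Flock Safety’s actual schema:

```python
# Hypothetical ALPR detection record; field names are assumptions
# based on the article, not Flock Safety's real schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AlprDetection:
    plate: str          # e.g. "ABC1234"
    state: str          # registration state on the plate
    make: str           # vehicle make
    model: str          # vehicle model
    color: str          # vehicle color
    camera_id: str      # which fixed camera captured the vehicle
    lat: float          # camera latitude
    lon: float          # camera longitude
    seen_at: datetime   # timestamp of the capture
```

Every passing car becomes one such row in a searchable database, which is what makes retroactive queries like “which vehicles drove past the protest” possible at all.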

Via public records requests, EFF obtained datasets representing more than 12 million searches logged by more than 3,900 agencies between December 2024 and October 2025. The data shows that agencies logged hundreds of searches related to the 50501 protests in February, the Hands Off protests in April, the No Kings protests in June and October, and other protests in between.
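As a rough illustration of the kind of analysis described here, a sketch of scanning an exported audit log for protest-related search reasons might look like the following. The column names (“agency”, “reason”) and the CSV format are assumptions for illustration; the article does not describe the actual export format:

```python
# Illustrative only: filter a hypothetical ALPR audit-log CSV for
# searches whose "reason" field mentions protest-related terms.
import csv

PROTEST_TERMS = ("protest", "no kings", "hands off", "50501")

def protest_searches(path: str):
    """Yield (agency, reason) pairs whose reason matches a protest term."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            reason = row.get("reason", "").lower()
            if any(term in reason for term in PROTEST_TERMS):
                yield row.get("agency", ""), reason

# for agency, reason in protest_searches("flock_audit_log.csv"):
#     print(agency, "->", reason)
```

Note that, as the article stresses below, a filter like this only catches searches that are explicitly labeled; anything logged under a vaguer reason slips through.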

[…]

While EFF and other civil liberties groups argue the law should require a search warrant for such searches, police are simply prompted to enter text into a “reason” field in the Flock Safety system. Usually this is only a few words—or even just one.

In these cases, that word was often just “protest.”

Crime does sometimes occur at protests, whether that’s property damage, pick-pocketing, or clashes between groups on opposite sides of a protest. Some of these searches may have been tied to an actual crime that occurred, even though in most cases officers did not articulate a criminal offense when running the search. But the truth is, the only reason an officer is able to even search for a suspect at a protest is because ALPRs collected data on every single person who attended the protest.

[…]

In a few cases, police were using Flock’s ALPR network to investigate threats made against attendees or incidents where motorists opposed to the protests drove their vehicle into crowds. For example, throughout June 2025, an Arizona Department of Public Safety officer logged three searches for “no kings rock threat,” and a Wichita (Kan.) Police Department officer logged 22 searches for various license plates under the reason “Crime Stoppers Tip of causing harm during protests.”

Even when law enforcement is specifically looking for vehicles engaged in potentially criminal behavior such as threatening protesters, it cannot be ignored that mass surveillance systems work by collecting data on everyone driving to or near a protest—not just those under suspicion.

Border Patrol’s Expanding Reach

As U.S. Border Patrol (USBP), ICE, and other federal agencies tasked with immigration enforcement have massively expanded operations into major cities, advocates for immigrants have responded through organized rallies, rapid-response confrontations, and extended presences at federal facilities.

USBP has made extensive use of Flock Safety’s system for immigration enforcement, but also to target those who object to its tactics. In June, a few days after the No Kings Protest, USBP ran three searches for a vehicle using the descriptor “Portland Riots.”

[…]

Fighting Back Against ALPR

ALPR systems are designed to capture information on every vehicle that passes within view. That means they don’t just capture data on “criminals” but on everyone, all the time—and that includes people engaged in their First Amendment right to publicly dissent. Police are sitting on massive troves of data that can reveal who attended a protest, and this data shows they are not afraid to use it.

Our analysis only includes data where agencies explicitly mentioned protests or related terms in the “reason” field when documenting their search. It’s likely that scores more were conducted under less obvious pretexts and search reasons. According to our analysis, approximately 20 percent of all searches we reviewed listed vague language like “investigation,” “suspect,” and “query” in the reason field. Those terms could well be cover for spying on a protest, an abortion prosecution, or an officer stalking a spouse, and no one would be the wiser—including the agencies whose data was searched. Flock has said it will now require officers to select a specific crime under investigation, but that can and will also be used to obfuscate dubious searches.

For protesters, this data should serve as confirmation that ALPR surveillance has been and will be used to target activities protected by the First Amendment. Depending on your threat model, this means you should think carefully about how you arrive at protests, and explore options such as biking, walking, carpooling, taking public transportation, or simply parking a little farther from the action. Our Surveillance Self-Defense project has more information on steps you can take to protect your privacy when traveling to and attending a protest.

[…]

Everyone should have the right to speak up against injustice without ending up in a database.

Source: How Cops Are Using Flock Safety’s ALPR Network To Surveil Protesters And Activists | Techdirt

Australia’s Social Media Ban Goes Into Effect

The BBC has a live page following the ban and, surprise surprise, it didn’t take long for people to circumvent it: alternative social media platforms are being adopted (e.g. Lemon8, Yope), VPNs are in heavy use (with ministers threatening to go after VPNs next), and campaigners are pleading with parents not to help their kids get around the rules.

That the ban won’t work is predictable. It will force kids into hiding, where they will be beyond the oversight of absolutely anyone. Worse, it will leave them with no help when things do go wrong: who is going to complain to their parents or the police about cyberbullying that happens on a platform they are using illegally?

The age limit of 16 is entirely arbitrary too. Some kids develop faster than others and some very much slower. With research suggesting that adulthood doesn’t really start until around 32 (and, judging by how far right politics and belief in populist nonsense are going globally, in many cases seemingly never), mature children are being punished while immature young adults are exposed to content they are not equipped to handle.

The goal of stopping toxic, unwanted behaviours on social media platforms is a good one. By now we should be able to define these unwanted behaviours (e.g. no false news; no body shaming; no targeted abuse; no political preferences in feeds; and who really needs video calls with groups of more than 6 people on a social media platform anyway?) and test for them. Throwing a random age line at the problem doesn’t solve it. How about levying a huge fine for every instance of one of these behaviours, say $1 million or more? The scale of social media profits beggars belief, so only fines that size will tip the cost/benefit calculation from quietly paying the fine towards actually fixing the problem – something these behemoths cannot ignore. And if too many transgressions are detected in a certain period (say 100 fines per week), the platform is closed entirely for a set period (weeks or months). This would incentivise the platforms to fix the problems, which is what we want, instead of driving kids into hiding and exposing them to a much more dangerous social media landscape.

The year age verification laws came for the open internet

When the nonprofit Freedom House recently published its annual report, it noted that 2025 marked the 15th straight year of decline for global internet freedom. The biggest decline, after Georgia and Germany, came in the United States.

Among the culprits cited in the report: age verification laws, dozens of which have come into effect over the last year. “Online anonymity, an essential enabler for freedom of expression, is entering a period of crisis as policymakers in free and autocratic countries alike mandate the use of identity verification technology for certain websites or platforms, motivated in some cases by the legitimate aim of protecting children,” the report warns.

Age verification laws are, in some ways, part of a years-long reckoning over child safety online, as tech companies have shown themselves unable to prevent serious harms to their most vulnerable users. Lawmakers, who have failed to pass data privacy regulations, Section 230 reform or any other meaningful legislation that would thoughtfully reimagine what responsibilities tech companies owe their users, have instead turned to the blunt tool of age-based restrictions — and with much greater success.

Over the last two years, 25 states have passed laws requiring some kind of age verification to access adult content online. This year, the Supreme Court delivered a major victory to backers of age verification standards when it upheld a Texas law requiring sites hosting adult content to check the ages of their users.

Age checks have also expanded to social media and online platforms more broadly. Sixteen states now have laws requiring parental controls or other age-based restrictions for social media services. (Six of these measures are currently in limbo due to court challenges.) A federal bill to ban kids younger than 13 from social media has gained bipartisan support in Congress. Utah, Texas and Louisiana passed laws requiring app stores to check the ages of their users, all of which are set to go into effect next year. California plans to enact age-based rules for app stores in 2027.

These laws have started to fragment the internet. Smaller platforms and websites that don’t have the resources to pay for third-party verification services may have no choice but to exit markets where age checks are required. Blogging service Dreamwidth pulled out of Mississippi after its age verification laws went into effect, saying that the $10,000 per user fines it could face were an “existential threat” to the company. Bluesky also opted to go dark in Mississippi rather than comply. (The service has complied with age verification laws in South Dakota and Wyoming, as well as the UK.) Pornhub, which has called existing age verification laws “haphazard and dangerous,” has blocked access in 23 states.

Pornhub is not an outlier in its assessment. Privacy advocates have long warned that age verification laws put everyone’s privacy at risk. Practically, there’s no way to limit age verification standards only to minors. Confirming the ages of everyone under 18 means you have to confirm the ages of everyone. In practice, this often means submitting a government-issued ID or allowing an app to scan your face. Both are problematic and we don’t need to look far to see how these methods can go wrong.

Discord recently revealed that around 70,000 users “may” have had their government IDs leaked due to an “incident” involving a third-party vendor the company contracts with to provide customer service related to age verification. Last year, another third-party identity provider that had worked with TikTok, Uber and other services exposed drivers’ licenses. As a growing number of platforms require us to hand over an ID, these kinds of incidents will likely become even more common.

Similar risks exist for face scans. Because most minors don’t have official IDs, platforms often rely on AI-based tools that can guess users’ ages. A face scan may seem more private than handing over a social security number, but we could be turning over far more information than we realize, according to experts at the Electronic Frontier Foundation (EFF).

“When we submit to a face scan to estimate our age, a less scrupulous company could flip a switch and use the same face scan, plus a slightly different algorithm, to guess our name or other demographics,” the organization notes. “A poorly designed system might store this personal data, and even correlate it to the online content that we look at. In the hands of an adversary, and cross-referenced to other readily available information, this information can expose intimate details about us.”

These issues aren’t limited to the United States. Australia, Denmark and Malaysia have taken steps to ban younger teens from social media entirely. Officials in France are pushing for a similar ban, as well as a “curfew” for older teens. These measures would also necessitate some form of age verification in order to block the intended users. In the UK, where the Online Safety Act went into effect earlier this year, we’ve already seen how well-intentioned efforts to protect teens from supposedly harmful content can end up making large swaths of the internet more difficult to access.

The law is ostensibly meant to “prevent young people from encountering harmful content relating to suicide, self-harm, eating disorders and pornography,” according to the BBC. But the law has also resulted in age checks that reach far beyond porn sites. Age verification is required to access music on Spotify. It will soon be required for Xbox accounts. On X, videos of protests have been blocked. Redditors have reported being blocked from a long list of subreddits that are marked NSFW but don’t actually host porn, including those related to menstruation, news and addiction recovery. Wikipedia, which recently lost a challenge to be excluded from the law’s strictest requirements, is facing the prospect of being forced to verify the ages of its UK contributors, which the organization has said could have disastrous consequences.

The UK law has also shown how ineffective existing age verification methods are. Users have been able to circumvent the checks by using selfies of video game characters, AI-generated images of ID documents and, of course, Virtual Private Networks (VPNs).

As the EFF notes, VPNs are incredibly widely used. The software allows people to browse the internet while masking their actual location. They’re used by activists and students and people who want to get around geoblocks built into streaming services. Many universities and businesses (including Engadget parent company Yahoo) require their students and workers to use VPNs in order to access certain information. Blocking VPNs would have serious repercussions for all of these groups.

The makers of several popular VPN services reported major spikes in UK sign-ups after the Online Safety Act went into effect this summer, with ProtonVPN reporting a 1,400 percent surge. That’s also led to fears of a renewed crackdown on VPNs. Ofcom, the regulator tasked with enforcing the law, told TechRadar it was “monitoring” VPN usage, which has further fueled speculation it could try to ban or restrict their use. And here in the States, lawmakers in Wisconsin have proposed an age verification law that would require sites that host “harmful” content to also block VPNs.

While restrictions on VPNs are, for now, mostly theoretical, the fact that such measures are even being considered is alarming. Up to now, VPN bans have been more closely associated with authoritarian countries without an open internet, like Russia and China. If we continue down a path of trying to put age gates up around every piece of potentially objectionable content, the internet could get a lot worse for everyone.

Source: The year age verification laws came for the open internet

New EU Jolla Phone Now Available for Pre-Order as an Independent No Spyware Linux Phone

Jolla kicked off a campaign for a new Jolla Phone, which they call the independent European Do It Together (DIT) Linux phone, shaped by the people who use it.

“The Jolla Phone is not based on Big Tech technology. It is governed by European privacy thinking and a community-led model.”

The new Jolla Phone is powered by a high-performing MediaTek 5G SoC and features 12GB of RAM, 256GB of storage (expandable by up to 2TB with a microSDXC card), a 6.36-inch FullHD AMOLED display with ~390ppi, a 20:9 aspect ratio, and Gorilla Glass, plus a user-replaceable 5,500mAh battery.

The Linux phone also features 4G/5G support with dual nano-SIM and a global roaming modem configuration, Wi-Fi 6, Bluetooth 5.4, NFC, 50MP wide and 13MP ultrawide main cameras, a front-facing wide-lens selfie camera, a fingerprint reader on the power key, a user-changeable back cover, and an RGB indicator LED.

On top of that, the new Jolla Phone promises a user-configurable physical Privacy Switch that lets you turn off the microphone, Bluetooth, Android apps, or whatever you wish.

The device will be available in three colors: Snow White, Kaamos Black, and The Orange. All the specs of the new Jolla Phone were voted on by Sailfish OS community members over the past few months.

Honouring the original Jolla Phone form factor and design, the new model ships with Sailfish OS (with support for Android apps), a Linux-based European alternative to the dominant mobile operating systems that promises a minimum of 5 years of support, no tracking, no calling home, and no hidden analytics.

“Mainstream phones send vast amounts of background data. A common Android phone sends megabytes of data per day to Google even if the device is not used at all. Sailfish OS stays silent unless you explicitly allow connections,” said Jolla.

The new Jolla Phone is now available for pre-order for 99 EUR and will only be produced if at least 2,000 pre-orders are placed within one month, by January 4th, 2026. The full price of the Linux phone will be 499 EUR (incl. local VAT); the 99 EUR pre-order payment is fully refundable and will be deducted from the full price.

The device will be manufactured and sold in Europe, but Jolla says it will design the cellular band configuration to enable global travel as much as possible, including roaming on U.S. carrier networks. The initial sales markets are the EU, the UK, Switzerland, and Norway.

Source: New Jolla Phone Now Available for Pre-Order as an Independent Linux Phone – 9to5Linux

New hotness in democracy: if the people say no to mass surveillance, do it again right after you have said you won’t do it. Not EU this time: it’s India

You know what they say: If at first you don’t succeed at mass government surveillance, try, try again. Only two days after India backpedaled on its plan to force smartphone makers to preinstall a state-run “cybersecurity” app, Reuters reports that the country is back at it. It’s said to be considering a telecom industry proposal with another draconian requirement. This one would require smartphone makers to enable always-on satellite-based location tracking (Assisted GPS).

The measure would require location services to remain on at all times, with no option to switch them off. The telecom industry also wants phone makers to disable notifications that alert users when their carriers have accessed their location.

[…]

Source: India is reportedly considering another draconian smartphone surveillance plan

Looks like India took a page out of the Danish playbook: Chat Control and turning the EU into a cross between 1984 and Brave New World

Subaru Owners Are Ticked About In-Car Pop-Up Ads for SiriusXM

I’ve written about Stellantis brands doing this twice already in 2025, and this time, it’s Subaru sending pop-up ads for SiriusXM to owners’ infotainment screens.

The Autopian ran a story on the egregious push notifications on Monday, and it only took a short search to find more examples. It happened right around Thanksgiving, as the promotion urged drivers to “Enjoy SiriusXM FREE thru 12/1.” That day has come and gone, but not before it angered droves of Subaru owners.

“I have got this Sirius XM ad a few times over the last couple of years,” the caption on the embedded Reddit thread reads. “This last time was the final straw as I almost wrecked because of it. My entire infotainment screen changed which caused me to take my eyes off the road and since I was going 55mph in winter I swerved a bit and slid and almost went off into a ditch. Something that would not have happened had this ad not popped up.

[…]

At least one 2024 Crosstrek owner reported that the pop-up took over their screen even though they were using Apple CarPlay. To force-close an application that’s in use, solely for the sake of in-car advertising, is especially egregious.

[…]

Reddit posts dating back as far as 2023 show owners complaining about in-car notifications.

[…]

 

Source: Subaru Owners Are Ticked About In-Car Pop-Up Ads for SiriusXM

India demands smartphone makers install government app

India’s government has issued a directive that requires all smartphone manufacturers to install a government app on every handset in the country and has given them 90 days to get the job done – and to ensure users can’t remove the code.

The app is called “Sanchar Saathi” and is a product of India’s Department of Telecommunications (DoT).

On Google Play and Apple’s App Store, the Department describes the app as “a citizen centric initiative … to empower mobile subscribers, strengthen their security and increase awareness about citizen centric initiatives.”

The app does those jobs by allowing users to report incoming calls or messages – even on WhatsApp – they suspect are attempts at fraud. Users can also report incoming calls for which caller ID reveals the +91 country code, as India’s government thinks that’s an indicator of a possible illegal telecoms operator.

Users can also block their device if they lose it or suspect it was stolen, an act that will prevent it from working on any mobile network in India.

Another function allows lookup of IMEI numbers so users can verify if their handset is genuine.
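The app’s lookup presumably works against the DoT’s central IMEI database, but one well-known property of IMEIs can be checked locally: the 15th digit is a Luhn check digit. A minimal sketch of that format check (validation of the number only, not a genuineness check against any registry):

```python
# Validate an IMEI's check digit with the Luhn algorithm. This only
# confirms the number is well formed; a real verification (as in the
# Sanchar Saathi app) would also query the official IMEI database.
def imei_checksum_ok(imei: str) -> bool:
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    for i, ch in enumerate(imei):
        d = int(ch)
        if i % 2 == 1:      # double every second digit, left to right
            d *= 2
            if d > 9:
                d -= 9      # same as summing the two digits
        total += d
    return total % 10 == 0

print(imei_checksum_ok("490154203237518"))  # True: a commonly cited test IMEI
```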

Spam and scams delivered by calls or TXTs are pervasive around the world, and researchers last year found that most Indian netizens receive three or more dodgy communiqués every day. This app has obvious potential to help reduce such attacks.

An announcement from India’s government states that cybersecurity at telcos is another reason for the requirement to install the app.

“Spoofed/ Tampered IMEIs in telecom network leads to situation where same IMEI is working in different devices at different places simultaneously and pose challenges in action against such IMEIs,” according to the announcement. “India has [a] big second-hand mobile device market. Cases have also been observed where stolen or blacklisted devices are being re-sold. It makes the purchaser abettor in crime and causes financial loss to them. The blocked/blacklisted IMEIs can be checked using Sanchar Saathi App.”

That motive is likely the reason India has required handset-makers to install Sanchar Saathi on existing handsets with a software update.

The directive also requires the app to be pre-installed, “visible, functional, and enabled for users at first setup.” Manufacturers may not disable or restrict its features and “must ensure the App is easily accessible during device setup.”

Those functions mean India’s government will soon have a means of accessing personal info on hundreds of millions of devices.

Apar Gupta, founder and director of India’s Internet Freedom Foundation, has criticized the directive on grounds that Sanchar Saathi isn’t fit for purpose. “Rather than resorting to coercion and mandating it to be installed the focus should be on improving it,” he wrote.

[…]

Source: India demands smartphone makers install government app • The Register

Cowed BBC Censors Lecture Calling Trump ‘Most Openly Corrupt President’

The BBC is now voluntarily suppressing criticism of Donald Trump before it airs—and the reason is obvious: Trump threatened to sue them into oblivion, and they blinked.

Historian Rutger Bregman revealed this week that the BBC commissioned a public lecture from him last month, recorded it, then quietly cut a single sentence before broadcast. The deleted line? Calling Trump “the most openly corrupt president in American history.” Bregman posted about the capitulation, noting that the decision came from “the highest levels” of the BBC—meaning the executives dealing with Trump’s threats.

Well, at least we should call out Donald Trump as the most openly censorial president in American history.

This is the payoff from Trump’s censorship campaign against the BBC. Weeks ago, Trump threatened to sue the BBC for a billion dollars over an edit in a program it aired a year ago. The BBC apologized and fired employees associated with the project. That wasn’t enough. Trump’s FCC censorship lackey Brendan Carr launched a bullshit investigation anyway. And now the BBC is preemptively editing out true statements that might anger the thin-skinned man baby President.

Bregman posted the exact line that got cut. Here’s the full paragraph, with the censored sentence in bold:

On one side we had an establishment propping up an elderly man in obvious mental decline. On the other we had a convicted reality star who now rules as the most openly corrupt president in American history. When it comes to staffing his administration, he is a modern day Caligula, the Roman emperor who wanted to make his horse a consul. He surrounds himself with loyalists, grifters, and sycophants.

Gosh, for what reason would the BBC cut that one particular line?

The BBC admitted to this in the most mealy-mouthed way when asked by the New Republic to comment on the situation:

Asked for comment on Bregman’s charge, a spokesperson for the BBC emailed me this: “All of our programmes are required to comply with the BBC’s editorial guidelines, and we made the decision to remove one sentence from the lecture on legal advice.”

“On legal advice.” Translation: Trump’s SLAPP suit threats worked exactly as intended.

Greg Sargent, writing in the New Republic, nails why this matters:

There is something deeply perverse in this outcome. Even if you grant Trump’s criticism of the edit of his January 6 speech—never mind that as the violence raged, Trump essentially sat on his hands for hours and arguably directed the mob to target his vice president—the answer to this can’t be to let Trump bully truth-telling into self-censoring silence.

That’s plainly what happened here.

Exactly. The BBC’s initial capitulation—the apology, the firings, the groveling—was bad enough. But this is worse. This is pre-censorship. The BBC is now editing out true statements about Trump before they air, purely because they’re afraid of how he might react. That’s not “legal advice.” That’s cowardice institutionalized as policy.

Once again, I remind you that Trump’s supporters have, for years, insisted that he was “the free speech president” and have talked about academic freedom and the right to state uncomfortable ideas.

[…]

Source: BBC Pre-Edits Lecture Calling Trump ‘Most Openly Corrupt President’ | Techdirt

Nexperia accused by parent Wingtech and Chinese unit of plotting to move supply chain

BEIJING/AMSTERDAM, Nov 28 (Reuters) – Wingtech (600745.SS), the Chinese parent company of Netherlands-based Nexperia, accused its Dutch unit on Friday of conspiring to build a non-Chinese supply chain and permanently strip it of its control, escalating tensions between the two sides.

In a separate statement, Nexperia’s Chinese arm demanded the Dutch business halt overseas expansion, including in Malaysia. “Abandon improper intentions to replace Chinese capacity,” Nexperia China said.
The accusations follow an open letter from Nexperia published on Thursday claiming repeated attempts to engage with its Chinese unit had failed.

Nexperia, which produces billions of chips for cars and electronics, has been in a tug-of-war since the Dutch government seized the company two months ago on economic security grounds. An Amsterdam court subsequently stripped Wingtech of control.

Beijing retaliated by halting exports of Nexperia’s finished products on October 4, leading to disruptions in global automotive supply chains.

The curbs were relaxed in early November and the Dutch government suspended the seizure last week following talks. But the court ruling remains in force.

The chipmaker’s Europe-based units and Chinese entities remain locked in a standoff. Nexperia’s Chinese arm declared itself independent from European management, which responded by stopping the shipment of wafers to the company’s plant in China.

CHINESE PARENT WARNS OF RENEWED SUPPLY CHAIN DISRUPTION

The escalating war of words casts doubt on the viability of a company-led resolution urged by China and the European Union this week.

Wingtech said on Friday that Nexperia’s Dutch unit was avoiding the issue of its “legitimate control”, making negotiations untenable.

“We need to find a way first to talk to one another constructively,” a spokesperson for Nexperia’s European headquarters said on Friday.

Nexperia China said that the Dutch unit’s claim that it could not contact its management was misleading, accusing it of stifling communication by deleting the email accounts of Nexperia China employees and terminating their access to IT systems.

The Chinese unit claimed that the Dutch side was engineering a breakup, citing a $300 million plan to expand a Malaysian plant and an alleged internal goal of sourcing 90% of production outside China by mid-2026.
[…]

Source: Nexperia accused by parent Wingtech and Chinese unit of plotting to move supply chain | Reuters

Nexperia crisis: Dutch chipmaker wants continuity from China unit, which is angry that Nexperia wants to open factories outside of China

Dutch chipmaker Nexperia has publicly called on its China unit to help restore supply chain operations, warning in an open letter that customers across industries are reporting “imminent production outages.”

Nexperia’s Dutch unit said Thursday that its open letter followed “repeated attempts to establish direct communication through conventional channels” but did not have “any meaningful response.”

The letter marks the latest twist in a long-running saga that has threatened global automotive supply chains and stoked a bitter battle between Amsterdam and Beijing over technology transfer.

“We welcomed the Chinese authorities’ commitment to facilitate the resumption of exports from Nexperia’s Chinese facility and that of our subcontractors, enabling the continued flow of our products to global markets,” Nexperia’s Dutch unit said in the letter.

“Nevertheless, customers across industries are still reporting imminent production stoppages. This situation cannot persist,” they added. The group called on the leadership of Nexperia’s entities in China to take steps to restore the established supply flows without delay.

In a statement, Wingtech Technology, Nexperia’s Chinese parent company, said on Friday that the Dutch unit’s open letter contained “a large number of misleading and untrue allegations.”

It said the “unlawful deprivation of Wingtech’s control and shareholder rights over Nexperia” was the root cause of the ongoing supply chain chaos.

“Combined with the recent series of actions by the Dutch government and Nexperia B.V., we believe their true intention is to buy time for Nexperia B.V. to construct a ‘de-China-ized’ supply chain and permanently strip Wingtech of its shareholder rights,” Wingtech said.


Nexperia manufactures billions of so-called foundation chips — transistors, diodes and power management components — that are produced in Europe, assembled and tested in China, and then re-exported to customers in Europe and elsewhere.

The chips are relatively low-tech and inexpensive but are needed in almost every device that uses electricity. In cars, those chips are used to connect the battery to motors, for lights and sensors, for braking systems, airbag controllers, entertainment systems and electric windows.

How did we get here?

The situation began in September, when the Dutch government invoked a Cold War-era law to effectively take control of Nexperia. The highly unusual move was reportedly made after the U.S. raised security concerns.

Beijing responded by moving to block its products from leaving China, which, in turn, raised the alarm among global automakers as they faced shortages of the chipmaker’s components.

In an apparent reprieve last week, however, the Dutch government said it had suspended its state intervention at Nexperia following talks with Chinese authorities. It was thought at the time that this could bring an end to the dispute and pave the way for a restoration of normal supply chains.

Rico Luman, senior sector economist for transport and logistics at Dutch bank ING, said it remains unclear how long the situation will last.

“The imposed measures to seize the Dutch Nexperia subsidiary have been lifted, but there are still talks ongoing about restoring the corporate structure and relation with parent company Wingtech,” Luman told CNBC by email.

“It’s not only about supplies of finished chips, it’s also about wafer supplies from Europe to the Chinese entity,” Luman said, adding that companies including Japan’s Nissan and German auto supplier Bosch are among the firms to have warned about looming shortages.

[…]

Source: Nexperia crisis: Dutch chipmaker issues urgent plea to its China unit

Canadian data order risks blowing a hole in EU sovereignty

A Canadian court has ordered French cloud provider OVHcloud to hand over customer data stored in Europe, potentially undermining the provider’s claims about digital sovereignty protections.

According to documents seen by The Register, the Royal Canadian Mounted Police (RCMP) issued a Production Order in April 2024 demanding subscriber and account data linked to four IP addresses on OVH servers in France, the UK, and Australia as part of a criminal investigation.

OVH has a Canadian arm, which was the jumping-off point for the courts, but OVH Group is a French company, so the data in France should be protected from prying eyes. Or perhaps not.

Rather than using established Mutual Legal Assistance Treaties (MLAT) between Canada and France, the RCMP sought direct disclosure through OVH’s Canadian subsidiary.

This puts OVH in an impossible position. French law prohibits such data sharing outside official treaties, with penalties up to €90,000 and six months imprisonment. But refusing the Canadian order risks contempt of court charges.

[…]

Under Trump 2.0, economic and geopolitical relations between Europe and the US have become increasingly volatile, something Microsoft acknowledged in April.

Against this backdrop, concerns about the US CLOUD Act are growing. Through the legislation, US authorities can request – via warrant or subpoena – access to data hosted by US corporations regardless of where in the world that data is stored. Hyperscalers claim they have received no such requests with respect to European customers, but the risk remains and European cloud providers have used this as a sales tactic by insisting digital information they hold is protected.

In the OVH case, if Canadian authorities are able to force access to data held on European servers rather than navigate official channels (for example, international treaties), the implications could be severe.

[…]

Earlier this week, GrapheneOS announced it no longer had active servers in France and was in the process of leaving OVH.

The privacy-focused mobile outfit said, “France isn’t a safe country for open source privacy projects. They expect backdoors in encryption and for device access too. Secure devices and services are not going to be allowed. We don’t feel safe using OVH for even a static website with servers in Canada/US via their Canada/US subsidiaries.”

In August, an OVH legal representative crowed over the admission by Microsoft that it could not guarantee data sovereignty.

It would be deeply ironic if OVH were unable to guarantee the same thing because the company has a subsidiary in Canada.

[…]

Source: Canadian data order risks blowing a hole in EU sovereignty • The Register

That didn’t take long: A few days after Chat Control, European Parliament calls for Age Verification on Social Media, 16+

On Wednesday, MEPs adopted a non-legislative report by 483 votes in favour, 92 against and with 86 abstentions, expressing deep concern over the physical and mental health risks minors face online and calling for stronger protection against the manipulative strategies that can increase addiction and that are detrimental to children’s ability to concentrate and engage healthily with online content.


Minimum age for social media platforms

To help parents manage their children’s digital presence and ensure age-appropriate online engagement, Parliament proposes a harmonised EU digital minimum age of 16 for access to social media, video-sharing platforms and AI companions, while allowing 13- to 16-year-olds access with parental consent.

Expressing support for the Commission’s work to develop an EU age verification app and the European digital identity (eID) wallet, MEPs insist that age assurance systems must be accurate and preserve minors’ privacy. Such systems do not relieve platforms of their responsibility to ensure their products are safe and age-appropriate by design, they add.

To incentivise better compliance with the EU’s Digital Services Act (DSA) and other relevant laws, MEPs suggest senior managers could be made personally liable in cases of serious and persistent non-compliance, with particular respect to protection of minors and age verification.

[…]

According to the 2025 Eurobarometer, over 90% of Europeans believe action to protect children online is a matter of urgency, not least in relation to social media’s negative impact on mental health (93%), cyberbullying (92%) and the need for effective ways to restrict access to age-inappropriate content (92%).

Member states are starting to take action and responding with measures such as age limits and verification systems.

Source: Children should be at least 16 to access social media, say MEPs | News | European Parliament

Expect to see mandatory surveillance on social media (whatever they define that to be) soon, as it is clearly “risky”.

The problem is real, but age verification is not the way to solve the problem. Rather, it will make it much, much worse as well as adding new problems entirely.

See also: https://www.linkielist.com/?s=age+verification&submit=Search

See also: European Council decides to implement Mass Surveillance and Age Verification through law protecting children from online abuse

Welcome to a new fascist, thought-controlled Europe, heralded by Denmark.

Chat Control: EU lawmakers finally agree on the “voluntary” scanning of your private chats

[…] The EU Council has finally reached an agreement on the controversial Child Sexual Abuse Regulation (CSAR) after more than three years of failed attempts.

Nicknamed Chat Control by its critics, the agreement has kept cryptographers, technologists, encrypted service providers, and privacy experts alike in turmoil since its inception.

Presidency after presidency, the bill has taken many shapes. But its most controversial feature is an obligation for all messaging service providers operating in the EU – including those using end-to-end-encryption – to scan their users’ private chats on the lookout for child sexual abuse material (CSAM).

At the beginning of the month, the Danish Presidency decided to change its approach with a new compromise text that makes the chat scanning voluntary instead. That turned out to be a winning move, with the proposal managing to reach an agreement in the Council on Wednesday, November 26, 2025.

Privacy experts are unlikely to celebrate, though. The decision came a few days after a group of scientists wrote yet another open letter warning that the latest text still “brings high risks to society.” That’s after other privacy experts deemed the new proposal a “political deception” rather than an actual fix.

The EU Council is now preparing to start negotiations with the European Parliament, hoping to agree on the final terms of the regulation.

What we know about the Council agreement

As per the EU Council announcement, the new law imposes a series of obligations on digital companies. Under the new rules, online service providers will be required to assess how their platforms could be misused and, based on the results, may need to “implement mitigating measures to counter that risk,” the Council notes.

Source: Chat Control: EU lawmakers finally agree on the voluntary scanning of your private chats | TechRadar

A “risk mitigation obligation” can be used to justify anything and to mandate spying through whatever services the EU says carry “risk”

Considering the whole proposal was shot down several times in past years, and even last month, using a backdoor rush to push this through is not how a democracy is supposed to function at all. And this is how fascism sinks in its iron claws. What is going on in Denmark?

European Council decides to implement Mass Surveillance and Age Verification through law protecting children from online abuse

[…]

Under the new rules, online service providers will be required to assess the risk that their services could be misused for the dissemination of child sexual abuse material or for the solicitation of children. On the basis of this assessment, they will have to implement mitigating measures to counter that risk. Such measures could include making available tools that enable users to report online child sexual abuse, to control what content about them is shared with others and to put in place default privacy settings for children.

Member states will designate national authorities (‘coordinating and other competent authorities’) responsible for assessing these risk assessments and mitigating measures, with the possibility of obliging providers to carry out mitigating measures.

[…]

The Council also wants to make permanent a currently temporary measure that allows companies to – voluntarily – scan their services for child sexual abuse. At present, providers of messaging services, for instance, may voluntarily check content shared on their platforms for online child sexual abuse material.

[Note here: if it is deemed “risky” then the voluntary part is scrubbed and it becomes mandatory. Anything can be called “risky” very easily (just look at the data slurping that goes on in Terms of Services through the text “improving our product”).]

The new law provides for the setting up of a new EU agency, the EU Centre on Child Sexual Abuse, to support the implementation of the regulation.

The EU Centre will assess and process the information supplied by the online providers about child sexual abuse material identified on services, and will create, maintain and operate a database for reports submitted to it by providers. It will further support the national authorities in assessing the risk that services could be used for spreading child sexual abuse material.

The Centre is also responsible for sharing companies’ information with Europol and national law enforcement bodies. Furthermore, it will establish a database of child sexual abuse indicators, which companies can use for their voluntary activities.

Source: Child sexual abuse: Council reaches position on law protecting children from online abuse – Consilium

The article does not mention how you can find out whether someone is a child: that is age verification. Which comes with huge rafts of problems, such as censorship (there goes the LGBTQ crowd!), hacks (see Discord) leaking all the government IDs used to verify ages, and of course the workarounds people find (VPNs, meme pictures of Donald Trump), which push kids toward more unpredictable behaviour and thus harm the very children this is supposed to protect.

Of course, this law has been shot down several times in the past 3 years by the EU, but that didn’t stop Denmark from finding a way to implement it nonetheless, in a backdoor, shotgun kind of way.

Age Verification, Estimation, Assurance, Oh My! A Guide To The Terminology

If you’ve been following the wave of age-gating laws sweeping across the country and the globe, you’ve probably noticed that lawmakers, tech companies, and advocates all seem to be using different terms for what sounds like the same thing. Age verification, age assurance, age estimation, age gating—they get thrown around interchangeably, but they technically mean different things. And those differences matter a lot when we’re talking about your rights, your privacy, your data, and who gets to access information online.

[click the source link below to read the different definitions – ed]

Why This Confusion Matters

Politicians and tech companies love using these terms interchangeably because it obscures what they’re actually proposing. A law that requires “age assurance” sounds reasonable and moderate. But if that law defines age assurance as requiring government ID verification, it’s not moderate at all—it’s mass surveillance. Similarly, when Instagram says it’s using “age estimation” to protect teens, that sounds privacy-friendly. But when their estimation fails and forces you to upload your driver’s license instead, the privacy promise evaporates.

Here’s the uncomfortable truth: most lawmakers writing these bills have no idea how any of this technology actually works. They don’t know that age estimation systems routinely fail for people of color, trans individuals, and people with disabilities. They don’t know that verification systems have error rates. They don’t even seem to understand that the terms they’re using mean different things. The fact that their terminology is all over the place—using “age assurance,” “age verification,” and “age estimation” interchangeably—makes this ignorance painfully clear, and leaves the onus on platforms to choose whichever option best insulates them from liability.

Language matters because it shapes how we think about these systems. “Assurance” sounds gentle. “Verification” sounds official. “Estimation” sounds technical and impersonal, and also admits its inherent imprecision. But they all involve collecting your data and create a metaphysical age gate to the internet. The terminology is deliberately confusing, but the stakes are clear: it’s your privacy, your data, and your ability to access the internet without constant identity checks. Don’t let fuzzy language disguise what these systems really do.

Republished from EFF’s Deeplinks blog.

Source: Age Verification, Estimation, Assurance, Oh My! A Guide To The Terminology | Techdirt

Danes manage to bypass democracy to implement mass EU surveillance, say it is “voluntary”

The EU states agree on a common position on chat control. Internet services will be allowed to scan communications voluntarily, but will not be obliged [*cough – see bold and end of document: Ed*] to do so. We publish the classified negotiating minutes and the bill. After the formal decision, the trilogue negotiations begin.

18.11.2025, 14:03 – Andre Meister – in Surveillance

[Image: Presidency of the Council, Danish Minister of Justice Hummelgaard at the lectern. – CC-BY-NC-ND 4.0 Danish Presidency]

The EU states have agreed on a common position on chat control. We publish the bill.

Last week, the Council working group discussed the law. We are once again publishing the classified minutes of that meeting.

Tomorrow, the Permanent Representatives want to officially decide on the position.

Update 19.11.: A Council spokesperson tells us, “The agenda item has been postponed until next week.”

Three years of dispute

For three and a half years, the EU institutions have been arguing over chat control. The Commission wants to oblige internet services to search their users’ content, without cause, for evidence of criminal offences and to report suspected material to the authorities.

Parliament calls this mass surveillance and wants only unencrypted content from suspects to be scanned.

A majority of EU countries want mandatory chat control. However, a blocking minority rejects this. Now the Council has agreed on a compromise: internet services are not required to carry out chat control, but may do so voluntarily.

Absolute red lines

The Danish Presidency wants to bring the draft law through the Council “as soon as possible” so that the trilogue negotiations can be started in a timely manner. The feedback from the states should be limited to “absolute red lines”.

The majority of states “supported the compromise proposal.” At least 15 spoke out in favour, including Germany and France.

Germany “welcomed both the deletion of the mandatory measures and the permanent anchoring of voluntary measures.”

Italy is also skeptical of voluntary chat control: “We fear that the instrument could also be extended to other crimes, so we have difficulty supporting the proposal.” Politicians have already called for chat control to be extended to other content.

Absolute minimum consensus

Other states called the compromise “an absolute minimum consensus.” They “actually wanted more – especially in the sense of obligations.” Some states “showed themselves clearly disappointed by the deletions made.”

Spain, in particular, “still considered mandatory measures to be necessary; unfortunately, a comprehensive agreement on this was not possible.” Hungary, too, “saw voluntariness alone as too little.”

Spain, Hungary and Bulgaria proposed “an obligation for providers to at least carry out detection in open (unencrypted) areas.” The Danish Presidency “described the proposal as ambitious, but did not take it up, to avoid further discussion.”

Denmark explicitly pointed to the review clause, which “keeps open the possibility of detection orders at a later date.” Hungary stressed that “this possibility must also be used.”

No obligation

The Danish Presidency had publicly announced that the chat control should not be mandatory, but voluntary.

However, the compromise proposal as formulated was contradictory. The Presidency had deleted the article on mandatory chat control, yet another article said services should also carry out voluntary measures.

Several states asked whether these formulations “could lead to a de facto obligation.” The Legal Service agreed: “The wording can be interpreted in both directions.” The Presidency of the Council “clarified that the text contains only a risk mitigation obligation, not a detection obligation.”

The day after the meeting, the Presidency of the Council sent out the likely final draft law of the Council. It states explicitly: “No provision of this Regulation shall be interpreted as imposing detection obligations on providers.”

Damage and abuse

Mandatory chat control is not the only issue with the planned law. Voluntary chat control is also controversial, and the European Commission cannot prove its proportionality. Many oppose voluntary chat control, including the EU Commission, the European Data Protection Supervisor and the German Data Protection Supervisor.

A number of scientists are critical of the compromise proposal and do not consider voluntary chat control appropriate: “Their benefit is not proven, while the potential for harm and abuse is enormous.”

The law also calls for mandatory age checks. The scientists criticize that age checks “bring with them an inherent and disproportionate risk of serious data breaches and discrimination, without guaranteeing their effectiveness.” The Federal Data Protection Officer also fears a “large-scale abolition of anonymity on the Internet.”

Now comes the trilogue

The EU countries will not discuss these points further. The Danish Presidency “reaffirmed its commitment to the compromise proposal without the Spanish proposals.”

The Permanent Representatives of the EU States will meet next week. In December, the justice and interior ministers meet. These two bodies are to adopt the bill as the official position of the Council.

This is followed by the trilogue, in which the Commission, Parliament and the Council negotiate a compromise between their three separate positions.

[…]

A “risk mitigation obligation” can be used to justify anything and to mandate spying through whatever services the EU says carry “risk”

Source: Translated from EU states agree on voluntary chat control

Considering the whole proposal was shot down several times in past years, and even last month, using a backdoor rush to push this through is not how a democracy is supposed to function at all. And this is how fascism sinks in its iron claws. What is going on in Denmark?

For more information on the history of Chat Control click here

DOGE Is Officially Dead, all government data still in Musk’s hands though

After months of controversy, Elon Musk and Donald Trump’s failed passion project to cut costs across the federal government is officially dead, ahead of schedule.

Earlier this month, Office of Personnel Management director Scott Kupor told Reuters that the Department of Government Efficiency “doesn’t exist.”

Even though there are eight more months left on its mandate, DOGE is no longer a “centralized entity,” according to Kupor. Instead, the Office of Personnel Management, an existing independent agency that has been overseeing the federal workforce for decades, will be taking over most of DOGE’s functions.

[…]

DOGE had a short but eventful life. Trump announced the creation of the “agency” immediately after his election last year. The cuts began shortly after Trump took office, with Musk taking a figurative and literal chainsaw to the federal government. With DOGE, Musk completely gutted the Department of Education, laid off a good chunk of the government’s cybersecurity officials, caused the deaths of an estimated 638,000 people around the world with funding cuts to USAID, and stripped more than a quarter of the Internal Revenue Service’s workforce (most of these positions are now reportedly being filled by AI agents). Several DOGE staffers have also since ended up practically taking over other federal agencies like the Department of Health and Human Services and the Social Security Administration.

All that carnage ended up being for practically nothing. A Politico analysis from earlier this year claimed that even though DOGE purported to have saved Americans billions of dollars, only a fraction of that has been realized. Another report, this time by the Senate Permanent Subcommittee on Investigations, said that DOGE ended up spending more money than it saved while trying to downsize the government. Musk Watch, a tracker set up by veteran independent journalists, has been able to verify $16.3 billion in federal cuts, significantly less than the $165 billion that DOGE has claimed in the past, and a drop in the bucket compared to DOGE’s original claim that it would eliminate $2 trillion in spending.

[…]

Source: DOGE Is Officially Dead

Why is nobody talking about the data grab that Musk has performed?

European Commission, in pocket of US Big Tech, moves to massively roll back digital protections

The European Commission has been accused of “a massive rollback” of the EU’s digital rules after announcing proposals to delay central parts of the Artificial Intelligence Act and water down its landmark data protection regulation.

If agreed, the changes would make it easier for tech firms to use personal data to train AI models without asking for consent, and would try to end “cookie banner fatigue” by reducing the number of times internet users have to give their permission to be tracked on the internet.

The commission also confirmed the intention to delay the introduction of central parts of the AI Act, which came into force in August 2024 and does not yet fully apply to companies.

Companies making high-risk AI systems, namely those posing risks to health, safety or fundamental rights, such as those used in exam scoring or surgery, would get up to 18 months longer to comply with the rules.

The plans were part of the commission’s “digital omnibus”, which tries to streamline tech rules including GDPR, the AI Act, the ePrivacy directive and the Data Act.

After a long period of rule-making, the EU agenda has shifted since the former Italian prime minister Mario Draghi warned in a report last autumn that Europe had fallen behind the US and China in innovation and was weak in the emerging technologies that would drive future growth, such as AI. The EU has also come under heavy pressure from the Trump administration to rein in digital laws.

[…]

They are part of the bloc’s wider drive for “simplification”, with plans under way to scale back regulation on the environment, company reporting on supply chains, and agriculture. Like these other proposals, the digital omnibus will need to be approved by EU ministers and the European parliament.

European Digital Rights (EDRi), a pan-European network of NGOs, described the plans as “a major rollback of EU digital protections” that risked dismantling “the very foundations of human rights and tech policy in the EU”.

In particular, it said that changes to GDPR would allow “the unchecked use of people’s most intimate data for training AI systems” and that a wide range of exemptions proposed to online privacy rules would mean businesses would be able to read data on phones and browsers without asking.

European business groups welcomed the proposals but said they did not go far enough. A representative from the Computer and Communications Industry Association, whose members include Amazon, Apple, Google and Meta, said: “Efforts to simplify digital and tech rules cannot stop here.” The CCIA urged “a more ambitious, all-encompassing review of the EU’s entire digital rulebook”.

Critics of the shake-up included the EU’s former commissioner for enterprise, Thierry Breton, who wrote in the Guardian that Europe should resist attempts to unravel its digital rulebook “under the pretext of simplification or remedying an alleged ‘anti-innovation’ bias. No one is fooled over the transatlantic origin of these attempts.”

[…]

Source: European Commission accused of ‘massive rollback’ of digital protections | The Guardian

Yes, the simplification change allowing cookie consent to be stored in the browser is a good one. But allowing AI systems to run amok without proper oversight, especially in high-risk domains, and allowing large companies to operate without rules, only benefits the players that can afford to play in these domains: big (US) tech, and the far right, which gains more mass surveillance tools.

EU proposes doing away with constant cookie requests by letting you set the “No” in your browser settings

People will no longer be bombarded by constant requests to accept or reject “cookies” when browsing the internet, under proposed changes to the European Union’s strict data privacy laws.

The pop-up prompts asking internet users to consent to cookies when they visit a website are widely seen as a nuisance, undermining the original privacy intentions of the digital rules.

[I don’t think this undermines anything – cookie consent got rid of a LOT of spying, and everyone now just automatically clicks on NO or uses add-ons to do it (well, if you are using Firefox as a browser). The original purpose – stopping companies from spying – has been achieved.]

Brussels officials have now tabled changes that would allow people to accept or reject cookies for a six-month period, and potentially set their internet browser to automatically opt in or out, to avoid being repeatedly asked whether they consent to websites remembering information about their past visits.

Cookies allow websites to keep track of a user’s previous activity, allowing sites to pull up items added to an online shopping cart that were not purchased, or remember whether someone had logged in to an account on the site before, as well as target advertisements.

[…]

Source: EU proposes doing away with constant internet ‘cookies’ requests – The Irish Times
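
The proposal doesn’t yet pin down the technical mechanism for that browser-level signal, but the existing Global Privacy Control header (Sec-GPC) gives a sense of how one could work: the browser attaches the preference to every request, and the site honors it without ever showing a banner. A minimal server-side sketch, assuming a GPC-style header (the Flask app and cookie name are purely illustrative, not anything the Commission has specified):

```python
# Minimal sketch: honoring a browser-level tracking opt-out,
# modeled on the Global Privacy Control header (Sec-GPC).
# The EU proposal has not defined its own signal yet, so this
# is illustrative, not an implementation of the omnibus.
from flask import Flask, request, make_response

app = Flask(__name__)

@app.route("/")
def index():
    resp = make_response("Hello")
    if request.headers.get("Sec-GPC") == "1":
        # The browser said "No" once, in settings: no banner,
        # no tracking cookies, nothing more to ask.
        return resp
    # No opt-out signal: a strictly necessary session cookie is
    # fine, but tracking cookies would still need explicit consent.
    resp.set_cookie("session_id", "abc123", httponly=True,
                    samesite="Lax")
    return resp
```

The appeal of the design is that the “No” is answered once, in one place, instead of once per website.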

Tokyo Court Finds Cloudflare Liable for All Content It Allows Access To, Expects Verification of All Its Users, and Compliance With Lawyers’ Requests Without Court Verdicts, in Manga Piracy Lawsuit

Japanese manga publishers have declared victory over Cloudflare in a long-running copyright infringement liability dispute. Kadokawa, Kodansha, Shueisha and Shogakukan say that Cloudflare’s refusal to stop manga piracy sites meant they were left with no other choice but to take legal action. The Tokyo District Court rendered its decision this morning, finding Cloudflare liable for damages after it failed to sufficiently prevent piracy.

[…]

After a wait of more than three and a half years, the Tokyo District Court rendered its decision this morning. In a statement provided to TorrentFreak by the publishers, they declare “Victory Against Cloudflare” after the Court determined that Cloudflare is indeed liable for the pirate sites’ activities.

In a statement provided to TorrentFreak, the publishers explain that they alerted Cloudflare to the massive scale of the infringement, involving over 4,000 works and 300 million monthly visits, but their requests to stop distribution were ignored.

“We requested that the company take measures such as stopping the distribution of pirated content from servers under its management. However, Cloudflare continued to provide services to the manga piracy sites even after receiving notices from the plaintiffs,” the group says.

The publishers add that Cloudflare continued to provide services even after receiving information disclosure orders from U.S. courts, leaving them with “no choice but to file this lawsuit.”

Factors Considered in Determining Liability

Decisions in favor of Cloudflare in the United States have proven valuable over the past several years. Yet while the Tokyo District Court considered many of the same key issues, various factors led to a finding of liability instead, the publishers note.

“The judgment recognized that Cloudflare’s failure to take timely and appropriate action despite receiving infringement notices from the plaintiffs, and its negligent continuation of pirated content distribution, constituted aiding and abetting copyright infringement, and that Cloudflare bears liability for damages to the plaintiffs,” they write.

“The judgment, in that regard, attached importance to the fact that Cloudflare, without conducting any identity verification procedures, had enabled a massive manga piracy site to operate ‘under circumstances where strong anonymity was secured,’ as a basis for recognizing the company’s liability.”

[…]

According to Japanese media, Cloudflare plans to appeal the verdict, as expected. In comments to the USTR last month, Cloudflare referred to a long-running dispute in Japan with the potential to negatively affect future business.

“One particular dispute reflects years of effort by Japan’s government and its publishing industry to impose additional obligations on intermediaries like CDNs,” the company’s submission reads (pdf).

“A fully adjudicated ruling that finds CDNs liable for monetary damages for infringing material would set a dangerous global precedent and necessitate U.S. CDN providers to limit the provision of global services to avoid liability, severely restricting market growth and expansion into Asian Pacific markets.”

Whether that heralds Cloudflare’s exit from the region is unclear.

[…]

Source: Tokyo Court Finds Cloudflare Liable For Manga Piracy in Long-Running Lawsuit | TorrentFreak

How Trademark Ruined Colorado-Style Pizza

You’ve heard of New York style, Chicago deep dish, Detroit square pans. But Colorado-style pizza? Probably not. And there’s a perfectly ridiculous reason why this regional style never spread beyond a handful of restaurants in the Rocky Mountains: one guy trademarked it and scared everyone else away from making it.

This story comes via a fascinating Sporkful podcast episode where reporter Paul Karolyi spent years investigating why Colorado-style pizza remains trapped in obscurity while other regional styles became national phenomena.

The whole episode is worth listening to for the detective work alone, but the trademark angle reveals something important about how intellectual property thinking can strangle cultural movements in their cradle.

Here’s the thing about pizza “styles”: they become styles precisely because they spread. New York, Chicago, Detroit, New Haven—these aren’t just individual restaurant concepts, they’re cultural phenomena adopted and adapted by hundreds of restaurants. That widespread adoption creates the network effects that make a “style” valuable: customers seek it out, restaurants compete to perfect it, food writers chronicle its evolution.

Colorado-style pizza never got that chance. When Karolyi dug into why, he discovered that Beau Jo’s—the restaurant credited with inventing the style—had locked it up legally. When he asked the owner’s daughter if other restaurants were making Colorado-style pizza, her response was telling:

We’re um a trademark, so they cannot.

Really?

Yes.

Beau owns a trademark for Colorado style pizza.

Yep.

When Karolyi finally tracked down the actual owner, Chip (after years of trying, which is its own fascinating subplot), he expected to hear about some grand strategic vision behind the trademark. Instead, he got a masterclass in reflexive IP hoarding:

Cuz it’s different and nobody else is doing that. So, why not do it Colorado style? I mean, there’s Chicago style and there’s Pittsburgh style and Detroit and everything else. Um, and we were doing something that was what was definitely different and um um licensing attorney said, “Yeah, we can do it” and we were able to.

That’s it. No business plan. No licensing strategy. Just “some lawyer said we can do it” so they did. This is the IP-industrial complex in microcosm: lawyers selling trademark applications because they can, not because they should.

I pressed my case to Chip that abandoning the trademark so others could also use it could actually be good for his business.

“If more places made Colorado style pizza, the style itself would become more famous, which would make more people come to Beau Jo’s to try the original. If imitation is the highest form of flattery, like everyone would know that Beau Jo was the originator. Like, do you ever worry or maybe do you think that the trademark has possibly hindered the spread of this style of pizza that you created that you should be getting credit for?”

“Never thought about it.”

“Well, what do you think about it now?”

“I don’t know. I have to think about that. It’s an interesting thought. I’ve never thought about it. I’m going to look into it. I’m going to look into it. I’m going to talk to some people and um I’m not totally opposed to it. I don’t know that it would be a good idea for us, but I’m willing to look at it.”

A few weeks later, Karolyi followed up with Chip. Predictably, the business advisors had circled the wagons. They “unanimously” told him not to give up the trademark—because of course they did. These are the same people who profit from maintaining artificial scarcity, even when it demonstrably hurts the very thing they’re supposedly protecting.

And so Colorado-style pizza remains trapped in its legal cage, known only to a handful of tourists who stumble across Beau Jo’s locations. A culinary innovation that could have sparked a movement instead became a cautionary tale about how IP maximalism kills the things it claims to protect.

This case perfectly illustrates the perverse incentives of modern IP thinking. We’ve created an entire industry of lawyers and consultants whose job is to convince business owners to “protect everything” on the off chance they might license it later. Never mind that this protection often destroys the very value they’re trying to capture.

The trademark didn’t just fail to help Beau Jo’s—it actively harmed them. As Karolyi documents in the podcast, the legal lockup has demonstrably scared off other restaurateurs from experimenting with Colorado-style pizza, ensuring the “style” remains a curiosity rather than a movement. Fewer competitors means less innovation, less media attention, and fewer customers seeking out “the original.” It’s a masterclass in how to turn potential network effects into network defects.

Compare this to the sriracha success story. David Tran of Huy Fong Foods deliberately avoided trademarking “sriracha” early on, allowing dozens of competitors to enter the market. The result? Sriracha became a cultural phenomenon, and Huy Fong’s distinctive rooster bottle became the most recognizable brand in a category they helped create. Even as IP lawyers kept circling, Tran understood what Chip apparently doesn’t:

“Everyone wants to jump in now,” said Tran, 70. “We have lawyers come and say ‘I can represent you and sue’ and I say ‘No. Let them do it.’” Tran is so proud of the condiment’s popularity that he maintains a daily ritual of searching the Internet for the latest Sriracha spinoff.

Sometimes the best way to protect your creation is to let it go. But decades of IP maximalist indoctrination have made this counterintuitive wisdom almost impossible to hear. Even when presented with a clear roadmap for how abandoning the trademark could grow his business, Chip couldn’t break free from the sunk-cost fallacy and his advisors’ self-interested counsel.

The real tragedy isn’t just that Colorado-style pizza remains obscure. It’s that this story plays out thousands of times across industries, with creators choosing artificial scarcity over organic growth, protection over proliferation. Every time someone trademarks a taco style or patents an obvious business method, they’re making the same mistake Chip made: confusing ownership with value creation.

Source: How Trademark Ruined Colorado-Style Pizza | Techdirt

Summarising a Book is now Potentially Copyright Infringing

A federal judge just ruled that computer-generated summaries of novels are “very likely infringing,” which would effectively outlaw many book reports. That seems like a problem.

The Authors Guild has one of the many lawsuits against OpenAI, and law professor Matthew Sag has the details on a ruling in that case that, if left in place, could mean that any attempt to merely summarize any copyright covered work is now possibly infringing. You can read the ruling itself here.

This isn’t just about AI—it’s about fundamentally redefining what copyright protects. And once again, something that should be perfectly fine is being treated as an evil that must be punished, all because some new machine did it.

But, I guess elementary school kids can rejoice that they now have an excuse not to do a book report.

[…]

Sag highlights how it could have a much more dangerous impact beyond getting kids out of their homework: making much of Wikipedia infringing.

A new ruling in Authors Guild v. OpenAI has major implications for copyright law, well beyond artificial intelligence. On October 27, 2025, Judge Sidney Stein of the Southern District of New York denied OpenAI’s motion to dismiss claims that ChatGPT outputs infringed the rights of authors such as George R.R. Martin and David Baldacci. The opinion suggests that short summaries of popular works of fiction are very likely infringing (unless fair use comes to the rescue).

This is a fundamental assault on the idea-expression distinction as applied to works of fiction. It places thousands of Wikipedia entries in the copyright crosshairs and suggests that any kind of summary or analysis of a work of fiction is presumptively infringing.

Short summaries of copyright-covered works should not implicate copyright in any way. Yes, as Sag points out, “fair use” can come to the rescue in some cases, but the old saw remains that “fair use is just the right to hire a lawyer.” And when the process is the punishment, saying that fair use will save you in these cases is of little comfort. Getting a ruling on fair use will run you hundreds of thousands of dollars at least.

Copyright is supposed to stop the outright copying of the copyright-protected expression. A summary is not that. It should not implicate the copyright in any form, and it shouldn’t require fair use to come to the rescue.

Sag lays out the details of what happened in this case:

Judge Stein then went on to evaluate one of the more detailed ChatGPT-generated summaries relating to A Game of Thrones, the 694-page novel by George R. R. Martin which eventually became the famous HBO series of the same name. Even though this was only a motion to dismiss, where the cards are stacked against the defendant, I was surprised by how easily the judge could conclude that:

“A more discerning observer could easily conclude that this detailed summary is substantially similar to Martin’s original work, including because the summary conveys the overall tone and feel of the original work by parroting the plot, characters, and themes of the original.”

The judge described the ChatGPT summaries as:

“most certainly attempts at abridgment or condensation of some of the central copyrightable elements of the original works such as setting, plot, and characters”

He saw them as:

“conceptually similar to—although admittedly less detailed than—the plot summaries in Twin Peaks and in Penguin Random House LLC v. Colting, where the district court found that works that summarized in detail the plot, characters, and themes of original works were substantially similar to the original works.” (emphasis added).

To say that the less than 580-word GPT summary of A Game of Thrones is “less detailed” than the 128-page Welcome to Twin Peaks Guide in the Twin Peaks case, or the various children’s books based on famous works of literature in the Colting case, is a bit of an understatement.

[…]

As Sag makes clear, there are few people out there who would legitimately think that the Wikipedia summary should be deemed infringing, which is why this ruling is notable. It again highlights how lots of people, including the media, lawmakers, and now (apparently) judges, get so distracted by the “but this new machine is bad!” in looking at LLM technology that they seem to completely lose the plot.

And that’s dangerous for the future of speech in general. We shouldn’t be tossing out fundamental key concepts in speech (“you can summarize a work of art without fear”) just because some new kind of summarization tool exists.

Source: Book Reports Potentially Copyright Infringing, Thanks To Court Attacks On LLMs | Techdirt

Switzerland plans surveillance worse than US

In Switzerland, a country known for its love for secrecy, particularly when it comes to banking, the tides have turned: An update to the VÜPF surveillance law directly targets privacy and anonymity services such as VPNs as well as encrypted chat apps and email providers. Right now the law is still under discussion in the Swiss Bundesrat.

[…]

While Swiss privacy has been overhyped, legislative rules in Switzerland are currently decent and comparable to German data protection laws. This update to the VÜPF, which could come into force by 2026, would change data protection legislation in Switzerland dramatically.

Why the update is dangerous

If the law passes in its current form,

  • Swiss email and VPN providers with as few as 5,000 users would be forced to log IP addresses and retain the data for six months – while data retention is illegal for email providers in Germany.
  • An ID or driver’s license – and possibly a phone number – would be required to register for various services, rendering anonymous usage impossible.
  • Data would have to be delivered upon request in plain text, meaning providers must be able to decrypt user data on their end (except for end-to-end encrypted messages exchanged between users).

What is more, the law is not being introduced by or via the Parliament. Instead, the Swiss government – the Federal Council and the Federal Department of Justice and Police (FDJP) – wants to massively expand internet surveillance by updating the VÜPF, without Parliament having a say. This comes as a shock in a country proud of its direct democracy, with regular popular votes on all kinds of laws. However, in 2016 the Swiss actually voted for more surveillance, so direct democracy might not help here.

History of surveillance in Switzerland

In 2016, the Swiss Parliament updated its data retention law, the BÜPF, to enforce retention of all communication data (post, email, phone, text messages, IP addresses). In 2018, the revision of the VÜPF translated this into administrative obligations for ISPs, email providers, and others, with exceptions depending on the size of the provider and whether it was classified as a telecommunications service provider or a communications service.

As a result, services such as Threema and ProtonMail were exempt from some of the obligations that providers such as Swisscom, Salt, and Sunrise had to comply with – even though the Swiss government would have liked to classify them as quasi network operators and telecommunications providers as well. The currently discussed update of the VÜPF seems to directly target smaller providers, as well as providers of anonymous services and VPNs.

The Swiss surveillance state has always sought a lot of power, and has had to be reined in by the Federal Supreme Court in the past to put surveillance on a sound legal basis.

But now, article 50a of the VÜPF reform mandates that providers must be able to remove “the encryption provided by them or on their behalf” – basically demanding backdoor access to encryption. End-to-end encrypted messages exchanged between users do not fall under this decryption obligation. Even so, Swiss email provider Proton Mail told Der Bund that “Swiss surveillance would be much stricter than in the USA and the EU, and Switzerland would lose its competitiveness as a business location.”

Because of this upcoming legal change in Switzerland, Proton has started to move its servers from Switzerland to the EU.

Source: Switzerland plans surveillance worse than US | Tuta
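
The carve-out in article 50a turns entirely on who holds the keys. A minimal sketch of that distinction, using symmetric keys purely for illustration (real end-to-end encryption involves key exchange between user devices, and no specific Swiss provider works exactly like this):

```python
# Why a provider can comply with a plain-text order for
# provider-managed encryption but not for E2EE: it comes down
# to who holds the key. Symmetric Fernet keys stand in for
# real key management here; this is illustrative only.
from cryptography.fernet import Fernet

# Case 1: "encryption provided by them or on their behalf".
# The key lives on the provider's servers, so the provider
# can decrypt on request.
provider_key = Fernet.generate_key()
stored = Fernet(provider_key).encrypt(b"user message")
print(Fernet(provider_key).decrypt(stored))  # b'user message'

# Case 2: end-to-end encrypted. The key exists only on the
# users' devices; the provider stores ciphertext it cannot
# read, so there is no plain text to hand over. This is why
# E2EE messages fall outside the decryption obligation.
device_key = Fernet.generate_key()  # never leaves user devices
ciphertext = Fernet(device_key).encrypt(b"user message")
# The provider sees only `ciphertext`, never `device_key`.
```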

Roblox begins asking tens of millions of children to send it a selfie for “age verification”

Roblox is starting to roll out the mandatory age checks that will require all of its users to submit an ID or scan their face in order to access the platform’s chat features. The updated policy, which the company announced earlier this year, will be enforced first in Australia, New Zealand and the Netherlands and will expand to all other markets by early next year.

The company also detailed a new “age-based chat” system, which will limit users’ ability to interact with people outside of their age group. After verifying or estimating a user’s age, Roblox will assign them to one of six age groups, ranging from 9 years and younger to 21 years and older. Teens and children will then be limited from connecting with people who aren’t in or close to their estimated age group in in-game chats.

Unlike most social media apps, which have a minimum age of 13, Roblox permits much younger children to use its platform. Since most children and many teens don’t have IDs, the company uses “age estimation” tech provided by identity company Persona. The checks, which use video selfies, are conducted within Roblox’s app, and the company says that images of users’ faces are immediately deleted after completing the process.

[…]

Source: Roblox begins asking tens of millions of children to verify their age with a selfie

Deleted by Roblox itself, but also by Persona? Pretty scary: (1) having a database of all these kids’ faces and their online personas, ways of talking and typing; and (2) even if the data is deleted, it could be intercepted as it is sent to Roblox and on to the verifier.
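
As for the “age-based chat” system itself: Roblox hasn’t published its exact matching rules, but the behavior described (six ordered age groups, chat limited to people “in or close to” your own group) reduces to a simple adjacency check. A toy sketch, in which the group labels and the “adjacent groups only” rule are assumptions on my part:

```python
# Toy sketch of age-based chat gating as described: six ordered
# age groups, chat allowed only within your own or an adjacent
# group. The labels and the adjacency rule are assumptions;
# Roblox has not published its actual matrix.
AGE_GROUPS = ["under 9", "9-12", "13-15", "16-17", "18-20", "21+"]

def can_chat(group_a: str, group_b: str) -> bool:
    """Allow chat only between the same or adjacent age groups."""
    i, j = AGE_GROUPS.index(group_a), AGE_GROUPS.index(group_b)
    return abs(i - j) <= 1

assert can_chat("9-12", "13-15")      # adjacent groups: allowed
assert not can_chat("9-12", "21+")    # far apart: blocked
```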