The Linkielist

Linking ideas with the world

What Happened After Security Researchers Found 60 Flock Cameras Livestreaming to the Internet

A couple of months ago, YouTuber Benn Jordan “found vulnerabilities in some of Flock’s license plate reader cameras,” reports 404 Media’s Jason Koebler. “He reached out to me to tell me he had learned that some of Flock’s Condor cameras were left live-streaming to the open internet.”

This led to a remarkable article in which Koebler confirmed the breach by visiting a Flock surveillance camera mounted on a California traffic signal. (“On my phone, I am watching myself in real time as the camera records and livestreams me — without any password or login — to the open internet… Hundreds of miles away, my colleagues are remotely watching me too through the exposed feed.”) Flock left livestreams and administrator control panels for at least 60 of its AI-enabled Condor cameras around the country exposed to the open internet, where anyone could watch them, download 30 days’ worth of video archive, change settings, see log files, and run diagnostics. Unlike many of Flock’s cameras, which are designed to capture license plates as people drive by, Flock’s Condor cameras are pan-tilt-zoom (PTZ) cameras designed to record and track people, not vehicles. Condor cameras can be set to automatically zoom in on people’s faces… The exposure was initially discovered by YouTuber and technologist Benn Jordan and was shared with security researcher Jon “GainSec” Gaines, who recently found numerous vulnerabilities in several other models of Flock’s automated license plate reader (ALPR) cameras.
Jordan appeared this week as a guest on Koebler’s own YouTube channel and released a video of his own about the experience, titled “We Hacked Flock Safety Cameras in under 30 Seconds.” (Thanks to Slashdot reader beadon for sharing the link.) Three weeks ago, Jordan and 404 Media also created another video together, titled “The Flock Camera Leak is Like Netflix for Stalkers,” which includes footage he says was “completely accessible at the time Flock Safety was telling cities that the devices are secure after they’re deployed.”

The video decries cities “too lazy to conduct their own security audit or research the efficacy versus risk,” but also calls weak security “an industry-wide problem.” Jordan explains in the video how he “very easily found the administration interfaces for dozens of Flock safety cameras…” — but also what happened next: None of the data or video footage was encrypted. There was no username or password required. These were all completely public-facing, for the world to see… Making any modification to the cameras is illegal, so I didn’t do this. But I had the ability to delete any of the video footage or evidence by simply pressing a button. I could see the paths where all of the evidence files were located on the file system…

During and after the process of conducting that research and making that video, I was visited by the police and had what I believed to be private investigators outside my home photographing me and my property and bothering my neighbors. Jon Gaines, aka GainSec, the brains behind most of this research, lost employment within 48 hours of the video being released. And the sad reality is that I don’t view these things as consequences or punishment for researching security vulnerabilities. I view these as consequences and punishment for doing it ethically and transparently.

I’ve been contacted by people on or communicating with civic councils who found my videos concerning, and they shared Flock Safety’s response with me. The company claimed that the devices in my video did not reflect the security standards of the ones being publicly deployed. The CEO even posted on LinkedIn and boasted about Flock Safety’s security policies. So, I formally and publicly offered to personally fund security research into Flock Safety’s deployed ecosystem. But the law prevents me from touching their live devices. So, all I needed was their permission so I wouldn’t get arrested. And I was even willing to let them supervise this research.

I got no response.

So instead, he read Flock’s official response to a security/surveillance industry research group — while standing in front of one of their security cameras, streaming his reading to the public internet.

“Might as well. It’s my tax dollars that paid for it.”

“‘Flock is committed to continuously improving security…’”

Source: What Happened After Security Researchers Found 60 Flock Cameras Livestreaming to the Internet | Slashdot

For more on why Flock cameras are problematic, read here

CD Projekt Takes Down VR Mod for Cyberpunk – Because It Was Paid

Yes, the TOS don’t allow commercial mods, which has pluses and minuses. So, yes, technically CD Projekt Red is in the right. However, it takes a lot of work and time to make some of these mods, and if you want to get paid for it, that is your right. Just as much as it is your right not to buy it if you don’t like it. Whatever.

There are loads of paid external services that run on top of Amazon, Paypal, Ebay, Discord, most AI products are built on top of OpenAI, etc. It’s a valid (if risky, due to the dependency) way to create value for people.

It seems to me that the TOS are overextended, though. How can you legally determine what someone will do with a product they bought? US law is pretty bizarre in that respect, just as companies can get away with disallowing reverse engineering and lock people into buying hugely overpriced repairs and replacement parts only from them. Maybe look at China to see how this kind of law kills innovation, and look at monopolies to see how this drives costs up and removes choice for consumers.

[…] Now that the dust has settled, I’m even more sorry to announce that we are leaving behind an adventure that so many of you deeply loved and enjoyed. CD PROJEKT S.A. decided that they would follow in Take-Two Interactive Software’s steps and issued a DMCA notice against me for the removal of the Cyberpunk 2077 VR mod.

At least they were a little more open about it, and I could get a reply both from their legal department and from the VP of business development. But in the end it amounted to the same iron-clad corpo logic: every little action that a company takes is in the name of money, but everything that modders do must be absolutely for free.

As usual they stretch the concept of “derivative work” until it’s paper-thin, as though a system that allows visualizing 40+ games in fully immersive 3D VR was somehow built making use of their intellectual property. And as usual they give absolutely zero f***s about how playing their game in VR made people happy, and they cannot just be grateful about the extra copies of the title they sold because of that—without ever having to pour money into producing an official conversion (no, they’re not planning to release their own VR port, in case you were wondering). […]

Source: Another one bites the dust | Patreon

Signal Founder Creates Truly Private GPT: Confer

When you use an AI service, you’re handing over your thoughts in plaintext. The operator stores them, trains on them, and, inevitably, will monetize them. You get a response; they get everything.

Confer works differently. In the previous post, we described how Confer encrypts your chat history with keys that never leave your devices. The remaining piece to consider is inference—the moment your prompt reaches an LLM and a response comes back.

Traditionally, end-to-end encryption works when the endpoints are devices under the control of a conversation’s participants. However, AI inference requires a server with GPUs to be an endpoint in the conversation. Someone has to run that server, but we want to prevent the people who are running it (us) from seeing prompts or the responses.

Confidential computing

This is the domain of confidential computing. Confidential computing uses hardware-enforced isolation to run code in a Trusted Execution Environment (TEE). The host machine provides CPU, memory, and power, but cannot access the TEE’s memory or execution state.

LLMs are fundamentally stateless—input in, output out—which makes them ideal for this environment. For Confer, we run inference inside a confidential VM. Your prompts are encrypted from your device directly into the TEE using Noise Pipes, processed there, and responses are encrypted back. The host never sees plaintext.
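The flow described above — a key agreement between the client device and the TEE, with the host acting only as an untrusted relay — can be sketched in miniature. This is a toy model, not Confer’s actual protocol: a real Noise pipe uses X25519 and an authenticated AEAD cipher, while this sketch substitutes classic finite-field Diffie-Hellman and an HMAC-based keystream purely so it runs with the Python standard library.

```python
import hashlib
import hmac
import secrets

# RFC 3526 2048-bit MODP prime, generator 2. A real Noise pipe would use
# X25519 instead -- this stands in so the sketch needs only the stdlib.
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E08"
    "8A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B"
    "302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9"
    "A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE6"
    "49286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8"
    "FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D"
    "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C"
    "180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF695581718"
    "3995497CEA956AE515D2261898FA051015728E5A8AACAA68FFFFFFFF"
    "FFFFFFFF", 16)
G = 2

def keypair():
    priv = secrets.randbits(256)
    return priv, pow(G, priv, P)

def session_key(priv, peer_pub):
    # Both sides compute the same shared secret, then hash it into a key.
    shared = pow(peer_pub, priv, P)
    return hashlib.sha256(shared.to_bytes(256, "big")).digest()

def xor_stream(key, data):
    # HMAC-SHA256 in counter mode as a keystream (toy cipher, unauthenticated).
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Client and TEE each generate a keypair; the host relays only public values.
client_priv, client_pub = keypair()
tee_priv, tee_pub = keypair()

k_client = session_key(client_priv, tee_pub)
k_tee = session_key(tee_priv, client_pub)
assert k_client == k_tee  # both ends derived the same session key

prompt = b"summarize my medical records"
ciphertext = xor_stream(k_client, prompt)       # all the host ever sees
assert ciphertext != prompt
assert xor_stream(k_tee, ciphertext) == prompt  # TEE recovers the prompt
```

The point the sketch makes is structural: the host forwards `client_pub`, `tee_pub`, and `ciphertext`, none of which lets it derive the session key, so the plaintext exists only on the device and inside the enclave.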

But this raises an obvious concern: even if we have encrypted pipes in and out of an encrypted environment, it really matters what is running inside that environment. The client needs assurance that the code running is actually doing what it claims.

[…]

Source: Private inference | Confer Blog

Europe is Rediscovering the Virtues of Cash

After spending years pushing digital payments to combat tax evasion and money laundering, European Union ministers decided in December to ban businesses from refusing cash. The reversal comes as 12% of European businesses flatly refused cash in 2024, up from 4% three years earlier.

Over one in three cinemas in the Netherlands no longer accept notes and coins. Cash usage across the euro area dropped from 79% of in-person transactions in 2016 to just 52% in 2024. Sweden leads the digital shift: 90% of purchases there now happen digitally, and cash represents under 1% of GDP, compared with 22% in Japan.

The policy change stems from concerns about financial inclusion for elderly and poor populations who struggle with digital systems. Resilience worries also drove the decision after Spaniards facing nationwide power cuts last spring found themselves unable to buy food. European officials worry about dependence on American payment giants Visa and MasterCard. The EU now recommends citizens store enough cash to survive a week without electricity or internet access.

Source: Europe is Rediscovering the Virtues of Cash | Slashdot

Also, when under digital attack it’s useful to be able to get at your money. This is not theoretical: Russian attacks on banks regularly take down Finnish payment systems.

EU seeks feedback on Open Digital Ecosystems

It’s important that you give your feedback on this:

The European Open Digital Ecosystem Strategy will set out:

  • a strategic approach to the open source sector in the EU that addresses the importance of open source as a crucial contribution to EU technological sovereignty, security and competitiveness
  • a strategic and operational framework to strengthen the use, development and reuse of open digital assets within the Commission, building on the results achieved under the 2020-2023 Commission Open Source Software Strategy.

Source: Call for evidence: European Open Digital Ecosystems

The US muscled the EU into adopting Article 6 of the EU Copyright Directive, preventing reverse engineering in return for free trade. By implementing tariffs, the US broke that agreement. There’s no reason not to delete Article 6 of the EUCD, and all the other laws that prevent European companies from jailbreaking iPhones and making their own App Stores (minus Apple’s 30% commission), as well as ad-blockers for Facebook’s and Instagram’s apps (which would zero out EU revenue for Meta), and, of course, jailbreaking tools for Xboxes, Teslas, and every make and model of every American car, so European companies could offer service, parts, apps, and add-ons for them. Video games need to remain runnable after official support ends and their servers shut down. We need to get out from under the high-tech lock-in scams, we need to get rid of e-waste, and we need to get back to ownership of the products we buy. This is an important part of digital sovereignty, and in an uncertain world with unreliable partners, the importance of being able to follow EU values needs to be underscored. FOSS, and allowing FOSS to develop, is an important linchpin of this.

Cloudflare defies Italy’s Piracy Shield, won’t block websites on 1.1.1.1 DNS – won’t cave to media cabal. Well done.

Italy fined Cloudflare 14.2 million euros for refusing to block access to pirate sites on its 1.1.1.1 DNS service, the country’s communications regulatory agency, AGCOM, announced yesterday. Cloudflare said it will fight the penalty and threatened to remove all of its servers from Italian cities.

AGCOM issued the fine under Italy’s controversial Piracy Shield law, saying that Cloudflare was required to disable DNS resolution of domain names and routing of traffic to IP addresses reported by copyright holders. The law provides for fines up to 2 percent of a company’s annual turnover, and the agency said it applied a fine equal to 1 percent.

The fine relates to a blocking order issued to Cloudflare in February 2025. Cloudflare argued that installing a filter applying to the roughly 200 billion daily requests to its DNS system would significantly increase latency and negatively affect DNS resolution for sites that aren’t subject to the dispute over piracy.

AGCOM rejected Cloudflare’s arguments. The agency said the required blocking would impose no risk on legitimate websites because the targeted IP addresses were all uniquely intended for copyright infringement.

In a September 2025 report on Piracy Shield, researchers said they found “hundreds of legitimate websites unknowingly affected by blocking, unknown operators experiencing service disruption, and illegal streamers continuing to evade enforcement by exploiting the abundance of address space online, leaving behind unusable and polluted address ranges.” This is “a conservative lower-bound estimate,” the report said.

The Piracy Shield law was adopted in 2024. “To effectively tackle live sports piracy, its broad blocking powers aim to block piracy-related domain names and IP addresses within 30 minutes,” TorrentFreak wrote in an article today about the Cloudflare fine.

Cloudflare to fight fine, may withhold services

Cloudflare co-founder and CEO Matthew Prince wrote today that Cloudflare already “had multiple legal challenges pending against the underlying scheme” and will “fight the unjust fine.”

“Yesterday a quasi-judicial body in Italy fined Cloudflare $17 million for failing to go along with their scheme to censor the Internet,” Prince wrote. He continued:

The scheme, which even the EU has called concerning, required us within a mere 30 minutes of notification to fully censor from the Internet any sites a shadowy cabal of European media elites deemed against their interests. No judicial oversight. No due process. No appeal. No transparency. It required us to not just remove customers, but also censor our 1.1.1.1 DNS resolver meaning it risked blacking out any site on the Internet. And it required us not just to censor the content in Italy but globally. In other words, Italy insists a shadowy, European media cabal should be able to dictate what is and is not allowed online.

Prince said he will discuss the matter with US government officials next week and that Cloudflare is “happy to discuss this with Italian government officials who, so far, have been unwilling to engage beyond issuing fines.” In addition to challenging the fine, Prince said Cloudflare is “considering the following actions: 1) discontinuing the millions of dollars in pro bono cyber security services we are providing the upcoming Milano-Cortina Olympics; 2) discontinuing Cloudflare’s Free cyber security services for any Italy-based users; 3) removing all servers from Italian cities; and 4) terminating all plans to build an Italian Cloudflare office or make any investments in the country.”

“Play stupid games, win stupid prizes,” Prince wrote.

Google also in Piracy Shield crosshairs

AGCOM said today that in the past two years, the Piracy Shield law disabled over 65,000 domain names and about 14,000 IP addresses. Italian authorities also previously ordered Google to block pirate sites at the DNS level.

The Computer & Communications Industry Association (CCIA), a trade group that represents tech companies including Cloudflare and Google, has criticized the Piracy Shield law. “Italian authorities have included virtual private networks (VPN) and public DNS resolvers in the Piracy Shield, which are services fundamental to the protection of free expression and not appropriate tools for blocking,” the CCIA said in a January 2025 letter to European Commission officials.

The CCIA added that “the Piracy Shield raises a significant number of concerns which can inadvertently affect legitimate online services, primarily due to the potential for overblocking.” The letter said that in October 2024, “Google Drive was mistakenly blocked by the Piracy Shield system, causing a three-hour blackout for all Italian users, while 13.5 percent of users were still blocked at the IP level, and 3 percent were blocked at the DNS level after 12 hours.”

The Italian system “aims to automate the blocking process by allowing rights holders to submit IP addresses directly through the platform, following which ISPs have to implement a block,” the CCIA said. “Verification procedures between submission and blocking are not clear, and indeed seem to be lacking. Additionally, there is a total lack of redress mechanisms for affected parties, in case a wrong domain or IP address is submitted and blocked.”

30-minute blocking prevents “careful verification”

The 30-minute blocking window “leaves extremely limited time for careful verification by ISPs that the submitted destination is indeed being used for piracy purposes,” the CCIA said. The trade group also questioned the piracy-reporting system’s ties to the organization that runs Italy’s top football league.

“Additionally, the fact that the Piracy Shield platform was developed for AGCOM by a company affiliated with Lega Serie A, which is one of the very few entities authorized to report, raises serious questions about the potential conflict of interest exacerbating the lack of transparency issue,” the letter said.

A trade group for Italian ISPs has argued that the law requires “filtering and tasks that collide with individual freedoms” and is contrary to European legislation that classifies broadband network services as mere conduits that are exempt from liability.

“On the contrary, in Italy criminal liability has been expressly established for ISPs,” Dalia Coffetti, head of regulatory and EU affairs at the Association of Italian Internet Providers, wrote in April 2025. Coffetti argued, “There are better tools to fight piracy, including criminal Law, cooperation between States, and digital solutions that downgrade the quality of the signal broadcast via illegal streaming websites or IPtv. European ISPs are ready to play their part in the battle against piracy, but the solution certainly does not lie in filtering and blocking IP addresses.”

Source: Cloudflare defies Italy’s Piracy Shield, won’t block websites on 1.1.1.1 DNS – Ars Technica

For more articles on how Piracy Shield has gone wrong, read here

Italy Fines Cloudflare €14 Million for Refusing to Filter Sites on Public 1.1.1.1 DNS

Italy’s communications regulator AGCOM imposed a record-breaking €14.2 million fine on Cloudflare after the company failed to implement the required piracy blocking measures. Cloudflare argued that filtering its global 1.1.1.1 DNS resolver would be “impossible” without hurting overall performance. AGCOM disagreed, noting that Cloudflare is not necessarily a neutral intermediary either.

Launched in 2024, Italy’s elaborate ‘Piracy Shield’ blocking scheme was billed as the future of anti-piracy efforts.

To effectively tackle live sports piracy, its broad blocking powers aim to block piracy-related domain names and IP addresses within 30 minutes.

While many pirate sources have indeed been blocked, the Piracy Shield is not without controversy. There have been multiple reports of overblocking, where the anti-piracy system blocked access to legitimate sites and services.

Many of these overblocking instances involved the American Internet infrastructure company Cloudflare, which has been particularly critical of Italy’s Piracy Shield. In addition to protesting the measures in public, Cloudflare allegedly refused to filter pirate sites through its public 1.1.1.1 DNS.

1.1.1.1: Too Big to Block?

This refusal prompted an investigation by AGCOM, which now concluded that Cloudflare openly violated its legal requirements in the country. Following an amendment, the Piracy Shield also requires DNS providers and VPNs to block websites.

The dispute centers specifically on the refusal to comply with AGCOM Order 49/25/CONS, which was issued in February 2025. The order required Cloudflare to block DNS resolution and traffic to a list of domains and IP addresses linked to copyright infringement.

Cloudflare reportedly refused to enforce these blocking requirements through its public DNS resolver. Among other things, Cloudflare countered that filtering its DNS would be unreasonable and disproportionate.

 

Cloudflare’s arguments (translated)

The company warned that doing so would affect billions of daily queries and have an “extremely negative impact on latency,” slowing down the service for legitimate users worldwide.

AGCOM was unmoved by this “too big to block” argument.

The regulator countered that Cloudflare has all the technological expertise and resources to implement the blocking measures. AGCOM argued the company is known for its complex traffic management and rejected the suggestion that complying with the blocking order would break its service.

€14,247,698 Fine

After weighing all arguments, AGCOM imposed a €14,247,698 (USD $16.7m) fine against Cloudflare, concluding that the company failed to comply with the required anti-piracy measures. The fine represents 1% of the company’s global revenue, where the law allows for a maximum of 2%.

 

AGCOM’s conclusion (translated)

According to AGCOM, this is the first fine of this type, both in scope and size. This is fitting, as the regulator argued that Cloudflare plays a central role.

“The measure, in addition to being one of the first financial penalties imposed in the copyright sector, is particularly significant given the role played by Cloudflare,” AGCOM notes, adding that Cloudflare is linked to roughly 70% of the pirate sites targeted under its regime.

In its detailed analysis, the regulator further highlighted that Cloudflare’s cooperation is “essential” for the enforcement of Italian anti-piracy laws, as its services allow pirate sites to evade standard blocking measures.

What’s Next?

Cloudflare has strongly contested the accusations throughout AGCOM’s proceedings and previously criticized the Piracy Shield system for lacking transparency and due process.

While the company did not immediately respond to our request for comment, it will almost certainly appeal the fine. This appeal may also draw the interest of other public DNS resolvers, such as Google and OpenDNS.

AGCOM, meanwhile, says that it remains fully committed to enforcing the local piracy law. The regulator notes that since the Piracy Shield started in February 2024, 65,000 domain names and 14,000 IP addresses were blocked.

A copy of AGCOM’s detailed analysis and the associated order (N. 333/25/CONS) is available here (pdf).

Source: Italy Fines Cloudflare €14 Million for Refusing to Filter Pirate Sites on Public 1.1.1.1 DNS * TorrentFreak

The sites are not necessarily pirate sites – as noted above (and here), many, many legitimate sites are blocked by Italy’s Piracy Shield, with little to no recourse.

French Court Orders Google to block swathes of the internet through DNS for … sports TV

The Paris Judicial Court has ordered Google to block nineteen additional pirate site domains through its public DNS resolver. The blockade was requested by Canal+ and aims to stop pirate streams of Champions League games. In its defense, Google argued that rightsholders should target intermediaries higher up the chain first, such as Cloudflare’s CDN, but the court rejected that.

The frontline of online piracy liability keeps moving, and core internet infrastructure providers are increasingly finding themselves in the crosshairs.

Since 2024, the Paris Judicial Court has ordered Cloudflare, Google and other intermediaries to actively block access to pirate sites through their DNS resolvers, confirming that third-party intermediaries can be required to take responsibility.

These blockades are requested by sports rights holders, covering Formula 1, football, and MotoGP, among others. They argue that public DNS resolvers help users to bypass existing ISP blockades, so these intermediaries should be ordered to block domains too.

Google DNS Blocks Expand

These blocking efforts didn’t stop there. After the first blocking requests were granted, the Paris Court issued various additional blocking orders. Most recently, Google was compelled to take action following a complaint from French broadcaster Canal+ and its subsidiaries regarding Champions League piracy.

Like previous blocking cases, the request is grounded in Article L. 333-10 of the French Sports Code, which enables rightsholders to seek court orders against any entity that can help to stop ‘serious and repeated’ sports piracy.

After reviewing the evidence and hearing arguments from both sides, the Paris Court granted the blocking request, ordering Google to block nineteen domain names, including antenashop.site, daddylive3.com, livetv860.me, streamysport.org and vavoo.to.

The latest blocking order covers the entire 2025/2026 Champions League series, which ends on May 30, 2026. It’s a dynamic order too, which means that if these sites switch to new domains, as verified by ARCOM, these have to be blocked as well.

Cloudflare-First Defense Fails

Google objected to the blocking request. Among other things, it argued that several domains were linked to Cloudflare’s CDN. Therefore, suspending the sites on the CDN level would be more effective, as that would render them inaccessible.

Based on the subsidiarity principle, Google argued that blocking measures should only be ordered if attempts to block the pirate sites through more direct means have failed.

The court dismissed these arguments, noting that intermediaries cannot dictate the enforcement strategy or blocking order. Intermediaries cannot require “prior steps” against other technical intermediaries, especially given the “irremediable” character of live sports piracy.

The judge found the block proportional because Google remains free to choose the technical method, even if the result is mandated. Internet providers, search engines, CDNs, and DNS resolvers can all be required to block, irrespective of what other measures were taken previously.

Proportional

Google further argued that the blocking measures were disproportionate because they were complex, costly, easily bypassed, and had effects beyond the borders of France.

The Paris court rejected these claims. It argued that Google failed to demonstrate that implementing these blocking measures would result in “important costs” or technical impossibilities.

[…]

A copy of the order issued by the Tribunal Judiciaire de Paris (RG nº 25/11816) is available here (pdf). The order specifically excludes New Caledonia, Wallis and Futuna, and French Polynesia due to specific local legal frameworks.

1. antenashop.site
2. antenawest.store
3. daddylive3.com
4. hesgoal-tv.me
5. livetv860.me
6. streamysport.org
7. vavoo.to
8. witv.soccer
9. veplay.top
10. jxoxkplay.xyz
11. andrenalynrushplay.cfd
12. marbleagree.net
13. emb.apl375.me
14. hornpot.net
15. td3wb1bchdvsahp.ngolpdkyoctjcddxshli469r.org
16. ott-premium.com
17. rex43.premium-ott.xyz
18. smartersiptvpro.fr
19. eta.play-cdn.vip:80

Source: French Court Orders Google DNS to Block Pirate Sites, Dismisses ‘Cloudflare-First’ Defense * TorrentFreak

These blocks can (and do) go horribly wrong. And, should you have another DNS provider, they give you a handy list of where to go to watch the Champions League 🙂

Report: Microsoft quietly kills official way to activate Windows 11/10 without internet

In November last year, we reported on the removal of an unofficial KMS-related Windows activation method, something the company had been planning for a while. The method worked by helping to activate Windows without an internet connection.

If you are wondering about official ways, offline Windows activation has been possible by phone. However, it looks like Microsoft has quietly killed off that method too, as users online have found that they are no longer able to activate the OS with it.

[…]

Now when trying to activate the OS by attempting to call the phone number for Microsoft Product Activation, an automated voice response says the following: “Support for product activation has moved online. For the fastest and most convenient way to activate your product, please visit our online product activation portal at aka.ms/aoh”

If you are wondering, that link takes users to the Microsoft Product Activation Portal for online activation.

[…]

Source: Report: Microsoft quietly kills official way to activate Windows 11/10 without internet – Neowin

Together with Windows increasingly requiring a Microsoft account to install or log in to Windows, this reflects a growing need by Microsoft to peer into your computer.

Your smart TV is watching you and nobody’s stopping it

At the end of last year, Texas Attorney General Ken Paxton sued five of the largest TV companies, accusing them of excessive and deceptive surveillance of their customers.

Paxton reserved special venom for the two China-based members of the quintet. His argument is that unlike Sony, Samsung, and LG, if Hisense and TCL have conducted surveillance in the way the lawsuits accuse them of, they’d potentially be required to share all data with the Chinese Communist Party.

It is a rare pleasure to state that legal action against tech companies is cogent, timely, focused, and – if the allegations are true – deserves to succeed. It is less pleasant to predict that even if one, several, or all of these manufacturers did what they’re accused of, and were sanctioned for it, it would not put safeguards in place to stop such practices from recurring.

At the heart of the cases is the fact that most smart TVs use Automatic Content Recognition (ACR) to send rapid-fire screenshots back to company servers, where they are analyzed to finely detail your TV usage. This sometimes covers not just streaming video, but whatever apps or external devices are displaying, and the allegations are that every other bit of personal data the set can scry is also pulled in. Installed apps can have trackers, data from other devices can be swept up.

These lawsuits aside, smart TV companies more generally boast of their prying prowess to the ecosystem of data exploiters from which they make their money. The companies are much less open about the mechanisms and amount of data collection, and deploy a barrage of defenses to entice customers into turning the stuff on and stop them from turning it off. You may have already seen massive on-screen Ts&Cs with only ACCEPT as an option, ACR controls buried in labyrinthine menu jails, features that stop working even if you complete the obstacle course – all this is old news.

How old are these practices? TV maker Vizio got hit by multiple suits between 2015 and 2017, paying $2.2 million in fines to the Federal Trade Commission and the state of New Jersey, as well as settling related class actions to the tune of $17 million. The FTC said the fines settled claims that the maker had used software installed on 11 million TVs to collect viewing data without their owners’ knowledge or consent. A court order said the manufacturer had to delete data collected before 2016 and promise to “prominently disclose and obtain affirmative express consent” for data collection and sharing from then on.

Yet ten years on, the problem has only got worse. There is no general US law against this kind of data collection, and companies often eat the fines, adjust their behavior to the barest minimum of compliance, and set about finding new ways to entomb your digital twin in their datacenters.

It’s not even as if more regulation helps. The European GDPR data protection and privacy regs give consumers powerful rights and companies strict obligations, which smart TV makers do not rush to observe. Researchers claim the problem is growing no matter which side of the Atlantic your TV is watching you on.

[…]

Source: Your smart TV is watching you and nobody’s stopping it • The Register

Google starts to close Android sources, will only release code twice a year now

The operating system that powers every Android phone and tablet on the market is based on AOSP, short for the Android Open Source Project. Google develops and releases AOSP under the permissive Apache 2.0 License, which allows any developer to use, modify, and distribute their own operating systems based on the project without paying fees or releasing their own modified source code. Since beginning the project, Google has released the source code for nearly every new version of Android for mobile devices, typically doing so within days of rolling out the corresponding update to its own Pixel devices. Starting in 2026, however, Google is making a major change to its release schedule: AOSP sources will only be published twice a year.

Google told Android Authority that, effective 2026, Google will publish new source code to AOSP in Q2 and Q4. The reason is to [blah blah bullshit]

[…]

Source: Google will now only release Android source code twice a year

With competition from the likes of Sailfish getting under way to satisfy the increasing number of people seeking to get out from under the thumbs of Android and iOS, Google is closing the system so that alternatives can't build on its work to create better products.

HSBC blocks app users for having sideloaded password manager

[…] Neil Brown, board member at F-Droid, said he was blocked from accessing HSBC’s UK mobile banking after a security screen flagged Bitwarden as a risk. Brown had installed the password manager via F-Droid rather than Google Play.

Bitwarden, an open source password manager, is available through official channels including Google Play and Galaxy stores, as well as via F-Droid sideloading.

HSBC didn’t provide The Register with a clear answer on why it won’t allow a sideloaded Bitwarden installation to coexist with its app on the same device.

Representatives from both F-Droid and Bitwarden suspect the issue stems from HSBC’s side.

Gary Orenstein, chief customer officer at Bitwarden, told us: “It seems that HSBC has chosen a level of security and permissions for their mobile app that allows the HSBC app to see if there are other apps on the phone not installed from the Google Play store, and if one is found, to disallow the install of the HSBC app.”

[…]

Source: HSBC blocks app users for having sideloaded password manager • The Register

There are many great reasons to install apps from sources other than the Google Play Store, privacy and freedom of choice being major ones – especially for people trying to escape the Google / Apple duopoly by jumping to other OSs like Sailfish (on the Jolla Phone). Not being able to access your banking app is a major problem. I guess it's time to start changing banks as well then!

macOS Logitech mice stop working due to invalid cloud certificate. Apple shakedown turns hardware into junk.

If you’re among the macOS users experiencing some weird issues with your Logitech mouse, then good news: Logitech has now released a fix. This comes after multiple Reddit users reported yesterday that Logi Options Plus — the app required to manage and configure the controls on Logitech accessories — had stopped working, preventing them from using customized scrolling features, button actions, and gestures.

One Reddit user said that the scroll directions and extra buttons on their Logitech mouse “were not working as I intended” and that the Logi Options Plus app became stuck in a boot loop upon opening it to identify the cause. Logitech has since acknowledged the situation and said that its G Hub app — a similar management software for gaming devices under the Logitech G brand — was also affected.

According to Logitech’s support page, the problem was caused by “an expired certificate” required for the apps to run. Windows users were unaffected. The issues only impacted Mac users because macOS prevents certain applications from running if it doesn’t detect a valid Developer ID certificate, something that has affected other apps in the past.

So Apple requires the maker of hardware to pay them a subscription to be able to use the hardware?! It’s a mouse, not a piece of rocket science! If your hardware supplier goes bust, your hardware turns into junk.

LG forced a Copilot web app onto its TVs but will now let you delete it

LG says it will let users delete the Microsoft Copilot shortcut it installed on newer TVs after several reports highlighted the unremovable icon. In a statement to The Verge, LG spokesperson Chris De Maria says the company “respects consumer choice and will take steps to allow users to delete the shortcut icon if they wish.”

Last week, a user on the r/mildlyinfuriating subreddit posted an image of the Microsoft Copilot icon in their lineup of apps on an LG TV, with no option to delete it. “My LG TV’s new software update installed Microsoft Copilot, which cannot be deleted,” the post says. The post garnered more than 36,000 upvotes as people grow more frustrated with AI popping up just about everywhere.

Both LG and Samsung announced plans to add Microsoft’s Copilot AI assistant to their TVs in January, but it appears to be popping up on LG TVs following a recent update to webOS.

De Maria adds that the icon is a “shortcut” to the Microsoft Copilot web app that opens in the TV’s web browser, rather than “an application-based service embedded in the TV.” He also adds that “features such as microphone input are activated only with the customer’s explicit consent.”

Asked when LG will start letting users delete the Copilot icon, De Maria said there’s no “definitive timing” yet.

Here’s LG’s full statement:

Following recent coverage regarding the arrival of Microsoft Copilot on LG TVs, we want to clarify that Microsoft Copilot is provided as a shortcut icon to enhance customer accessibility and convenience. It is not an application-based service embedded in the TV. When users select the Copilot shortcut, Microsoft’s website opens through the TV’s web browser, and features such as microphone input are activated only with the customer’s explicit consent.

Source: LG forced a Copilot web app onto its TVs but will let you delete it | The Verge

After Samsung forces Gemini, LG TV users get unremovable Microsoft Copilot through forced update

LG smart TV owners are reporting that a recent webOS software update has added Microsoft Copilot to their TVs, with no apparent way to remove it. Reports first surfaced over the weekend on Reddit, where a post showing a Copilot tile pinned to an LG TV home screen climbed to more than 35,000 upvotes on r/mildlyinfuriating, accompanied by hundreds of comments from users describing the same behavior.

According to affected users, Copilot appears automatically after installing the latest webOS update on certain LG TV models. The feature shows up on the home screen alongside streaming apps, but unlike Netflix or YouTube, it cannot be uninstalled.

LG has previously confirmed plans to integrate Microsoft Copilot into webOS as part of its broader “AI TV” strategy. At CES 2025, the company described Copilot as an extension of its AI Search experience, designed to answer questions and provide recommendations using Microsoft’s AI services. In practice, the iteration of Copilot currently seen on LG TVs appears to function as a shortcut to a web-based Copilot interface rather than a fully native application like the one described by LG.

The issue, for many, isn’t necessarily what Copilot does, but that it has been forced onto consumers with no option to remove it. LG’s own support documentation notes that certain preinstalled or system apps cannot be deleted, only hidden. Users who encounter Copilot after the update report that this limitation applies, leaving them with no way to fully remove the feature once it has been added. It’s a similar story on rival models; some Samsung TVs, for instance, include Gemini.

The overwhelmingly negative reaction from users indicates a growing frustration with AI features being imposed on consumers in every way possible. Smart TVs have naturally become platforms for advertising, data collection, and now AI services, with updates adding new functionality that owners did not explicitly request and, in most cases, do not want. While LG allows users to disable some AI-related options, such as voice recognition and personalization features, those settings do not remove the Copilot app itself.

Ultimately, those wanting to minimize Copilot’s presence on their TVs are limited to keeping them disconnected from the Internet. That’s about the most that can be done at the moment, unless LG backtracks in response to the backlash and lets users disable or completely uninstall the app, which seems unlikely.

Source: LG TV users baffled by unremovable Microsoft Copilot installation — surprise forced update shows app pinned to the home screen | Tom’s Hardware

How Cops Are Using Flock’s License Plate Camera Network To Surveil Protesters And Activists

It’s no secret that 2025 has given Americans plenty to protest about. But as news cameras showed protesters filling streets of cities across the country, law enforcement officers—including U.S. Border Patrol agents—were quietly watching those same streets through different lenses: Flock Safety automated license plate readers (ALPRs) that tracked every passing car.

Through an analysis of 10 months of nationwide searches on Flock Safety’s servers, we discovered that more than 50 federal, state, and local agencies ran hundreds of searches through Flock’s national network of surveillance data in connection with protest activity. In some cases, law enforcement specifically targeted known activist groups, demonstrating how mass surveillance technology increasingly threatens our freedom to demonstrate.

Flock Safety provides ALPR technology to thousands of law enforcement agencies. The company installs cameras throughout their jurisdictions, and these cameras photograph every car that passes, documenting the license plate, color, make, model and other distinguishing characteristics. This data is paired with time and location, and uploaded to a massive searchable database. Flock Safety encourages agencies to share the data they collect broadly with other agencies across the country. It is common for an agency to search thousands of networks nationwide even when they don’t have reason to believe a targeted vehicle left the region.
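Flock's actual schema and search API are not public, but the shape of the data the article describes — plate plus vehicle description, paired with time and camera location, pooled across agencies — is enough to see why a single query can reconstruct a movement history. A rough sketch (all field and function names here are hypothetical):

```python
# Illustrative sketch only: Flock's real schema and search API are not
# public. This shows why plate reads paired with time and place make a
# trivially searchable movement history across agencies.
from dataclasses import dataclass

@dataclass
class PlateRead:
    plate: str
    make: str
    color: str
    camera_id: str   # which camera, and hence which location
    timestamp: str   # ISO 8601
    agency: str      # agency whose camera logged the read

def search(reads, plate, reason):
    """A cross-agency lookup: one query sweeps every sharing agency's
    data. 'reason' is free text — the only accountability in the log."""
    print(f"search reason logged: {reason!r}")
    return sorted((r for r in reads if r.plate == plate),
                  key=lambda r: r.timestamp)

reads = [
    PlateRead("ABC1234", "Subaru", "blue", "cam-17", "2025-06-14T09:05", "City PD"),
    PlateRead("XYZ9876", "Ford", "white", "cam-17", "2025-06-14T09:06", "City PD"),
    PlateRead("ABC1234", "Subaru", "blue", "cam-88", "2025-06-14T11:42", "County Sheriff"),
]
hits = search(reads, "ABC1234", "protest")
print([(r.camera_id, r.timestamp, r.agency) for r in hits])
```

The point of the sketch is the asymmetry: the data is collected on everyone, indiscriminately, while the only check on a search is a free-text "reason" field — exactly the field the EFF analysis below found filled with words like "protest."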

Via public records requests, EFF obtained datasets representing more than 12 million searches logged by more than 3,900 agencies between December 2024 and October 2025. The data shows that agencies logged hundreds of searches related to the 50501 protests in February, the Hands Off protests in April, the No Kings protests in June and October, and other protests in between.

[…]

While EFF and other civil liberties groups argue the law should require a search warrant for such searches, police are simply prompted to enter text into a “reason” field in the Flock Safety system. Usually this is only a few words–or even just one.

In these cases, that word was often just “protest.”

Crime does sometimes occur at protests, whether that’s property damage, pick-pocketing, or clashes between groups on opposite sides of a protest. Some of these searches may have been tied to an actual crime that occurred, even though in most cases officers did not articulate a criminal offense when running the search. But the truth is, the only reason an officer is able to even search for a suspect at a protest is because ALPRs collected data on every single person who attended the protest.

[…]

In a few cases, police were using Flock’s ALPR network to investigate threats made against attendees or incidents where motorists opposed to the protests drove their vehicle into crowds. For example, throughout June 2025, an Arizona Department of Public Safety officer logged three searches for “no kings rock threat,” and a Wichita (Kan.) Police Department officer logged 22 searches for various license plates under the reason “Crime Stoppers Tip of causing harm during protests.”

Even when law enforcement is specifically looking for vehicles engaged in potentially criminal behavior such as threatening protesters, it cannot be ignored that mass surveillance systems work by collecting data on everyone driving to or near a protest—not just those under suspicion.

Border Patrol’s Expanding Reach

As U.S. Border Patrol (USBP), ICE, and other federal agencies tasked with immigration enforcement have massively expanded operations into major cities, advocates for immigrants have responded through organized rallies, rapid-response confrontations, and extended presences at federal facilities.

USBP has made extensive use of Flock Safety’s system for immigration enforcement, but also to target those who object to its tactics. In June, a few days after the No Kings Protest, USBP ran three searches for a vehicle using the descriptor “Portland Riots.”

[…]

Fighting Back Against ALPR

ALPR systems are designed to capture information on every vehicle that passes within view. That means they don’t just capture data on “criminals” but on everyone, all the time—and that includes people engaged in their First Amendment right to publicly dissent. Police are sitting on massive troves of data that can reveal who attended a protest, and this data shows they are not afraid to use it.

Our analysis only includes data where agencies explicitly mentioned protests or related terms in the “reason” field when documenting their search. It’s likely that scores more were conducted under less obvious pretexts and search reasons. According to our analysis, approximately 20 percent of all searches we reviewed listed vague language like “investigation,” “suspect,” and “query” in the reason field. Those terms could well be cover for spying on a protest, an abortion prosecution, or an officer stalking a spouse, and no one would be the wiser–including the agencies whose data was searched. Flock has said it will now require officers to select a specific crime under investigation, but that can and will also be used to obfuscate dubious searches.

For protestors, this data should serve as confirmation that ALPR surveillance has been and will be used to target activities protected by the First Amendment. Depending on your threat model, this means you should think carefully about how you arrive at protests, and explore options such as biking, walking, carpooling, taking public transportation, or simply parking a little further from the action. Our Surveillance Self-Defense project has more information on steps you can take to protect your privacy when traveling to and attending a protest.

[…]

Everyone should have the right to speak up against injustice without ending up in a database.

Source: How Cops Are Using Flock Safety’s ALPR Network To Surveil Protesters And Activists | Techdirt

Australia’s Social Media Ban Goes into Effect

The BBC has a live page following the ban and, surprise surprise, it didn’t take long at all for people to circumvent it: alternative social media platforms are being used (e.g. Lemon8, Yope), VPNs are in use (with ministers threatening a crackdown on them), campaigners are pleading with parents not to help circumvent the rules, and so on.

That the ban won’t work is predictable. It will force kids into hiding, where they will be beyond the oversight of absolutely anyone. Worse, it will leave them with no help when things do go wrong – who is going to complain to their parents or the police about cyberbullying happening on a platform they are using illegally?

The age limit of 16 is entirely arbitrary too. Some kids develop faster than others, and some very much slower. With some scientists arguing that adulthood doesn’t really start until around 30 (and, looking at how far right-wing politics and belief in populist nonsense are going globally, in many cases seemingly never), mature children are being punished while immature young adults are exposed to content they are not equipped to handle.

The goal – stopping toxic, unwanted behaviours on social media platforms – is a good one. By now we should be able to define these unwanted behaviours (e.g. no false news; no body shaming; no targeted abuse; no political preferences in feeds; who really needs video calls with groups of more than 6 people on a social media platform anyway?) and test for them. Throwing a random age line at the problem doesn’t solve it.

How about levying a huge fine for every instance of one of these behaviours – $1 million or more, say, since the scale of the social media companies’ profits beggars belief, and only huge fines will tip the cost / benefit calculation of paying the fine versus fixing the problem towards actually fixing the problem. And if too many transgressions are detected in a certain period (e.g. 100 fines per week), the platform is closed entirely for weeks or months. This would incentivise the platforms to fix the problems, which is what we actually want, instead of driving kids into hiding and exposing them to a much more dangerous social media landscape.

The year age verification laws came for the open internet

When the nonprofit Freedom House recently published its annual report, it noted that 2025 marked the 15th straight year of decline for global internet freedom. The biggest decline, after Georgia and Germany, came in the United States.

Among the culprits cited in the report: age verification laws, dozens of which have come into effect over the last year. “Online anonymity, an essential enabler for freedom of expression, is entering a period of crisis as policymakers in free and autocratic countries alike mandate the use of identity verification technology for certain websites or platforms, motivated in some cases by the legitimate aim of protecting children,” the report warns.

Age verification laws are, in some ways, part of a years-long reckoning over child safety online, as tech companies have shown themselves unable to prevent serious harms to their most vulnerable users. Lawmakers, who have failed to pass data privacy regulations, Section 230 reform or any other meaningful legislation that would thoughtfully reimagine what responsibilities tech companies owe their users, have instead turned to the blunt tool of age-based restrictions — and with much greater success.

Over the last two years, 25 states have passed laws requiring some kind of age verification to access adult content online. This year, the Supreme Court delivered a major victory to backers of age verification standards when it upheld a Texas law requiring sites hosting adult content to check the ages of their users.

Age checks have also expanded to social media and online platforms more broadly. Sixteen states now have laws requiring parental controls or other age-based restrictions for social media services. (Six of these measures are currently in limbo due to court challenges.) A federal bill to ban kids younger than 13 from social media has gained bipartisan support in Congress. Utah, Texas and Louisiana passed laws requiring app stores to check the ages of their users, all of which are set to go into effect next year. California plans to enact age-based rules for app stores in 2027.

These laws have started to fragment the internet. Smaller platforms and websites that don’t have the resources to pay for third-party verification services may have no choice but to exit markets where age checks are required. Blogging service Dreamwidth pulled out of Mississippi after its age verification laws went into effect, saying that the $10,000 per user fines it could face were an “existential threat” to the company. Bluesky also opted to go dark in Mississippi rather than comply. (The service has complied with age verification laws in South Dakota and Wyoming, as well as the UK.) Pornhub, which has called existing age verification laws “haphazard and dangerous,” has blocked access in 23 states.

Pornhub is not an outlier in its assessment. Privacy advocates have long warned that age verification laws put everyone’s privacy at risk. Practically, there’s no way to limit age verification standards only to minors. Confirming the ages of everyone under 18 means you have to confirm the ages of everyone. In practice, this often means submitting a government-issued ID or allowing an app to scan your face. Both are problematic and we don’t need to look far to see how these methods can go wrong.

Discord recently revealed that around 70,000 users “may” have had their government IDs leaked due to an “incident” involving a third-party vendor the company contracts with to provide customer service related to age verification. Last year, another third-party identity provider that had worked with TikTok, Uber and other services exposed drivers’ licenses. As a growing number of platforms require us to hand over an ID, these kinds of incidents will likely become even more common.

Similar risks exist for face scans. Because most minors don’t have official IDs, platforms often rely on AI-based tools that can guess users’ ages. A face scan may seem more private than handing over a social security number, but we could be turning over far more information than we realize, according to experts at the Electronic Frontier Foundation (EFF).

“When we submit to a face scan to estimate our age, a less scrupulous company could flip a switch and use the same face scan, plus a slightly different algorithm, to guess our name or other demographics,” the organization notes. “A poorly designed system might store this personal data, and even correlate it to the online content that we look at. In the hands of an adversary, and cross-referenced to other readily available information, this information can expose intimate details about us.”

These issues aren’t limited to the United States. Australia, Denmark and Malaysia have taken steps to ban younger teens from social media entirely. Officials in France are pushing for a similar ban, as well as a “curfew” for older teens. These measures would also necessitate some form of age verification in order to block the intended users. In the UK, where the Online Safety Act went into effect earlier this year, we’ve already seen how well-intentioned efforts to protect teens from supposedly harmful content can end up making large swaths of the internet more difficult to access.

The law is ostensibly meant to “prevent young people from encountering harmful content relating to suicide, self-harm, eating disorders and pornography,” according to the BBC. But the law has also resulted in age checks that reach far beyond porn sites. Age verification is required to access music on Spotify. It will soon be required for Xbox accounts. On X, videos of protests have been blocked. Redditors have reported being blocked from a lengthy number of subreddits that are marked NSFW but don’t actually host porn, including those related to menstruation, news and addiction recovery. Wikipedia, which recently lost a challenge to be excluded from the law’s strictest requirements, is facing the prospect of being forced to verify the ages of its UK contributors, which the organization has said could have disastrous consequences.

The UK law has also shown how ineffective existing age verification methods are. Users have been able to circumvent the checks by using selfies of video game characters, AI-generated images of ID documents and, of course, Virtual Private Networks (VPNs).

As the EFF notes, VPNs are incredibly widely used. The software allows people to browse the internet while masking their actual location. They’re used by activists and students and people who want to get around geoblocks built into streaming services. Many universities and businesses (including Engadget parent company Yahoo) require their students and workers to use VPNs in order to access certain information. Blocking VPNs would have serious repercussions for all of these groups.

The makers of several popular VPN services reported major spikes in the UK following the Online Safety Act going into effect this summer, with ProtonVPN reporting a 1,400 percent surge in sign-ups. That’s also led to fears of a renewed crackdown on VPNs. Ofcom, the regulator tasked with enforcing the law, told TechRadar it was “monitoring” VPN usage, which has further fueled speculation it could try to ban or restrict their use. And here in the States, lawmakers in Wisconsin have proposed an age verification law that would require sites that host “harmful” content to also block VPNs.

While restrictions on VPNs are, for now, mostly theoretical, the fact that such measures are even being considered is alarming. Up to now, VPN bans are more closely associated with authoritarian countries without an open internet, like Russia and China. If we continue down a path of trying to put age gates up around every piece of potentially objectionable content, the internet could get a lot worse for everyone.

Source: The year age verification laws came for the open internet

New EU Jolla Phone Now Available for Pre-Order as an Independent No Spyware Linux Phone

Jolla kicked off a campaign for a new Jolla Phone, which they call the independent European Do It Together (DIT) Linux phone, shaped by the people who use it.

“The Jolla Phone is not based on Big Tech technology. It is governed by European privacy thinking and a community-led model.”

The new Jolla Phone is powered by a high-performing MediaTek 5G SoC, and features 12GB of RAM, 256GB of storage expandable to 2TB with a microSDXC card, a 6.36-inch FullHD AMOLED display with ~390ppi, a 20:9 aspect ratio, and Gorilla Glass, and a user-replaceable 5,500mAh battery.

The Linux phone also features 4G/5G support with dual nano-SIM and a global roaming modem configuration, Wi-Fi 6 wireless, Bluetooth 5.4, NFC, 50MP wide and 13MP ultrawide main cameras, a front-facing wide-lens selfie camera, a fingerprint reader on the power key, a user-changeable back cover, and an RGB indication LED.

On top of that, the new Jolla Phone promises a user-configurable physical Privacy Switch that lets you turn off the microphone, Bluetooth, Android apps, or whatever you wish.

The device will be available in three colors: Snow White, Kaamos Black, and The Orange. All the specs of the new Jolla Phone were voted on by Sailfish OS community members over the past few months.

Honouring the original Jolla Phone form factor and design, the new model ships with Sailfish OS (with support for Android apps), a Linux-based European alternative to dominating mobile operating systems that promises a minimum of 5 years of support, no tracking, no calling home, and no hidden analytics.

“Mainstream phones send vast amounts of background data. A common Android phone sends megabytes of data per day to Google even if the device is not used at all. Sailfish OS stays silent unless you explicitly allow connections,” said Jolla.

The new Jolla Phone is now available for pre-order for 99 EUR and will only be produced if at least 2,000 pre-orders are reached within one month, by January 4th, 2026. The full price of the Linux phone will be 499 EUR (incl. local VAT), and the 99 EUR pre-order price will be fully refundable and deducted from the full price.

The device will be manufactured and sold in Europe, but Jolla says that it will design the cellular band configuration to enable global travelling as much as possible, including roaming on U.S. carrier networks. The initial sales markets are the EU, the UK, Switzerland, and Norway.

Source: New Jolla Phone Now Available for Pre-Order as an Independent Linux Phone – 9to5Linux

New hotness in democracy: if the people say no to mass surveillance, do it again right after you’ve said you won’t. Not the EU this time: it’s India

You know what they say: If at first you don’t succeed at mass government surveillance, try, try again. Only two days after India backpedaled on its plan to force smartphone makers to preinstall a state-run “cybersecurity” app, Reuters reports that the country is back at it. It’s said to be considering a telecom industry proposal with another draconian requirement. This one would require smartphone makers to enable always-on satellite-based location tracking (Assisted GPS).

The measure would require location services to remain on at all times, with no option to switch them off. The telecom industry also wants phone makers to disable notifications that alert users when their carriers have accessed their location.

[…]

Source: India is reportedly considering another draconian smartphone surveillance plan

Looks like India took a page out of the Danish playbook for Chat Control, which aims to turn the EU into a 1984-style Brave New World

Subaru Owners Are Ticked About In-Car Pop-Up Ads for SiriusXM

I’ve written about Stellantis brands doing this twice already in 2025, and this time, it’s Subaru sending pop-up ads for SiriusXM to owners’ infotainment screens.

The Autopian ran a story on the egregious push notifications on Monday, and it only took a short search to find more examples. It happened right around Thanksgiving, as the promotion urged drivers to “Enjoy SiriusXM FREE thru 12/1.” That day has come and gone, but not before it angered droves of Subaru owners.

“I have got this Sirius XM ad a few times over the last couple of years,” the caption on the embedded Reddit thread reads. “This last time was the final straw as I almost wrecked because of it. My entire infotainment screen changed which caused me to take my eyes off the road and since I was going 55mph in winter I swerved a bit and slid and almost went off into a ditch. Something that would not have happened had this ad not popped up.

[…]

At least one 2024 Crosstrek owner reported that the pop-up took over their screen even though they were using Apple CarPlay. To force-close an application that’s in use, solely for the sake of in-car advertising, is especially egregious.

[…]

Reddit posts dating back as far as 2023 show owners complaining about in-car notifications.

[…]

Source: Subaru Owners Are Ticked About In-Car Pop-Up Ads for SiriusXM

India demands smartphone makers install government app

India’s government has issued a directive that requires all smartphone manufacturers to install a government app on every handset in the country and has given them 90 days to get the job done – and to ensure users can’t remove the code.

The app is called “Sanchar Saathi” and is a product of India’s Department of Telecommunications (DoT).

On Google Play and Apple’s App Store, the Department describes the app as “a citizen centric initiative … to empower mobile subscribers, strengthen their security and increase awareness about citizen centric initiatives.”

The app does those jobs by allowing users to report incoming calls or messages – even on WhatsApp – they suspect are attempts at fraud. Users can also report incoming international calls for which caller ID nonetheless shows India’s +91 country code, as the government thinks that’s an indicator of a possible illegal telecoms operator.

Users can also block their device if they lose it or suspect it was stolen, an act that will prevent it from working on any mobile network in India.

Another function allows lookup of IMEI numbers so users can verify if their handset is genuine.
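The article doesn't describe how the app's genuineness lookup works internally, and the registry query itself runs server-side against the DoT's database. The structural half of IMEI validation is public, though: a 15-digit IMEI ends in a Luhn check digit. A minimal sketch of that checksum (the function name is ours, not part of any DoT or Sanchar Saathi API):

```python
def imei_check_digit_valid(imei: str) -> bool:
    """Return True if a 15-digit IMEI passes the Luhn checksum.

    This only validates the check digit; it says nothing about whether
    the IMEI is registered, cloned, or blacklisted -- that requires a
    registry lookup of the kind the app performs.
    """
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    for i, ch in enumerate(imei):
        d = int(ch)
        # For a 15-digit string, Luhn doubles every second digit
        # counting from the left (0-based odd indices).
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9  # same as summing the two digits of the product
        total += d
    return total % 10 == 0
```

A passing checksum means only that the number is well-formed; spotting the spoofed or duplicated IMEIs the announcement describes still requires the server-side registry.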

Spam and scams delivered by calls or TXTs are pervasive around the world, and researchers last year found that most Indian netizens receive three or more dodgy communiqués every day. This app has obvious potential to help reduce such attacks.

An announcement from India’s government states that cybersecurity at telcos is another reason for the requirement to install the app.

“Spoofed/ Tampered IMEIs in telecom network leads to situation where same IMEI is working in different devices at different places simultaneously and pose challenges in action against such IMEIs,” according to the announcement. “India has [a] big second-hand mobile device market. Cases have also been observed where stolen or blacklisted devices are being re-sold. It makes the purchaser abettor in crime and causes financial loss to them. The blocked/blacklisted IMEIs can be checked using Sanchar Saathi App.”

That motive is likely the reason India has required handset-makers to install Sanchar Saathi on existing handsets with a software update.

The directive also requires the app to be pre-installed, “visible, functional, and enabled for users at first setup.” Manufacturers may not disable or restrict its features and “must ensure the App is easily accessible during device setup.”

Those functions mean India’s government will soon have a means of accessing personal info on hundreds of millions of devices.

Apar Gupta, founder and director of India’s Internet Freedom Foundation, has criticized the directive on grounds that Sanchar Saathi isn’t fit for purpose. “Rather than resorting to coercion and mandating it to be installed the focus should be on improving it,” he wrote.

[…]

Source: India demands smartphone makers install government app • The Register

Cowed BBC Censors Lecture Calling Trump ‘Most Openly Corrupt President’

The BBC is now voluntarily suppressing criticism of Donald Trump before it airs—and the reason is obvious: Trump threatened to sue them into oblivion, and they blinked.

Historian Rutger Bregman revealed this week that the BBC commissioned a public lecture from him last month, recorded it, then quietly cut a single sentence before broadcast. The deleted line? Calling Trump “the most openly corrupt president in American history.” Bregman posted about the capitulation, noting that the decision came from “the highest levels” of the BBC—meaning the executives dealing with Trump’s threats.

Well, at least we should call out Donald Trump as the most openly censorial president in American history.

This is the payoff from Trump’s censorship campaign against the BBC. Weeks ago, Trump threatened to sue the BBC for a billion dollars over an edit in a program it aired a year ago. The BBC apologized and fired employees associated with the project. That wasn’t enough. Trump’s FCC censorship lackey Brendan Carr launched a bullshit investigation anyway. And now the BBC is preemptively editing out true statements that might anger the thin-skinned man baby President.

Bregman posted the exact line that got cut. Here’s the full paragraph, with the censored sentence in bold:

On one side we had an establishment propping up an elderly man in obvious mental decline. **On the other we had a convicted reality star who now rules as the most openly corrupt president in American history.** When it comes to staffing his administration, he is a modern day Caligula, the Roman emperor who wanted to make his horse a consul. He surrounds himself with loyalists, grifters, and sycophants.

Gosh, for what reason would the BBC cut that one particular line?

The BBC admitted to this in the most mealy-mouthed way when asked by the New Republic to comment on the situation:

Asked for comment on Bregman’s charge, a spokesperson for the BBC emailed me this: “All of our programmes are required to comply with the BBC’s editorial guidelines, and we made the decision to remove one sentence from the lecture on legal advice.”

“On legal advice.” Translation: Trump’s SLAPP suit threats worked exactly as intended.

Greg Sargent, writing in the New Republic, nails why this matters:

There is something deeply perverse in this outcome. Even if you grant Trump’s criticism of the edit of his January 6 speech—never mind that as the violence raged, Trump essentially sat on his hands for hours and arguably directed the mob to target his vice president—the answer to this can’t be to let Trump bully truth-telling into self-censoring silence.

That’s plainly what happened here.

Exactly. The BBC’s initial capitulation—the apology, the firings, the groveling—was bad enough. But this is worse. This is pre-censorship. The BBC is now editing out true statements about Trump before they air, purely because they’re afraid of how he might react. That’s not “legal advice.” That’s cowardice institutionalized as policy.

Once again, I remind you that Trump’s supporters have, for years, insisted that he was “the free speech president” and have talked about academic freedom and the right to state uncomfortable ideas.

[…]

Source: BBC Pre-Edits Lecture Calling Trump ‘Most Openly Corrupt President’ | Techdirt

Nexperia accused by parent Wingtech and Chinese unit of plotting to move supply chain

BEIJING/AMSTERDAM, Nov 28 (Reuters) – Wingtech (600745.SS), the Chinese parent company of Netherlands-based Nexperia, accused its Dutch unit on Friday of conspiring to build a non-Chinese supply chain and permanently strip it of its control, escalating tensions between the two sides.

In a separate statement, Nexperia’s Chinese arm demanded the Dutch business halt overseas expansion, including in Malaysia. “Abandon improper intentions to replace Chinese capacity,” Nexperia China said.
The accusations follow an open letter from Nexperia published on Thursday claiming repeated attempts to engage with its Chinese unit had failed.

Nexperia, which produces billions of chips for cars and electronics, has been in a tug-of-war since the Dutch government seized the company two months ago on economic security grounds. An Amsterdam court subsequently stripped Wingtech of control.

Beijing retaliated by halting exports of Nexperia’s finished products on October 4, leading to disruptions in global automotive supply chains.

The curbs were relaxed in early November and the Dutch government suspended the seizure last week following talks. But the court ruling remains in force.

The chipmaker’s Europe-based units and Chinese entities remain locked in a standoff. Nexperia’s Chinese arm declared itself independent from European management, which responded by stopping the shipment of wafers to the company’s plant in China.

CHINESE PARENT WARNS OF RENEWED SUPPLY CHAIN DISRUPTION

The escalating war of words casts doubt on the viability of a company-led resolution urged by China and the European Union this week.

Wingtech said on Friday that Nexperia’s Dutch unit was avoiding the issue of its “legitimate control”, making negotiations untenable.

“We need to find a way first to talk to one another constructively,” a spokesperson for Nexperia’s European headquarters said on Friday.

Nexperia China said that the Dutch unit’s claim it could not contact its management was misleading, accusing it of stifling communication by deleting the email accounts of Nexperia China employees and terminating their access to IT systems.

The Chinese unit claimed that the Dutch side was engineering a breakup, citing a $300 million plan to expand a Malaysian plant and an alleged internal goal of sourcing 90% of production outside China by mid-2026.
[…]

Source: Nexperia accused by parent Wingtech and Chinese unit of plotting to move supply chain | Reuters