Iconic Gun-Makers Secretly Gave Sensitive Customer Information to Political Operatives for Decades

For years, America’s most iconic gun-makers turned over sensitive personal information on hundreds of thousands of customers to political operatives.

Those operatives, in turn, secretly employed the details to rally firearm owners to elect pro-gun politicians running for Congress and the White House, a ProPublica investigation has found.

The clandestine sharing of gun buyers’ identities — without their knowledge and consent — marked a significant departure for an industry that has long prided itself on thwarting efforts to track who owns firearms in America.

At least 10 gun industry businesses, including Glock, Smith & Wesson, Remington, Marlin and Mossberg, handed over names, addresses and other private data to the gun industry’s chief lobbying group, the National Shooting Sports Foundation. The NSSF then entered the gun owners’ details into what would become a massive database.

The data initially came from decades of warranty cards filled out by customers and returned to gun manufacturers for rebates and repair or replacement programs.

A ProPublica review of dozens of warranty cards from the 1970s through today found that some promised customers their information would be kept strictly confidential. Others said some information could be shared with third parties for marketing and sales. None of the cards informed buyers their details would be used by lobbyists and consultants to win elections.

[…]

The undisclosed collection of intimate gun owner information is in sharp contrast with the NSSF’s public image.

[…]

For two decades, the group positioned itself as an unwavering watchdog of gun owner privacy. The organization has raged against government and corporate attempts to amass information on gun buyers. As recently as this year, the NSSF pushed for laws that would prohibit credit card companies from creating special codes for firearms dealers, claiming the codes could be used to create a registry of gun purchasers.

As a group, gun owners are fiercely protective about their personal information. Many have good reasons. Their ranks include police officers, judges, domestic violence victims and others who have faced serious threats of harm.

In a statement, the NSSF defended its data collection. Any suggestion of “unethical or illegal behavior is entirely unfounded,” the statement said, adding that “these activities are, and always have been, entirely legal and within the terms and conditions of any individual manufacturer, company, data broker, or other entity.”

The gun industry companies either did not respond to ProPublica or declined to comment, noting they are under different ownership today and could not find evidence that customer information was previously shared. One ammunition maker named in the NSSF documents as a source of data said it never gave the trade group or its vendors any “personal information.”

ProPublica established the existence of the secret program after reviewing tens of thousands of internal corporate and NSSF emails, reports, invoices and contracts. We also interviewed scores of former gun executives, NSSF employees, NRA lobbyists and political consultants in the U.S. and the United Kingdom.

The insider accounts and trove of records lay bare a multidecade effort to mobilize gun owners as a political force. Confidential information from gun customers was central to what NSSF called its voter education program. The initiative involved sending letters, postcards and later emails to persuade people to vote for the firearms industry’s preferred political candidates. Because privacy laws shield the names of firearm purchasers from public view, the data NSSF obtained gave it a unique ability to identify and contact large numbers of gun owners or shooting sports enthusiasts.

It also allowed the NSSF to figure out whether a gun buyer was a registered voter. Those who weren’t would be encouraged to register and cast their ballots for industry-supported politicians.

From 2000 to 2016, the organization poured more than $20 million into its voter education campaign, which was initially called Vote Your Sport and today is known as GunVote. The NSSF trumpeted the success of its electioneering in reports, claiming credit for putting both George W. Bush and Donald J. Trump in the White House and firearm-friendly lawmakers in the U.S. House and Senate.

In April 2016, a contractor on NSSF’s voter education project delivered a large cache of data to Cambridge Analytica.

[…]

The data given to Cambridge included 20 years of gun owners’ warranty card information as well as a separate database of customers from Cabela’s, a sporting goods retailer with approximately 70 stores in the U.S. and Canada.

Cambridge combined the NSSF data with a wide array of sensitive particulars obtained from commercial data brokers. It included people’s income, their debts, their religion, where they filled prescriptions, their children’s ages and purchases they made for their kids. For women, it revealed intimate elements such as whether the underwear and other clothes they purchased were plus size or petite.

The information was used to create psychological profiles of gun owners and assign scores to behavioral traits, such as neuroticism and agreeableness. The profiles helped Cambridge tailor the NSSF’s political messages to voters based on their personalities.

[…]

As the body count from mass shootings at schools and elsewhere in the nation has climbed, those politicians have halted proposals to resurrect the assault weapons ban and enact other gun control measures, even those popular with voters, such as raising the minimum age to buy an assault rifle from 18 to 21.

In response to questions from ProPublica, the NSSF acknowledged it had used the customer information in 2016 for “creating a data model” of potentially sympathetic voters. But the group said the “existence and proven success of that model then obviated the need to continue data acquisition via private channels and today, NSSF uses only commercial-source data to which the data model is then applied.”

[…]

Source: Iconic Gun-Makers Gave Sensitive Customer Information to Political Operatives — ProPublica

Neurotech companies are selling your brain data, senators warn

Three Democratic senators are sounding the alarm over brain-computer interface (BCI) technologies’ ability to collect — and potentially sell — our neural data. In a letter to the Federal Trade Commission (FTC), Sens. Chuck Schumer (D-NY), Maria Cantwell (D-WA), and Ed Markey (D-MA) called for an investigation into neurotechnology companies’ handling of user data, and for tighter regulations on their data-sharing policies.

“Unlike other personal data, neural data — captured directly from the human brain — can reveal mental health conditions, emotional states, and cognitive patterns, even when anonymized,” the letter reads. “This information is not only deeply personal; it is also strategically sensitive.”

While the concept of neural technologies may conjure up images of brain implants like Elon Musk’s Neuralink, there are far less invasive — and less regulated — neurotech products on the market, including headsets that help people meditate, purportedly trigger lucid dreaming, and promise to help users with online dating by helping them swipe through apps “based on your instinctive reaction.” These consumer products gobble up insights about users’ neurological data — and since they aren’t categorized as medical devices, the companies behind them aren’t barred from sharing that data with third parties.

“Neural data is the most private, personal, and powerful information we have—and no company should be allowed to harvest it without transparency, ironclad consent, and strict guardrails. Yet companies are collecting it with vague policies and zero transparency,” Schumer told The Verge via email.

The letter cites a 2024 report by the Neurorights Foundation, which found that most neurotech companies not only have few safeguards on user data but also have the ability to share sensitive information with third parties. The report looked at the data policies of 30 consumer-facing BCI companies and found that all but one “appear to have access to” users’ neural data, “and provide no meaningful limitations to this access.” The Neurorights Foundation only surveyed companies whose products are available to consumers without the help of a medical professional; implants like those made by Neuralink weren’t among them.

The companies surveyed by the Neurorights Foundation make it difficult for users to opt out of having their neurological data shared with third parties. Just over half the companies mentioned in the report explicitly let consumers revoke consent for data processing, and only 14 of the 30 give users the ability to delete their data. In some instances, user rights aren’t universal — for example, some companies only let users in the European Union delete their data but don’t grant the same rights to users elsewhere in the world.

To safeguard against potential abuses, the senators are calling on the FTC to:

  • investigate whether neurotech companies are engaging in unfair or deceptive practices that violate the FTC Act
  • compel companies to report on data handling, commercial practices, and third-party access
  • clarify how existing privacy standards apply to neural data
  • enforce the Children’s Online Privacy Protection Act as it relates to BCIs
  • begin a rulemaking process to establish safeguards for neural data, and set limits on secondary uses like AI training and behavioral profiling
  • and ensure that both invasive and noninvasive neurotechnologies are subject to baseline disclosure and transparency standards, even when the data is anonymized

Though the senators’ letter calls out Neuralink by name, Musk’s brain implant tech is already subject to more regulations than other BCI technologies. Since Neuralink’s brain implant is considered a “medical” technology, it’s required to comply with the Health Insurance Portability and Accountability Act (HIPAA), which safeguards people’s medical data.

Stephen Damianos, the executive director of the Neurorights Foundation, said that HIPAA may not have entirely caught up to existing neurotechnologies, especially with regards to “informed consent” requirements.

“There are long-established and validated models for consent from the medical world, but I think there’s work to be done around understanding the extent to which informed consent is sufficient when it comes to neurotechnology,” Damianos told The Verge. “The analogy I like to give is, if you were going through my apartment, I would know what you would and wouldn’t find in my apartment, because I have a sense of what exactly is in there. But brain scans are overbroad, meaning they collect more data than what is required for the purpose of operating a device. It’s extremely hard — if not impossible — to communicate to a consumer or a patient exactly what can today and in the future be decoded from their neural data.”

Data collection becomes even trickier for “wellness” neurotechnology products, which don’t have to comply with HIPAA, even when they advertise themselves as helping with mental health conditions like depression and anxiety.

Damianos said there’s a “very hazy gray area” between medical devices and wellness devices.

“There’s this increasingly growing class of devices that are marketed for health and wellness as distinct from medical applications, but there can be a lot of overlap between those applications,” Damianos said. The dividing line is often whether a medical intermediary is required to help someone obtain a product, or whether they can “just go online, put in your credit card, and have it show up in a box a few days later.”

There are very few regulations on neurotechnologies advertised as being for “wellness.” In April 2024, Colorado passed the first-ever legislation protecting consumers’ neural data. The state updated its existing Consumer Protection Act, which protects users’ “sensitive data.” Under the updated legislation, “sensitive data” now includes “biological data” like biological, genetic, biochemical, physiological, and neural information. And in September, California amended its Consumer Privacy Act to protect neural data.

“We believe in the transformative potential of these technologies, and I think sometimes there’s a lot of doom and gloom about them,” Damianos told The Verge. “We want to get this moment right. We think it’s a really profound moment that has the potential to reshape what it means to be human. Enormous risks come from that, but we also believe in leveraging the potential to improve people’s lives.”

Source: Neurotech companies are selling your brain data, senators warn | The Verge

Can the EU’s Dual Strategy of Regulation and Investment Redefine AI Leadership?

Beyond sharing a pair of vowels, AI and the EU both present significant challenges when it comes to setting the right course. This article makes the case that reducing regulation for large general-purpose AI providers under the EU’s competitiveness agenda is not a silver bullet for catching Europe up to the US and China, and would only serve to entrench European dependencies on US tech. Instead, by combining its regulatory toolkit and ambitious investment strategy, the EU is uniquely positioned to set the global standard for trustworthy AI and pursue its tech sovereignty. It is an opportunity that Europe must take.

Recent advances in AI have drastically shortened the improvement cycle from years to months, thanks to new inference-time compute techniques that enable self-prompting, chain-of-thought reasoning in models like OpenAI’s o1 and DeepSeek’s R1. However, these rapid gains also increase risks like AI-enabled cyber offenses and biological attacks. Meanwhile, the EU and France recently committed €317 billion to AI development in Europe, joining a global race with comparably large announcements from both the US and China.

Now turning to EU AI policy, the newly established AI Office and 13 independent experts are nearing the end of a nine-month multistakeholder drafting process for the Code of Practice (CoP): the voluntary technical details of the AI Act’s mandatory provisions for general-purpose AI providers. The vast majority of the rules will apply only to the largest model providers, ensuring proportionality: the protection of SMEs, start-ups, and other downstream industries. In the meantime, the EU has fully launched a competitiveness agenda, with the Commission’s recently published Competitiveness Compass and first omnibus simplification package outlining plans for widespread streamlining of reporting obligations amidst mounting pushback against this simplified narrative. Add to this the recent withdrawal of the AI Liability Directive, and it’s clear which way the political winds are blowing.

So why must this push for simplification be replaced by a push for trustworthy market creation in the case of general-purpose AI and the Code of Practice? I’ll make three main points: 1) Regulation is not the reason for Europe lacking Big Tech companies, 2) Sweeping deregulation creates legal uncertainty and liability risks for downstream deployers, and slows trusted adoption of new technologies and thereby growth, 3) Watering down the CoP for upstream model providers with systemic risk will almost exclusively benefit large US incumbents, entrenching dependency and preventing tech sovereignty.

[…]

The EU’s tech ecosystem had ample time to emerge in the years preceding and following the turn of the century, free of so-called “red tape,” but this did not happen and will not again through deregulation […] One reason presented by Bradford is that the European digital single market still remains fragmented, with differing languages, cultures, consumer preferences, administrative obstacles, and tax regimes preventing large tech companies from seamlessly growing within the bloc and throughout the world. Even more fragmented are the capital markets of the EU, resulting in poor access to venture capital for tech start-ups and scale-ups. Additional points include harsh, national-level bankruptcy laws that are “creditor-oriented” in the EU, compared to more forgiving “debtor-friendly” equivalents in the US, resulting in lower risk appetite for European entrepreneurs. Finally, skilled migration is significantly more streamlined in the US, with federal-level initiatives like the H-1B visa leading to the majority of Big Tech CEOs hailing from overseas.

[…]

The downplaying of regulation as Europe’s AI hindrance has been repeated by leading industry voices such as US VC firm a16z, European VC firm Merantix Capital, and French provider Mistral AI. To reiterate: the EU ‘lagging behind’ on trillion-dollar tech companies and the accompanying innovation was not a result of regulation before there was regulation, and is also not a result of regulation after.

[…]

Whether for planes, cars, or drugs, early use of dangerous new technologies, without accompanying rules, saw frequent preventable accidents, reducing consumer trust and slowing market growth. Now, with robust checks and balances in place from well-resourced regulatory authorities, such markets have been able to thrive, providing value and innovation to citizens. Other sectors, like nuclear energy and, more recently, crypto, have suffered from an initial lack of regulation, causing industry corner-cutting, leading to infamous disasters (from Fukushima to the collapse of FTX) from which public trust has been difficult to win back. Regulators around the world are currently risking the same fate for AI.

This point is particularly relevant for so-called ‘downstream deployers’: companies that build applications on top of underlying models provided (usually) by Big Tech. Touted by European VC leader Robert Lacher as Europe’s “huge opportunity” in AI, downstream deployers, particularly SMEs, stand to gain from the Code of Practice, which ensures that the necessary regulatory checks and balances occur upstream, at the level of the model provider.

[…]

Finally, the EU’s enduring and now potentially crippling dependency on US technology companies has been importantly addressed by the new Commission, best exemplified by the title of Executive Vice President Henna Virkkunen’s file: Tech Sovereignty, Security and Democracy. With the last few months’ geopolitical developments, including all-time-low transatlantic relations and an unfolding trade war, some have gone as far as warning of the possibility of US technology being used for surveillance of Europe and of the US sharing intelligence with Russia. Clearly, the urgency of tech sovereignty has drastically increased. A strong Code of Practice would return agency to the EU, ensuring that US upstream incumbents meet basic security, safety, and ethical standards whilst also easing the EU’s AI adoption problem by ensuring technology is truly trustworthy.

So, concretely, what needs to be done? Bruegel economist Mario Mariniello summed it up concisely: “On tech regulation, the European Union should be bolder.”

[…]

This article has outlined why deregulating highly capable AI models, produced by the world’s largest companies, is not a solution to Europe’s growth problem. Instead of stripping back obligations, ensuring protections of European citizens, the EU must combine its ambitious AI investment plan with boldly pursuing leadership in setting global standards, accelerating trustworthy adoption and ensuring tech sovereignty. This combination will put Europe on the right path to drive this technological revolution forward for the benefit of all.

Source: Can the EU’s Dual Strategy of Regulation and Investment Redefine AI Leadership? | TechPolicy.Press

Europe’s Tech Sovereignty Demands More Than Competitiveness

BRUSSELS – As part of his confrontational stance toward Europe, US President Donald Trump could end up weaponizing critical technologies. The European Union must appreciate the true nature of this threat instead of focusing on competing with the US as an economic ally. To achieve true tech sovereignty, the EU should transcend its narrow focus on competitiveness and deregulation and adopt a far more ambitious strategy.

[…]

Europe’s growing anxiety about competitiveness is fueled by its inability to challenge US-based tech giants where it counts: in the market. As the Draghi report points out, the productivity gap between the United States and the EU largely reflects the relative weakness of Europe’s tech sector. Recent remarks by European Commission President Ursula von der Leyen and Tech Commissioner Henna Virkkunen suggest that policymakers have taken Draghi’s message to heart, making competitiveness the central focus of EU tech policy. But this singular focus is both insufficient and potentially counterproductive at a time of technological and geopolitical upheaval. While pursuing competitiveness could reduce Big Tech’s influence over Europe’s economy and democratic institutions, it could just as easily entrench it. European leaders’ current fixation on deregulation – turbocharged by the Draghi report – leaves EU policymaking increasingly vulnerable to lobbying by powerful corporate interests and risks legitimizing policies that are incompatible with fundamental European values.

As a result, the European Commission’s deregulatory measures – including its recent decision to shelve draft AI and privacy rules, and its forthcoming “simplification” of tech legislation including the GDPR – are more likely to benefit entrenched tech giants than they are to support startups and small and medium-size enterprises. Meanwhile, Europe’s hasty and uncritical push for “AI competitiveness” risks reinforcing Big Tech’s tightening grip on the AI technology stack.

It should come as no surprise that the Draghi report’s deregulatory agenda was warmly received in Silicon Valley, even by Elon Musk himself. But the ambitions of some tech leaders go far beyond cutting red tape. Musk’s use of X (formerly Twitter) and Starlink to interfere in national elections and the war in Ukraine, together with the Trump administration’s brazen attacks on EU tech regulation, show that Big Tech’s quest for power poses a serious threat to European sovereignty.

Europe’s most urgent task, then, is to defend its citizens’ rights, sovereignty, and core values from increasingly hostile American tech giants and their allies in Washington. The continent’s deep dependence on US-controlled digital infrastructure – from semiconductors and cloud computing to undersea cables – not only undermines its competitiveness by shutting out homegrown alternatives but also enables the owners of that infrastructure to exploit it for profit.

[…]

Strong enforcement of competition law and the Digital Markets Act, for example, could curb Big Tech’s influence while creating space for European startups and challengers to thrive. Similarly, implementing the Digital Services Act and the AI Act will protect citizens from harmful content and dangerous AI systems, empowering Europe to offer a genuine alternative to Silicon Valley’s surveillance-driven business models. Against this backdrop, efforts to develop homegrown European alternatives to Big Tech’s digital infrastructure have been gaining momentum. A notable example is the so-called “Eurostack” initiative, which should be viewed as a key step in defending Europe’s ability to act independently.

[…]

A “competitive” economy holds little value if it comes at the expense of security, a fair and safe digital environment, civil liberties, and democratic values. Fortunately, Europe doesn’t have to choose. By tackling its technological dependencies, protecting democratic governance, and upholding fundamental rights, it can foster the kind of competitiveness it truly needs.

Source: Europe’s Tech Sovereignty Demands More Than Competitiveness by Marietje Schaake & Max von Thun – Project Syndicate

Deregulation has led to huge problems globally: the monopoly and duopoly problems we can’t seem to deal with; reliance on external markets and companies that whimsically change their minds; unsustainable hardware and software choices that leave devices brickable, poorly secured, and irreparable; vendor lock-in to closed-source ecosystems; damage to innovation; privacy invasions that lead to hacking attacks; and so on. As Europe we can make our own choices about our own values – we are not driven by the singular motive of profit. European values are inclusive and also promote things like education and happiness.

Employee monitoring app exposes 21M work screens to internet

A surveillance tool meant to keep tabs on employees is leaking millions of real-time screenshots onto the open web.

Your boss watching your screen isn’t the end of the story. Everyone else might be watching, too. Researchers at Cybernews have uncovered a major privacy breach involving WorkComposer, a workplace surveillance app used by over 200,000 people across countless companies.

The app, designed to track productivity by logging activity and snapping regular screenshots of employees’ screens, left over 21 million images exposed in an unsecured Amazon S3 bucket, broadcasting how workers go about their day frame by frame.
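The failure mode here is simple: a storage bucket whose contents anyone on the internet could fetch. As a rough illustration of the missing control (the bucket name and AWS account ID below are hypothetical, and this is not WorkComposer's actual configuration), an S3 bucket policy that denies object reads to anyone outside the owning account can be built like this:

```python
import json

# Hypothetical identifiers, for illustration only.
BUCKET = "example-screenshot-bucket"
OWNER_ACCOUNT = "123456789012"

def deny_anonymous_read_policy(bucket: str, owner_account: str) -> str:
    """Build an S3 bucket policy (as a JSON string) that denies s3:GetObject
    to any principal outside the owning account, closing off the kind of
    anonymous read access behind this leak."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyAnonymousRead",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:GetObject",
                # Applies to every object in the bucket.
                "Resource": f"arn:aws:s3:::{bucket}/*",
                # Deny unless the caller belongs to the owning account.
                "Condition": {
                    "StringNotEquals": {"aws:PrincipalAccount": owner_account}
                },
            }
        ],
    }
    return json.dumps(policy, indent=2)

print(deny_anonymous_read_policy(BUCKET, OWNER_ACCOUNT))
```

In practice the blunter fix is simpler still: turning on S3's account-level Block Public Access settings would have prevented the bucket from being publicly readable regardless of any policy attached to it.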

[…]

WorkComposer is one of many time-tracking tools that have crept into modern work culture. Marketed as a way to keep teams “accountable,” the software logs keystrokes, tracks how long you spend on each app, and snaps desktop screenshots every few minutes.

The leak shows just how dangerous this setup becomes when basic security hygiene is ignored. A leak of this magnitude turns everyday work activity into a goldmine for cybercriminals.

[…]

The leak’s real-time nature only amplifies the danger, as threat actors could monitor unfolding business operations as they happen, giving them access to otherwise locked-down environments.

[…]

Source: Employee monitoring app exposes 21M work screens | Cybernews

Can you imagine the time lost by employers going over all this data?

Microsoft Windows’ new built-in spying tool, Microsoft Recall, is a really great idea too. Not.

Internet Archive Sued for $700M by Record Labels Over Digitising Pre-1960 Songs. Petition to Rescue the Internet Archive

A dramatic appeal hopes to ensure the survival of the nonprofit Internet Archive. The signatories of a petition, which is now open for further signatures, are demanding that the US recording industry association RIAA and participating labels such as Universal Music Group (UMG), Capitol Records, Sony Music, and Arista drop their lawsuit against the online library. The legal dispute, pending since mid-2023 and expanded in March, centers on the “Great 78” project. This project aims to save 500,000 song recordings by digitizing 250,000 records from the period 1880 to 1960. Various institutions and collectors have donated the records, which are made for 78 revolutions per minute (“shellac”), so that the Internet Archive can put this cultural treasure online.

The music companies originally demanded $372 million over the online publication of the songs, which they call “mass theft.” They recently increased their demand to $700 million for potential copyright infringement. The basis for the lawsuit is the Music Modernization Act, which US President Donald Trump approved in 2018. This includes the CLASSICS Act. This law retroactively introduces federal copyright protection for sound recordings made before 1972, which until then were protected in the US by differing state laws. The monopoly rights now apply US-wide for a good 100 years (for recordings made before 1946) or until 2067 (for recordings made between 1947 and 1972).

The lawsuit ultimately threatens the existence of the entire Internet Archive, including the widely known Wayback Machine, they say. This important public service is used by millions of people every day to access historical “snapshots” of the web. Journalists, educators, researchers, lawyers, and citizens use it to verify sources, investigate disinformation, and maintain public accountability. The legal attack also puts a “critical infrastructure of the internet” at risk – and this at a time when digital information is being deleted, overwritten, and destroyed: “We cannot afford to lose the tools that preserve memory and defend facts.” The Internet Archive was forced to delete 500,000 books as recently as 2024. It also continually struggles with IT attacks.

The case is called Universal Music Group et al. v. Internet Archive. The lawsuit was originally filed in the U.S. District Court for the Southern District of New York (Case No. 1:23-cv-07133), but is now pending in the U.S. District Court for the Northern District of California (Case No. 3:23-cv-6522). The Internet Archive takes the position that the Great 78 project does not harm the music industry. Quite the opposite: Anyone who wants to enjoy music uses commercial streaming services anyway; the old 78 rpm shellac recordings are study material for researchers.

Source: Suit of record labels: Petition to rescue the Internet Archive | heise online (NB this is a Google Translate page from the original German page)

Original page here: https://www.heise.de/news/Klage-von-Plattenlabels-Petition-zur-Rettung-des-Internet-Archive-10358777.html

How can copyright law be so incredibly wrong all the time?!

A Data Breach at Yale New Haven Health Compromised 5.5 Million Patients’ Information

[…] Yale New Haven Health (YNHHS) is a massive nonprofit healthcare network in Connecticut. Hackers stole the data of more than 5.5 million individuals during an attack in March 2025.

[…]

According to a public notice on the YNHHS website, the organization discovered “unusual activity” on its system on March 8, 2025, which was later identified as unauthorized third-party access that allowed bad actors to copy certain patient data. While the information stolen varies by individual, it may include the following:

  • Name
  • Date of birth
  • Address
  • Phone number
  • Email address
  • Race
  • Ethnicity
  • Social Security number
  • Patient type
  • Medical record number

YNHHS says the breach did not include access to medical records, treatment information, or financial data (such as account and payment information).

[…]

Source: A Data Breach at Yale New Haven Health Compromised 5.5 Million Patients’ Information | Lifehacker

Wait – race and ethnicity?!

Perplexity CEO says its browser will track everything users do online to sell ‘hyper personalized’ ads

CEO Aravind Srinivas said this week on the TBPN podcast that one reason Perplexity is building its own browser is to collect data on everything users do outside of its own app, so that it can sell premium ads.

“That’s kind of one of the other reasons we wanted to build a browser, is we want to get data even outside the app to better understand you,” Srinivas said. “Because some of the prompts that people do in these AIs is purely work-related. It’s not like that’s personal.”

And work-related queries won’t help the AI company build an accurate-enough dossier.

“On the other hand, what are the things you’re buying; which hotels are you going [to]; which restaurants are you going to; what are you spending time browsing, tells us so much more about you,” he explained.

Srinivas believes that Perplexity’s browser users will be fine with such tracking because the ads should be more relevant to them.

“We plan to use all the context to build a better user profile and, maybe you know, through our discover feed we could show some ads there,” he said.

The browser, named Comet, suffered setbacks but is on track to be launched in May, Srinivas said.

He’s not wrong, of course. Quietly following users around the internet helped Google become the roughly $2 trillion market cap company it is today.

[…]

Meta’s ad tracking technology, Pixels, which is embedded on websites across the internet, is how Meta gathers data, even on people that don’t have Facebook or Instagram accounts. Even Apple, which has marketed itself as a privacy protector, can’t resist tracking users’ locations to sell advertising in some of its apps by default.

On the other hand, this kind of thing has led people across the political spectrum in the U.S. and in Europe to distrust big tech.

The irony of Srinivas openly explaining his browser-tracking ad-selling ambitions this week also can’t be overstated.

Google is currently in court fighting the U.S. Department of Justice, which has alleged Google behaved in monopolistic ways to dominate search and online advertising. The DOJ wants the judge to order Google to divest Chrome.

Both OpenAI and Perplexity — not surprisingly, given Srinivas’ reasons — said they would buy the Chrome browser business if Google was forced to sell.

Source: Perplexity CEO says its browser will track everything users do online to sell ‘hyper personalized’ ads | TechCrunch

Yup, so even if Mozilla is making Firefox more invasive, it’s still better than these guys.

Blue Shield of California Exposed the Data of 4.7 Million People to Google for targeted advertising

Blue Shield of California shared the protected health information of 4.7 million individuals with Google over a nearly three-year period, a data breach that impacts the majority of its nearly 6 million members, according to reporting from Bleeping Computer.

This isn’t the only large data breach to affect a healthcare organization in the last year alone. Community Health Center records were hacked in October 2024, compromising more than a million individuals’ data, along with an attack on lab testing company Lab Services Cooperative, which affected records of 1.6 million Planned Parenthood patients. UnitedHealth Group suffered a breach in February 2024, resulting in the leak of more than 100 million people’s data.

What happened with Blue Shield of California?

According to an April 9 notice posted on Blue Shield of California’s website, the company allowed certain data, including protected health information, to be shared with Google Ads through Google Analytics, which may have allowed Google to serve targeted ads back to members. While not discovered until Feb. 11, 2025, the leak occurred for several years, from April 2021 to January 2024, when the connection between Google Analytics and Google Ads was severed on Blue Shield websites.

The following Blue Shield member information may have been compromised:

  • Insurance plan name, type, and group number
  • City and zip code
  • Gender
  • Family size
  • Blue Shield assigned identifiers for online accounts
  • Medical claim service date and provider
  • Patient name
  • Patient financial responsibility
  • “Find a Doctor” search criteria and results

According to the notice, no additional personal data—Social Security numbers, driver’s license numbers, and banking and credit card information—was disclosed. Blue Shield also states that no bad actor was involved, nor has it confirmed that the information has been used maliciously.

[…]

Source: Blue Shield of California Exposed the Data of 4.7 Million People to Google | Lifehacker

Tesla now seems to be remotely hacking odometers to weasel out of warranty repairs. Time to stop DMCA-type laws globally.

A lawsuit filed in February accuses Tesla of remotely altering odometer values on failure-prone cars, in a bid to push these lemons beyond the 50,000-mile warranty limit:

https://www.thestreet.com/automotive/tesla-accused-of-using-sneaky-tactic-to-dodge-car-repairs

The suit was filed by a California driver who bought a used Tesla with 36,772 miles on it. The car’s suspension kept failing, necessitating multiple servicings, and that was when the plaintiff noticed that the odometer readings for his identical daily drive were going up by ever-larger increments. This wasn’t exactly subtle: he was driving 20 miles per day, but the odometer was clocking 72.35 miles/day. Still, how many of us monitor our daily odometer readings?
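The numbers in the complaint are easy to sanity-check with a back-of-the-envelope calculation, using only the figures reported above:

```python
# Back-of-the-envelope check of the lawsuit's figures: at the inflated rate
# the plaintiff reports, the odometer crosses the 50,000-mile warranty cap
# in about six months instead of nearly two years.
ACTUAL_MILES_PER_DAY = 20.0     # the plaintiff's real daily driving
CLOCKED_MILES_PER_DAY = 72.35   # what the odometer recorded per day
START_ODOMETER = 36_772         # mileage at purchase
WARRANTY_CAP = 50_000           # warranty mileage limit

inflation_factor = CLOCKED_MILES_PER_DAY / ACTUAL_MILES_PER_DAY  # ~3.6x
miles_to_cap = WARRANTY_CAP - START_ODOMETER                     # 13,228 miles
days_at_clocked_rate = miles_to_cap / CLOCKED_MILES_PER_DAY      # ~183 days
days_at_actual_rate = miles_to_cap / ACTUAL_MILES_PER_DAY        # ~661 days
```

In other words, an odometer inflated by roughly 3.6x would burn through the remaining warranty mileage in about six months of ordinary commuting.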

In short order, his car’s odometer had rolled over the 50k mark and Tesla informed him that they would no longer perform warranty service on his lemon. Right after this happened, the new mileage clocked by his odometer returned to normal. This isn’t the only Tesla owner who’s noticed this behavior: Tesla subreddits are full of similar complaints:

https://www.reddit.com/r/RealTesla/comments/1ca92nk/is_tesla_inflating_odometer_to_show_more_range/

This isn’t Tesla’s first dieselgate scandal. In the summer of 2023, the company was caught lying to drivers about its cars’ range:

https://pluralistic.net/2023/07/28/edison-not-tesla/#demon-haunted-world

Drivers noticed that they were getting far fewer miles out of their batteries than Tesla had advertised. Naturally, they contacted the company for service on their faulty cars. Tesla then set up an entire fake service operation in Nevada that these calls would be diverted to, called the “diversion team.” Drivers with range complaints were put through to the “diverters” who would claim to run “remote diagnostics” on their cars and then assure them the cars were fine. They even installed a special xylophone in the diversion team office that diverters would ring every time they successfully deceived a driver.

These customers were then put in an invisible Tesla service jail. Their Tesla apps were silently altered so that they could no longer book service for their cars for any reason – instead, they’d have to leave a message and wait several days for a callback. The diversion center racked up 2,000 calls/week and diverters were under strict instructions to keep calls under five minutes. Eventually, these diverters were told that they should stop actually performing remote diagnostics on the cars of callers – instead, they’d just pretend to have run the diagnostics and claim no problems were found (so if your car had a potentially dangerous fault, they would falsely claim that it was safe to drive).

Most modern cars have some kind of internet connection, but Tesla goes much further. By design, its cars receive “over-the-air” updates, including updates that are adverse to drivers’ interests. For example, if you stop paying the monthly subscription fee that entitles you to use your battery’s whole charge, Tesla will send a wireless internet command to your car to restrict your driving to only half of your battery’s charge.

This means that your Tesla is designed to follow instructions that you don’t want it to follow, and, by design, those instructions can fundamentally alter your car’s operating characteristics. For example, if you miss a payment on your Tesla, it can lock its doors and immobilize itself, then, when the repo man arrives, it will honk its horn, flash its lights, back out of its parking spot, and unlock itself so that it can be driven away:

https://tiremeetsroad.com/2021/03/18/tesla-allegedly-remotely-unlocks-model-3-owners-car-uses-smart-summon-to-help-repo-agent/

Some of the ways that your Tesla can be wirelessly downgraded (like disabling your battery) are disclosed at the time of purchase. Others (like locking you out and summoning a repo man) are secret. But whether disclosed or secret, both kinds of downgrade depend on the genuinely bizarre idea that a computer that you own, that is in your possession, can be relied upon to follow orders from the internet even when you don’t want it to. This is weird enough when we’re talking about a set-top box that won’t let you record a TV show – but when we’re talking about a computer that you put your body into and race down the road at 80mph inside of, it’s frankly terrifying.

[…]

Laws that ban reverse-engineering are a devastating weapon that corporations get to use in their bid to subjugate and devour the human race.

The US isn’t the only country with a law like Section 1201 of the DMCA. Over the past 25 years, the US Trade Representative has arm-twisted nearly every country in the world into passing laws that are nearly identical to America’s own disastrous DMCA. Why did countries agree to pass these laws? Well, because they had to, or the US would impose tariffs on them:

https://pluralistic.net/2025/03/03/friedmanite/#oil-crisis-two-point-oh

The Trump tariffs change everything, including this thing. There is no reason for America’s (former) trading partners to continue to enforce the laws they passed to protect Big Tech’s right to twiddle their citizens. That goes double for Tesla: rather than merely complaining about Musk’s Nazi salutes, countries targeted by the regime he serves could retaliate against him, in a devastating fashion. By abolishing their anti-circumvention laws, countries around the world would legalize jailbreaking Teslas, allowing mechanics to unlock all the subscription features and software upgrades for every Tesla driver, as well as offering their own software mods. Not only would this tank Tesla stock and force Musk to pay back the loans he collateralized with his shares (loans he used to buy Twitter and the US presidency), it would also abolish sleazy gimmicks like hacking drivers’ odometers to get out of paying for warranty service:

https://pluralistic.net/2025/03/08/turnabout/#is-fair-play

Source: Pluralistic: Tesla accused of hacking odometers to weasel out of warranty repairs (15 Apr 2025) – Pluralistic: Daily links from Cory Doctorow

Discord Wants Your Face: Begins Testing Facial Scans for Age Verification

Discord has begun requiring some users in the United Kingdom and Australia to verify their age through a facial scan before being permitted to access sensitive content. The chat app’s new process has been described as an “experiment,” and comes in response to laws passed in those countries that place guardrails on youth access to online platforms. Discord has also been the target of concerns that it does not sufficiently protect minors from sexual content.

Users may be asked to verify their age when encountering content that has been flagged by Discord’s systems as being sensitive in nature, or when they change their settings to enable access to sensitive content. The app will ask users to scan their face through a computer or smartphone webcam; alternatively, they can scan a driver’s license or other form of ID.

[…]

Source: Discord Begins Testing Facial Scans for Age Verification

Age verification is impossible to do correctly, incredibly privacy-invasive, and a really tempting target for hackers. The UK, Australia, and every other country considering age verification are seriously endangering their citizens.

Fortunately you can always hold up a picture from a magazine in front of the webcam.

Your TV is watching you better: LG TVs’ integrated ads get more personal with tech that analyzes viewer emotions

LG TVs will soon leverage an AI model built for showing advertisements that more closely align with viewers’ personal beliefs and emotions. The company plans to incorporate a partner company’s AI tech into its TV software in order to interpret psychological factors impacting a viewer, such as personal interests, personality traits, and lifestyle choices. The aim is to show LG webOS users ads that will emotionally impact them.

The upcoming advertising approach comes via a multi-year licensing deal with Zenapse, a company describing itself as a software-as-a-service marketing platform that can drive advertiser sales “with AI-powered emotional intelligence.” LG will use Zenapse’s technology to divide webOS users into hyper-specific market segments that are supposed to be more informative to advertisers. LG Ad Solutions, LG’s advertising business, announced the partnership on Tuesday.

The technology will be used to inform ads shown on LG smart TVs’ homescreens, free ad-supported TV (FAST) channels, and elsewhere throughout webOS, per StreamTV Insider. LG will also use Zenapse’s tech to “expand new software development and go-to-market products,” it said. LG didn’t specify the duration of its licensing deal with Zenapse.

[…]

With all this information, ZenVision will group LG TV viewers into highly specified market segments, such as “goal-driven achievers,” “social connectors,” or “emotionally engaged planners,” an LG spokesperson told StreamTV Insider. Zenapse’s website for ZenVision points to other potential market segments, including “digital adopters,” “wellness seekers,” “positive impact & environment,” and “money matters.”

Companies paying to advertise on LG TVs can then target viewers based on the ZenVision-specified market segments and deliver an “emotionally intelligent ad,” as Zenapse’s website puts it.

This type of targeted advertising aims to bring advertisers more in-depth information about TV viewers than demographic data or even contextual advertising (which shows ads based on what the viewer is watching) via psychographic data. Demographic data gives advertisers viewer information, like location, age, gender, ethnicity, marital status, and income. Psychographic data is supposed to go deeper and allow advertisers to target people based on so-called psychological factors, like personal beliefs, values, and attitudes. As Salesforce explains, “psychographic segmentation delves deeper into their psyche” than relying on demographic data.

[…]

With their ability to track TV viewers’ behavior, including what they watch and search for on their TVs, smart TVs are a growing obsession for advertisers. As LG’s announcement pointed out, CTVs represent “one of the fastest-growing ad segments in the US, expected to reach over $40 billion by 2027, up from $24.6 billion in 2023.”

However, as advertisers’ interest in appealing to streamers grows, so do their efforts to track and understand viewers for more targeted advertising. Both efforts could end up pushing the limits of user comfort and privacy.

[…]

Source: LG TVs’ integrated ads get more personal with tech that analyzes viewer emotions – Ars Technica

An LG TV is not exactly a cheap thing. I am paying for the whole product, not for a service. I bought a TV, not a marketing department.

OpenDNS Quits Belgium Under Threat of Piracy Blocks or Fines of €100K Per Day after having quit France

In a brief statement citing a court order in Belgium but providing no other details, Cisco says that its OpenDNS service is no longer available to users in Belgium. Cisco’s withdrawal is almost certainly linked to an IPTV piracy blocking order obtained by DAZN; it requires OpenDNS, Cloudflare and Google to block over 100 pirate sites or face fines of €100,000 per day. Just recently, Cisco withdrew from France over a similar order.

Without assurances that hosts, domain registries, registrars, DNS providers, and consumer ISPs would not be immediately held liable for internet users’ activities, investing in the growth of the early internet may have proven less attractive.

Of course, not being held immediately liable is a far cry from not being held liable at all. After years of relatively plain sailing, multiple ISPs in the United States are currently embroiled in multimillion-dollar lawsuits for not policing infringing users. In Europe, countries including Italy and France have introduced legislation to ensure that if online services facilitate or assist piracy in any way, they can be compelled by law to help tackle it.

DNS Under Pressure

Given their critical role online, and the fact that not a single byte of infringing content has ever touched their services, some believed that DNS providers would be among the last services to be put under pressure.

After Sony sued Quad9 and wider discussions opened up soon after, in 2023 Canal+ used French law to target DNS providers. Last year, Google, Cloudflare, and Cisco were ordered to prevent their services from translating domain names into IP addresses used by dozens of sports piracy sites.

While all three companies objected, it’s understood that Cloudflare and Google eventually complied with the order. Cisco’s compliance was also achieved, albeit by its unexpected decision to suspend access to its DNS service for the whole of France and the overseas territories listed in the order.

So Long France, Goodbye Belgium

Another court order obtained by DAZN at the end of March followed a similar pattern.

Handed down by a court in Belgium, it compels the same three DNS providers to cease returning IP addresses when internet users provide the domain names of around 100 pirate sports streaming sites.

At last count, those sites were linked to over 130 domain names, which Google, in its role as a search engine operator, was also ordered to deindex from search results.

During the evening of April 5, Belgian media reported that a major blocking campaign was underway to protect content licensed by DAZN and 12th Player, most likely football matches from Belgium’s Pro League. DAZN described the action as “the first of its kind” and a “real step forward” in the fight against content piracy. Google and Cloudflare’s participation was not confirmed, but it seems likely that Cisco was not involved at all.

In a very short statement posted to the Cisco community forum, employee tom1 announced that effective April 11, 2025, OpenDNS will no longer be accessible to users in Belgium due to a court order. The nature of the order isn’t clarified, but it almost certainly refers to the order obtained by DAZN.

 


Cisco’s suspension of OpenDNS in Belgium mirrors its response to a similar court order in France. Both statements were delivered without fanfare, which may suggest that the company prefers not to be seen as taking a stand. In reality, Cisco’s reasons are currently unknown, and that has provoked some interesting comments from users on the Cisco community forum.

[…]

Source: OpenDNS Quits Belgium Under Threat of Piracy Blocks or Fines of €100K Per Day * TorrentFreak

Yup, the copyright holders are again blocking human progress on a massive scale, and corrupt politicians are creating rules that allow them to pillage whilst holding us back.

LaLiga Piracy Blocks Randomly Take Down huge innocent segments of the internet with no recourse or warning, slammed as “Unaccountable Internet Censorship”

Cloud-based web application platform Vercel is among the latest companies to find their servers blocked in Spain due to LaLiga’s ongoing IPTV anti-piracy campaign. In a statement, Vercel’s CEO and the company’s principal engineer slam “indiscriminate” blocking as an “unaccountable form of internet censorship” that has prevented legitimate customers from conducting their daily business.

Since early February, Spain has faced unprecedented yet avoidable nationwide disruption to previously functioning, entirely legitimate online services.

A court order obtained by top-tier football league LaLiga in partnership with telecommunications giant Telefonica, authorized ISP-level blocking across all major ISPs to prevent public access to pirate IPTV services and websites.

In the first instance, controversy centered on Cloudflare, where shared IP addresses were blocked by local ISPs when pirates were detected using them, regardless of the legitimate Cloudflare customers using them too.

When legal action by Cloudflare failed, in part due to a judge’s insistence that no evidence of damage to third parties had been proven before the court, joint applicants LaLiga and Telefonica continued with their blocking campaign. It began affecting innocent third parties in early February and hasn’t stopped since.

Vercel Latest Target

US-based Vercel describes itself as a “complete platform for the web.” Through the provision of cloud infrastructure and developer tools, users can deploy code from their computers and have it up and running in just seconds. Vercel is not a ‘rogue’ hosting provider that ignores copyright complaints; it takes its responsibilities very seriously.

Yet it became evident last week that blocking instructions executed by Telefonica-owned telecoms company Movistar were once again blocking innocent users, this time customers of Vercel.

 

Movistar informed of yet more adverse blocking

As the thread on X continued, Vercel CEO Guillermo Rauch was asked whether Vercel had “received any requests to remove illegal content before the blocking occurs?”

Vercel Principal Engineer Matheus Fernandes answered quickly.

 

No takedown requests, just blocks

Additional users were soon airing their grievances: ChatGPT blocked regularly on Sundays; a whole day “ruined” due to unwarranted blocking of AI code editor Cursor; blocking at Cloudflare, GitHub, BunnyCDN; the list goes on.

 


Vercel Slams “Unaccountable Internet Censorship”

In a joint statement last week, Vercel CEO Guillermo Rauch and Principal Engineer Matheus Fernandes cited the LaLiga/Telefonica court order and reported that ISPs are “blocking entire IP ranges, not specific domains or content.”

Among them, the IP addresses 66.33.60.129 and 76.76.21.142, “used by businesses like Spanish startup Tinybird, Hello Magazine, and others operating on Vercel, despite no affiliations with piracy in any form.”

[…]

The details concerning this latest blocking disaster, and the many others since February, are unavailable to the public. This lack of transparency is consistent with most if not all dynamic blocking programs around the world. With close to zero transparency, there is no accountability when blocking takes a turn for the worse, and no obvious process through which innocent parties can be fairly heard.

[…]

The hayahora.futbol project is especially impressive; it gathers evidence of blocking events, including dates, which ISPs implemented blocking, how long the blocks remained in place, and which legitimate services were wrongfully blocked.

[…]

Source: Vercel Slams LaLiga Piracy Blocks as “Unaccountable Internet Censorship” * TorrentFreak

So guys streaming a *game* can close down huge sections of the internet without accountability? How did a law like that happen without some serious corruption?

Apple to Spy on User Emails and other Data on Devices to Bolster AI Technology

Apple Inc. will begin analyzing data on customers’ devices in a bid to improve its artificial intelligence platform, a move designed to safeguard user information while still helping it catch up with AI rivals.

Today, Apple typically trains AI models using synthetic data — information that’s meant to mimic real-world inputs without any personal details. But that synthetic information isn’t always representative of actual customer data, making it harder for its AI systems to work properly.

The new approach will address that problem while ensuring that user data remains on customers’ devices and isn’t directly used to train AI models. The idea is to help Apple catch up with competitors such as OpenAI and Alphabet Inc., which have fewer privacy restrictions.

The technology works like this: It takes the synthetic data that Apple has created and compares it to a recent sample of user emails within the iPhone, iPad and Mac email app. By using actual emails to check the fake inputs, Apple can then determine which items within its synthetic dataset are most in line with real-world messages.

These insights will help the company improve text-related features in its Apple Intelligence platform, such as summaries in notifications, the ability to synthesize thoughts in its Writing Tools, and recaps of user messages.
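Apple hasn’t published code for this, but the described comparison could look something like the following toy sketch. Everything here is hypothetical (the similarity scoring, the function names); the key property from the description is that only the index of the winning synthetic candidate would leave the device, never the email itself:

```python
# Toy sketch of on-device selection: score several synthetic candidate
# messages against one real email that stays on the device, and report
# only which candidate matched best.
from collections import Counter
import math

def cosine(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_synthetic(real_email: str, candidates: list[str]) -> int:
    """Return the index of the synthetic message closest to the real one."""
    return max(range(len(candidates)),
               key=lambda i: cosine(real_email, candidates[i]))
```

Aggregated over many devices, those winning indices tell Apple which synthetic messages best resemble real mail, so the synthetic training set can be steered toward realism without collecting anyone’s actual email.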

[…]

The company will roll out the new system in an upcoming beta version of iOS and iPadOS 18.5 and macOS 15.5. A second beta test of those upcoming releases was provided to developers earlier on Monday.

[…]

Already, the company has relied on a technology called differential privacy to help improve its Genmoji feature, which lets users create a custom emoji. It uses that system to “identify popular prompts and prompt patterns, while providing a mathematical guarantee that unique or rare prompts aren’t discovered,” the company said in the blog post.

The idea is to track how the model responds in situations where multiple users have made the same request — say, asking for a dinosaur carrying a briefcase — and improving the results in those cases.
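Randomized response is the classic textbook mechanism behind this kind of guarantee. It is not necessarily Apple’s exact algorithm, but it illustrates the principle: each device adds noise locally, so no individual answer is trustworthy, yet the population-level rate is still recoverable:

```python
# Randomized response, one of the simplest differential-privacy mechanisms.
# A generic illustration of the principle (not Apple's actual code): each
# device answers truthfully with probability p, otherwise answers at random,
# and the aggregator debiases the noisy tally.
import random

def report(truth: bool, p: float, rng: random.Random) -> bool:
    """Each device tells the truth with probability p, else answers randomly."""
    return truth if rng.random() < p else rng.random() < 0.5

def debias(observed_rate: float, p: float) -> float:
    """Recover the true population rate from the noisy observed rate.

    Observed rate q = p*t + (1-p)*0.5, so t = (q - (1-p)*0.5) / p.
    """
    return (observed_rate - (1 - p) * 0.5) / p
```

Any single report is deniable (it may just be noise), but across millions of devices the debiased estimate converges on the true popularity of a prompt — exactly the “popular prompts without rare ones” trade-off described above.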

The features are only for users who are opted in to device analytics and product improvement capabilities. Those options are managed in the Privacy and Security tab within the Settings app on the company’s devices.

[…]

Source: Apple to Analyze User Data on Devices to Bolster AI Technology

EU gives staff burner phones and laptops on visits to the U.S. (as it has been doing for China)

The European Commission has started issuing burner phones and stripped-down laptops to staff visiting the U.S. over concerns that the treatment of visitors to the country has become a security risk, according to a new report from the Financial Times. And it’s just the latest news that America’s slide into fascism under Donald Trump is having severe consequences for the United States’ standing in the world, all while the president announced Monday that he has no plans to obey a U.S. Supreme Court order to bring back a man wrongly sent to a prison in El Salvador.

Officials who spoke with the Financial Times said that new guidance for EU staff traveling to the U.S. included recommendations that they not carry personal phones, turn off their burner phones when entering the country, and have “special sleeves” (presumably Faraday cages) that can protect from electronic snooping. U.S. border agents often confiscate phones and claim the right to look through anyone’s personal devices before they can be allowed to enter the U.S.

There have been several reports of researchers denied access to the U.S., including a French scientist who was reportedly stopped last month for having text messages that were critical of Trump. Other travelers from countries like Australia and Canada have reported being detained in horrendous conditions.

[…]

The U.S. is also trying to deport people in a white nationalist scheme to purge the country of any dissent. Several international students have been kidnapped by masked secret police in recent weeks, including people like Mahmoud Khalil and Rumeysa Ozturk, pro-Palestine protesters who are currently sitting in ICE detention facilities. Ozturk’s only “crime” was writing an op-ed for her student newspaper opposing Israel’s war on Gaza and she was picked up off the street near her home outside Boston and flown to Louisiana. The Trump regime has said it locked up Ozturk and is preparing to deport her for “antisemitism,” and supporting Hamas, but the Washington Post reported Sunday that the State Department’s investigation found she did no such thing.

Trump appeared for a press availability in the White House with El Salvador’s president Nayib Bukele on Monday, where he made it clear that he’s going to continue shipping people who’ve committed no crime out of the country to El Salvador’s torture prisons. The U.S. Supreme Court ruled last week that the U.S. government needs to facilitate the return of Kilmar Abrego Garcia, a Maryland man who Trump falsely accuses of being a member of the MS-13 gang, but the U.S. president made it clear he has no plans to bring Garcia back.

[…]

Source: Visitors to U.S. Take Extreme Precautions as Trump Continues March of Fascism

UK Effort to Keep Apple Encryption Fight Secret Is Blocked

A court has blocked a British government attempt to keep secret a legal case over its demand to access Apple Inc. user data in a victory for privacy advocates.

The UK Investigatory Powers Tribunal, a special court that handles cases related to government surveillance, said the authorities’ efforts were a “fundamental interference with the principle of open justice” in a ruling issued on Monday.

The development comes after it emerged in January that the British government had served Apple with a demand to circumvent encryption that the company uses to secure user data stored in its cloud services.

Apple challenged the request, while taking the unprecedented step of removing its advanced data protection feature for its British users. The government had sought to keep details about the demand — and Apple’s challenge of it — from being publicly disclosed.

[…]

Source: UK Effort to Keep Apple Encryption Fight Secret Is Blocked

Meta gets caught gaming AI benchmarks with Llama 4

tl;dr – Meta did a VW by using a special version of its AI which was optimised to score higher on the most important metric for AI performance.

Over the weekend, Meta dropped two new Llama 4 models: a smaller model named Scout, and Maverick, a mid-size model that the company claims can beat GPT-4o and Gemini 2.0 Flash “across a broad range of widely reported benchmarks.”

Maverick quickly secured the number-two spot on LMArena, the AI benchmark site where humans compare outputs from different systems and vote on the best one. In Meta’s press release, the company highlighted Maverick’s ELO score of 1417, which placed it above OpenAI’s 4o and just under Gemini 2.5 Pro. (A higher ELO score means the model wins more often in the arena when going head-to-head with competitors.)
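For reference, an Elo gap translates directly into an expected head-to-head win rate via the standard formula — nothing LMArena-specific, just the same math used in chess ratings:

```python
# Standard Elo expected-score formula: a model's expected win rate against
# an opponent depends only on the rating difference.
def elo_expected(r_a: float, r_b: float) -> float:
    """Expected win probability for a player rated r_a against one rated r_b."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
```

By this formula, equal ratings mean a 50% expected win rate, and a 400-point lead implies winning roughly 10 times out of 11 — which is why even small gaps near the top of the leaderboard are fiercely contested.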

[…]

In fine print, Meta acknowledges that the version of Maverick tested on LMArena isn’t the same as what’s available to the public. According to Meta’s own materials, it deployed an “experimental chat version” of Maverick to LMArena that was specifically “optimized for conversationality,” TechCrunch first reported.

[…]

A spokesperson for Meta, Ashley Gabriel, said in an emailed statement that “we experiment with all types of custom variants.”

“‘Llama-4-Maverick-03-26-Experimental’ is a chat optimized version we experimented with that also performs well on LMArena,” Gabriel said. “We have now released our open source version and will see how developers customize Llama 4 for their own use cases. We’re excited to see what they will build and look forward to their ongoing feedback.”

[…]

”It’s the most widely respected general benchmark because all of the other ones suck,” independent AI researcher Simon Willison tells The Verge. “When Llama 4 came out, the fact that it came second in the arena, just after Gemini 2.5 Pro — that really impressed me, and I’m kicking myself for not reading the small print.”

[…]

Source: Meta gets caught gaming AI benchmarks with Llama 4 | The Verge

EU: These are scary times – let’s backdoor encryption and make everyone unsafe!

The EU has shared its plans to ostensibly keep the continent’s denizens secure – and among the pages of bureaucratese are a few worrying sections that indicate the political union wants to backdoor encryption by 2026, or even sooner.

While the superstate has made noises about backdooring encryption before, the ProtectEU plan [PDF], launched on Monday, says the European Commission wants to develop a roadmap to allow “lawful and effective access to data for law enforcement in 2025” and a technology roadmap to do so by the following year.

“We are working on a roadmap now, and we will look at what is technically also possible,” said Henna Virkkunen, executive vice-president of the EC for tech sovereignty, security and democracy. “The problem is now that our law enforcement, they have been losing ground on criminals because our police investigators, they don’t have access to data,” she added.

“Of course, we want to protect the privacy and cyber security at the same time; and that’s why we have said here that now we have to prepare a technical roadmap to watch for that, but it’s something that we can’t tolerate, that we can’t take care of the security because we don’t have tools to work in this digital world.”

She claimed that in “85 percent” of police cases, law enforcement couldn’t access the data it needed. The proposal is to amend the existing Cybersecurity Act to allow these changes.

According to the document, the EC will set up a Security Research & Innovation Campus at its Joint Research Centre in 2026 to, somehow, work out the technical details. Since it’s impossible to backdoor encryption in a way that can’t be potentially exploited by others, it seems a very odd move to make if security’s your goal.
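Why “lawful access” and strong encryption don’t compose can be shown with a toy sketch: an escrowed key is just a key, and the math can’t tell a warrant from a breach. This is an illustration built on a deliberately toy cipher, not real cryptography:

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Illustration only,
    # not a vetted cipher.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(key: bytes, msg: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(msg, keystream(key, len(msg))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

user_key = secrets.token_bytes(32)
escrow_copy = user_key            # the mandated "lawful access" copy

ct = encrypt(user_key, b"meet at noon")

# The escrow copy decrypts the traffic...
assert decrypt(escrow_copy, ct) == b"meet at noon"

# ...and so does anyone who exfiltrates it. The math cannot
# distinguish a warrant from a breach.
stolen_key = escrow_copy
assert decrypt(stolen_key, ct) == b"meet at noon"
```

Any party holding the escrow copy, legitimate or not, gets identical plaintext; that is the whole objection.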

China, Russia, and the US certainly would spend a huge amount of time and money to find the backdoor. Even American law enforcement has given up on the cause of backdooring, although the UK still seems to be wedded to the idea.

In the meantime, for critical infrastructure (and presumably government communications), the EC wants to deploy post-quantum cryptography across the bloc, and wants it in place by 2030 at the latest.

[…]

Source: EU: These are scary times – let’s backdoor encryption! • The Register

Proton may roll away from the Swiss

The EC’s not alone in proposing changes to privacy – new laws outlined in Switzerland could force privacy-focused groups such as Proton out of the country.

Under today’s laws, police can obtain data from services like Proton if they can get a court order for certain crimes. Under the proposed laws, no court order would be required, and that, cofounder Andy Yen said, would mean Proton leaving the country.

“Swiss surveillance would be significantly stricter than in the US and the EU, and Switzerland would lose its competitiveness as a business location,” Proton’s cofounder told Swiss title Der Bund. “We feel compelled to leave Switzerland if the partial revision of the surveillance law planned by the Federal Council comes into force.”

The EU keeps banging away at this. It tried in 2018, 2020, 2021, 2023, and 2024, and fortunately it keeps getting stopped by people with enough brains to realise that you cannot have a safe backdoor: for encryption to be secure, it needs to be unbreakable for everyone.

https://www.linkielist.com/?s=eu+encryption

 

T-Mobile SyncUP Bug Reveals Names, Images, and Locations of Random Children

T-Mobile sells a little-known GPS service called SyncUP, which allows users who are parents to monitor the locations of their children. This week, an apparent glitch in the service’s system obscured the locations of users’ own children while sending them detailed information and the locations of other, random children.

404 Media first reported on the extremely creepy bug, which appears to have impacted a large number of users. The outlet notes an outpouring of consternation and concern from users on social platforms like Reddit and X, many of whom claimed to have been affected. 404 also interviewed one user, “Jenna,” who described her ordeal with the bug:

Jenna, a parent who uses SyncUP to keep track of her three-year-old and six-year-old children, logged in Tuesday and instead of seeing if her kids had left school yet, was shown the exact, real-time locations of eight random children around the country, but not the locations of her own kids. 404 Media agreed to use a pseudonym for Jenna to protect the privacy of her kids.

“I’m not comfortable giving my six-year-old a phone, but he takes a school bus and I just want to be able to see where he is in real time,” Jenna said. “I had put a 500 meter boundary around his school, so I get an alert when he’s leaving.”
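The boundary alert Jenna describes is a standard geofence: compute the great-circle (haversine) distance from the tracker’s latest fix to a center point and alert when it exceeds the radius. A rough sketch of the idea, with hypothetical coordinates:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two lat/lon points.
    R = 6_371_000  # mean Earth radius, meters
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * R * asin(sqrt(a))

def left_geofence(fix, center, radius_m=500):
    # True when the tracker's fix is outside the boundary -> send alert.
    return haversine_m(*fix, *center) > radius_m

school = (39.1653, -86.5264)  # hypothetical coordinates

assert not left_geofence((39.1650, -86.5260), school)  # still at school
assert left_geofence((39.1750, -86.5264), school)      # ~1 km away: alert
```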

Jenna sent 404 Media a series of screenshots that show her logged into the app, as well as the locations of children in other states. In the screenshots, the address-level locations of the children are available, as are their names and the last time each location was updated.
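T-Mobile hasn’t said what broke, but cross-account exposure of this shape is classically broken object-level authorization: a lookup that trusts a client-supplied ID and skips the ownership check. A hypothetical sketch of the bug class (not T-Mobile’s actual code; all names invented):

```python
# Toy tracker store keyed by tracker ID.
TRACKERS = {
    101: {"owner": "jenna", "child": "A.", "loc": (39.17, -86.53)},
    202: {"owner": "other", "child": "B.", "loc": (33.75, -84.39)},
}

def get_location_buggy(user, tracker_id):
    # Bug: returns any tracker's data, trusting the client-supplied ID.
    return TRACKERS[tracker_id]["loc"]

def get_location_fixed(user, tracker_id):
    # Fix: verify the requesting account owns the tracker before returning.
    rec = TRACKERS[tracker_id]
    if rec["owner"] != user:
        raise PermissionError("not your tracker")
    return rec["loc"]

assert get_location_buggy("jenna", 202) == (33.75, -84.39)  # the leak
try:
    get_location_fixed("jenna", 202)
except PermissionError:
    pass  # fixed path refuses cross-account reads
```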

Even more alarmingly, the woman interviewed by 404 says the company didn’t show much concern about the bug. “Jenna” says she called the company and was referred to an employee who told her a ticket had been filed for the issue. A follow-up email from the concerned mother produced no response, she said.

[…]

When reached for comment by Gizmodo, a T-Mobile spokesperson told us: “Yesterday we fully resolved a temporary system issue with our SyncUP products that resulted from a planned technology update. We are in the process of understanding potential impacts to a small number of customers and will reach out to any as needed. We apologize for any inconvenience.”

The privacy implications of such a glitch are obvious and hardly need spelling out. That said, it’s also a good reminder that the more digital access you give a company, the more potential there is for that access to fall into the wrong hands.

Source: T-Mobile Bug Reveals Names, Images, and Locations of Random Children

Indiana security prof and wife vanish after FBI raid

A tenured computer security professor at Indiana University and his university-employed wife have not been seen publicly since federal agents raided their homes late last week.

On Friday, the FBI with help from the cops searched two properties in Bloomington and Carmel, Indiana, belonging to Xiaofeng Wang, a professor at the Indiana Luddy School of Informatics, Computing, and Engineering – who’s been with the American university for more than 20 years – and Nianli Ma, a lead library systems analyst and programmer also at the university.

The university has removed the professor’s profile from its website, while the Indiana Daily Student reports Wang was axed the same day the Feds swooped. It’s said the college learned the professor had taken a job at a university in Singapore, leading to the boffin’s termination by his US employer. Ma’s university profile has also vanished.

“I can confirm the FBI Indianapolis office conducted court authorized activity at homes in Carmel and Bloomington, Indiana last Friday,” the FBI told The Register. “We have no further comment at this time.”

“The Bloomington Police Department was requested to simply assist with scene security while the FBI conducted court authorized law enforcement activity at the residence,” the police added to The Register, also declining to comment further.

Reading between the lines, it seems Prof Wang and his spouse may not necessarily be in custody, and the Feds may have raided their homes while one or both of the couple were away and possibly already abroad. According to the student news outlet, the professor hasn’t been seen for roughly the past two weeks.

Prof Wang earned his PhD in electrical and computer engineering from Carnegie Mellon University in 2004 and joined Indiana Uni that same year. Since then, he’s become a well respected member of the IT security community, publishing extensively on Apple security, e-commerce fraud, and adversarial machine learning.

Over the course of his academic career – starting in the 1990s with computer science degrees from universities in Nanjing and Shanghai, China – Prof Wang has led research projects with funding exceeding $20 million. He was named a fellow of the IEEE in 2018, the American Association for the Advancement of Science in 2022, and the Association for Computing Machinery in 2023. He reportedly pocketed more than $380,000 in salaries in 2024, while his wife was paid $85,000.

According to neighbors in Carmel, agents arrived around 0830 on March 28, announcing: “FBI, come out!” Agents were seen removing boxes of evidence and photographing the scene.

“Indiana University was recently made aware of a federal investigation of an Indiana University faculty member,” the institution told us.

“At the direction of the FBI, Indiana University will not make any public comments regarding this investigation. In accordance with Indiana University practices, Indiana University will also not make any public comments regarding the status of this individual.”

While US Immigration and Customs Enforcement, aka ICE, has recently made headlines for detaining academic visa holders, among others, there’s no indication the agency was involved in the Indiana raids. That suggests the investigation likely goes beyond immigration matters.

Context

It wouldn’t be the first time foreign academics have come under federal scrutiny. During Trump’s first term, the Department of Justice launched the so-called “China Initiative,” aimed at uncovering economic espionage and IP theft by researchers linked to China.

The effort was widely seen as a failure, with over 50 percent of investigations dropped, some professors wrongly accused, and a few ultimately found guilty of nothing more than hoarding pirated porn.

The initiative was also widely criticized as counterproductive, prompting an exodus of Chinese researchers from the US and pushing some American-based scientists to relocate to the Chinese mainland. History has seen this movie before: During the 1950s Red Scare, America booted prominent rocket scientist Qian Xuesen over suspected communist ties. He went on to become the architect of China’s missile and space programs — a move that helped Beijing get its intercontinental ballistic missiles, aka ICBMs.

Wang and Ma are still incommunicado, and presumed innocent. Fellow academics in the security industry have pointed out that this kind of action is highly unusual. Matt Blaze, Tor Project board member and the McDevitt Chair of Computer Science and Law at Georgetown University, noted that Wang’s disappearance from the university’s records, archived here, is “especially concerning.”

“It’s hard to imagine what reason there could be for the university to scrub its website as if he never worked there,” Blaze said on Mastodon.

“While there’s a process for removing tenured faculty, it takes more than an afternoon to do it.”

Source: Indiana security prof and wife vanish after FBI raid • The Register

Windows 11 is closing a loophole that let you skip making a Microsoft account

Microsoft is no longer playing around when it comes to requiring every Windows 11 device be set up with an internet-connected account. In its latest Windows 11 Insider Preview, the company says it will take out a well-known bypass script that let end users skip the requirement of connecting to the internet and logging in with a Microsoft account to get through the initialization process of a new PC.

As reported by Windows Central, Microsoft already requires users to connect to the internet, but there’s a way around it: the bypassnro command. For those setting up computers for businesses or secondary users, or for anyone who simply refuses on principle to link their computer to a Microsoft account, the command is super simple to run during the Windows setup process.

Microsoft cites security as one reason it’s making this change:

We’re removing the bypassnro.cmd script from the build to enhance security and user experience of Windows 11. This change ensures that all users exit setup with internet connectivity and a Microsoft Account.

Since the bypassnro command is disabled in the latest beta build, the change will likely be pushed to production versions within weeks. All hope is not yet lost, though: as of right now, the script can be reactivated with a registry edit by opening a command prompt during the initial setup (press Shift + F10) and running:

reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\OOBE /v BypassNRO /t REG_DWORD /d 1 /f
shutdown /r /t 0

However, there’s no guarantee Microsoft will allow this additional workaround for long. There are other workarounds as well, such as the unattended.xml automation that lets you skip the initial “out-of-box experience” setup. It’s not straightforward, though, and makes more sense for IT departments setting up multiple computers.

As of late, Microsoft has been making it harder for people to upgrade to Windows 11 while also nudging them to move on from Windows 10, which will lose support in October. The company is cracking down on the ability to install Windows 11 on older PCs that don’t support TPM 2.0, and hounding you with full-screen ads to buy a new PC. Microsoft even removed the ability to install Windows 11 with old product keys.

Source: Windows 11 is closing a loophole that let you skip making a Microsoft account | The Verge

I don’t want a cloud-based user account to run an OS on my own PC.

Your TV is watching you watch and selling that data

[…]

Your TV wants your data

The TV business traditionally included three distinct entities. There’s the hardware, namely the TV itself; the entertainment, like movies and shows; and the ads, usually just commercials that interrupt your movies and shows. In the streaming era, tech companies want to control all three, a setup also known as vertical integration. If, say, Roku makes the TV, supplies the content, and sells the ads, then it stands to control the experience, set the rates, and make the most money. That’s business!

Roku has done this very well. Although it was founded in 2002, Roku broke into the market in 2008 after Netflix invested $6 million in the company to make a set-top box that enabled any TV to stream Netflix content. It was literally called the Netflix Player by Roku. Over the course of the next 15 years, Roku would grow its hardware business to include streaming sticks, which are basically just smaller set-top boxes; wireless soundbars, speakers, and subwoofers; and, after licensing its operating system to third-party TV makers, its own affordable, Roku-branded smart TVs.

[…]

The shift toward ad-supported everything has been happening across the TV landscape. People buy new TVs less frequently these days, so TV makers want to make money off the TVs they’ve already sold. Samsung has Samsung Ads, LG has LG Ad Solutions, Vizio has Vizio Ads, and so on and so forth. Tech companies, notably Amazon and Google, have gotten into the mix too, not only making software and hardware for TVs but also leveraging the massive amount of data they have on their users to sell ads on their TV platforms. These companies also sell data to advertisers and data brokers, all in the interest of knowing as much about you as possible so they can target you more effectively. It could even be used to train AI.

[…]

Is it possible to escape the ads?

Breaking free from this ad prison is tough. Most TVs on the market today come with a technology called automatic content recognition (ACR) built in. This is basically Shazam for TV — Shazam itself helped popularize the tech — and it gives smart TV platforms the ability to monitor what’s on screen by taking screenshots or capturing audio snippets as you watch. (This happens at the signal level, not from actual microphone recordings by the TV.)

Advertisers and TV companies use ACR tech to collect data about your habits that are otherwise hard to track, like if you watch live TV with an antenna. They use that data to build out a profile of you in order to better target ads. ACR also works with devices, like gaming consoles, that you plug into your TV through HDMI cables.
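Mechanically, ACR is fingerprint matching: the platform periodically fingerprints a frame or audio snippet and looks it up in a reference database of known content. A heavily simplified sketch in the spirit of an average-hash video fingerprint (real systems are far more robust; the titles and pixel values here are invented):

```python
def frame_hash(pixels):
    # Average hash: one bit per pixel, set if brighter than the frame mean.
    mean = sum(pixels) / len(pixels)
    return tuple(p > mean for p in pixels)

def hamming(a, b):
    # Number of differing bits between two fingerprints.
    return sum(x != y for x, y in zip(a, b))

# Reference database: fingerprints of known content (toy 8-value "frames").
REFERENCE = {
    "Movie X, scene 12": frame_hash([10, 200, 30, 180, 20, 190, 25, 170]),
    "Ad for Brand Y":    frame_hash([90, 91, 92, 90, 250, 251, 252, 250]),
}

def identify(captured, max_dist=1):
    # Nearest-neighbor lookup within a Hamming-distance threshold.
    title, fp = min(REFERENCE.items(), key=lambda kv: hamming(captured, kv[1]))
    return title if hamming(captured, fp) <= max_dist else None

# A slightly noisy capture of the same scene still matches.
capture = frame_hash([12, 198, 33, 178, 22, 188, 27, 168])
assert identify(capture) == "Movie X, scene 12"
```

The fuzzy threshold is the point: the fingerprint survives compression and scaling, so the match works no matter what device feeds the screen.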

Yash Vekaria, a PhD candidate at UC Davis, called the HDMI spying “the most egregious thing we found” in his research for a paper published last year on how ACR technology works. And I have to admit that I had not heard of ACR until I came across Vekaria’s research.

[…]

Unfortunately, you don’t have much of a choice when it comes to ACR on your TV. You probably enabled the technology when you first set up your TV and accepted its privacy policy. If you refuse to do this, a lot of the functions on your TV won’t work. You can also accept the policy and then disable ACR on your TV’s settings, but that could disable certain features too. In 2017, Vizio settled a class-action lawsuit for tracking users by default. If you want to turn off this tracking technology, here’s a good guide from Consumer Reports that explains how for most types of smart TVs.

[…]

it does bug me, just on principle, that I have to let a tech company wiretap my TV in order to enjoy all of the device’s features.

[…]

Source: Roku’s Moana 2 controversy is part of a bigger ad problem | Vox

Yes, let’s “Make it Fair” – by recognising that copyright has failed to reward creators properly

A few weeks ago, the UK’s regional and national daily news titles ran similar front covers, exhorting the government there to “Make it Fair”. The campaign Web site explained:

Tech companies use creative content, such as news articles, books, music, film, photography, visual art, and all kinds of creative work, to train their generative AI models.

Publishers and creators say that doing this without proper controls, transparency or fair payment is unfair and threatens their livelihoods.

Under new UK proposals, creators will be able to opt out of their works being used for training purposes, but the current campaign wants more than that:

Creators argue this [opt-out] puts the burden on them to police their work and that tech companies should pay for using their content.

The campaign Web site then uses a familiar trope:

Tech giants should not profit from stolen content, or use it for free.

But the material is not stolen: it is simply analysed as part of the AI training. Analysing texts or images is about knowledge acquisition, not copyright infringement. Once again, the copyright industries are trying to place a (further) tax on knowledge. Moreover, levying that tax is completely impractical. Since there is no way to determine which works were used during training to produce any given output, the payments would have to be apportioned according to each work’s contribution to the training material that went into creating the generative AI system itself. A Walled Culture post back in October 2023 noted that the amounts would be extremely small, because of the sheer quantity of training data that is used. Any monies collected from AI companies would therefore have to be handed over in aggregate, either to yet another inefficient collection society, or to the corporate intermediaries. For this reason, there is no chance that creators would benefit significantly from any AI tax.

We’ve been here before. Five years ago, I wrote a post about the EU Copyright Directive’s plans for an ancillary copyright, also known as the snippet or link tax. One of the key arguments by the newspaper publishers was that this new tax was needed so that journalists were compensated when their writing appeared in search results and elsewhere. As I showed back then, the amounts involved would be negligible. In fact, few EU countries have even bothered to implement the provision on allocating a share to journalists, underlining how pointless it all was. At the time, the European Commission insisted on behalf of its publishing friends that ancillary copyright was absolutely necessary because:

The organisational and financial contribution of publishers in producing press publications needs to be recognised and further encouraged to ensure the sustainability of the publishing industry.

Now, on the new Make it Fair Web site we find a similar claim about sustainability:

We’re calling on the government to ensure creatives are rewarded properly so as to ensure a sustainable future for AI and the creative industries.

As with the snippet tax, an AI tax is not going to do that, since the sums involved are so small. A post on the News Media Association reveals what is the real issue here:

The UK’s creative industries have today launched a bold campaign to highlight how their content is at risk of being given away for free to AI firms as the government proposes weakening copyright law.

Walled Culture has noted many times that it is a matter of dogma for the industries involved that copyright must only ever get stronger, as if on a one-way ratchet. The fear is evidently that once it has been “weakened” in some way, a precedent would be set, and other changes might be made to give more rights to ordinary people (perish the thought) rather than to companies. It’s worth pointing out that the copyright world is deploying its usual sleight of hand here, writing:

The government must stand with the creative industries that make Britain great and enforce our copyright laws to allow creatives to assert their rights in the age of AI.

A fair deal for artists and writers isn’t just about making things right, it is essential for the future of creativity and AI.

Who could be against this call for the UK government to defend the poor artists and writers? No one, surely? But the way to do that, according to Make it Fair, is to “stand with the creative industries”. In other words, give the big copyright companies more power to act as gatekeepers, on the assumption that their interests are perfectly aligned with those of the struggling creators.

They are not. As Walled Culture the book explores in some detail (free digital versions available), the vast majority of those “artists and writers” invoked by the “Make it Fair” campaign are unable to make a decent living from their work under copyright. Meanwhile, huge global corporations enjoy fat profits as a result of that same creativity, but give very little back to the people who did all the work.

There are serious problems with the new AI offerings, and big tech companies definitely need to be reined in for many things, but not for their basic analysis of text and images. If publishers really want to “Make it Fair”, they should start by rewarding their own authors fairly, with more than the current pittance. And if they won’t do that, as seems likely given their history of exploitation, creators should explore some of the ways they can make a decent living without them. Notably, many of these have no need for a copyright system that is the epitome of unfairness, which is precisely why publishers are so desperate to defend it in this latest coordinated campaign.

Source: Yes, let’s “Make it Fair” – by recognising that copyright has failed to reward creators properly – Walled Culture

HP settles lawsuit for $0 after bricking printers that don’t use HP ink

HP Inc. has settled a class action lawsuit in which it was accused of unlawfully blocking customers from using third-party toner cartridges – a practice that left some with useless printers – but won’t pay a cent to make the case go away.

One of the named plaintiffs in the case is called Mobile Emergency Housing Corp (MEHC) and works with emergency management organizations and government agencies to provide shelters for disaster victims and first responders across the US and Caribbean.

According to court documents [PDF], MEHC bought an HP Color LaserJet Pro M254 in August 2019. In October 2020, the org used toner cartridges from third-party supplier Greensky rather than pay for HP’s premium-priced toner.

A month later, HP sent or activated a firmware update – part of its so-called “Dynamic Security” measures – rendering MEHC’s printers incompatible with third-party toner cartridges like those from Greensky.
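HP hasn’t published the internals of Dynamic Security; broadly, schemes of this kind authenticate a chip on the cartridge, typically via a keyed challenge-response, and the firmware refuses supplies that can’t answer. A purely illustrative sketch, with every key and name invented:

```python
import hashlib
import hmac
import secrets

VENDOR_KEY = b"hypothetical-shared-secret"  # invented for illustration

class Cartridge:
    def __init__(self, key=None):
        self.key = key  # only "genuine" chips hold the vendor key

    def respond(self, challenge):
        if self.key is None:
            return b""  # third-party chip can't answer the challenge
        return hmac.new(self.key, challenge, hashlib.sha256).digest()

def firmware_accepts(cart):
    # Challenge-response check a Dynamic-Security-style firmware might run.
    challenge = secrets.token_bytes(16)
    expected = hmac.new(VENDOR_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(cart.respond(challenge), expected)

assert firmware_accepts(Cartridge(VENDOR_KEY))  # OEM cartridge prints
assert not firmware_accepts(Cartridge())        # third-party refused
```

A firmware update can tighten such a check at any time, which is how a cartridge that worked on Monday can be rejected on Tuesday.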

When MEHC’s CEO Joseph James tried to print out a document, he got an error message instead.

The same thing happened to another plaintiff, Performance Automotive, which purchased an HP Color LaserJet Pro MFP M281fdw in 2018 and also installed a firmware update that prevented the machine from working when third-party toner cartridges were present.

HP is not shy about why it does this: in 2024, CEO Enrique Lores told the Davos World Economic Forum, “We lose money on the hardware, we make money on the supplies.”

[…]

Incidentally, HP’s printing division reported $4.5 billion in net revenue in fiscal year 2024.

Lores has also argued that using third-party suppliers is a security risk, claiming malware could theoretically be slipped into cartridge controller chips. The Register is unaware of this happening outside a lab. He’s also pitched HP’s own gear as the greener choice, pointing to its cartridge recycling program.

MEHC, Performance Automotive, (and many readers) disagree and would like to choose their own toner.

Thus, a lawsuit was launched, but rather than fight its case in court, HP has, once again, chosen to settle the case privately with no admission of guilt.

“HP denies that it did anything wrong,” its settlement notice reads. “HP agrees under the Settlement to continue making certain disclosures about its use of Dynamic Security, and to continue to provide printer users with the option to either install or decline to install firmware updates that include Dynamic Security.”

[…]

Source: HP settles lawsuit after killing first responder’s printers • The Register