Neurotech companies are selling your brain data, senators warn

Three Democratic senators are sounding the alarm over brain-computer interface (BCI) technologies’ ability to collect — and potentially sell — our neural data. In a letter to the Federal Trade Commission (FTC), Sens. Chuck Schumer (D-NY), Maria Cantwell (D-WA), and Ed Markey (D-MA) called for an investigation into neurotechnology companies’ handling of user data, and for tighter regulations on their data-sharing policies.

“Unlike other personal data, neural data — captured directly from the human brain — can reveal mental health conditions, emotional states, and cognitive patterns, even when anonymized,” the letter reads. “This information is not only deeply personal; it is also strategically sensitive.”

While the concept of neural technologies may conjure up images of brain implants like Elon Musk’s Neuralink, there are far less invasive — and less regulated — neurotech products on the market, including headsets that help people meditate, purportedly trigger lucid dreaming, or promise to improve online dating by letting users swipe through apps “based on your instinctive reaction.” These consumer products gobble up users’ neural data — and since they aren’t categorized as medical devices, the companies behind them aren’t barred from sharing that data with third parties.

“Neural data is the most private, personal, and powerful information we have—and no company should be allowed to harvest it without transparency, ironclad consent, and strict guardrails. Yet companies are collecting it with vague policies and zero transparency,” Schumer told The Verge via email.

The letter cites a 2024 report by the Neurorights Foundation, which found that most neurotech companies not only have few safeguards on user data but also have the ability to share sensitive information with third parties. The report looked at the data policies of 30 consumer-facing BCI companies and found that all but one “appear to have access to” users’ neural data, “and provide no meaningful limitations to this access.” The Neurorights Foundation only surveyed companies whose products are available to consumers without the help of a medical professional; implants like those made by Neuralink weren’t among them.

The companies surveyed by the Neurorights Foundation make it difficult for users to opt out of having their neurological data shared with third parties. Just over half the companies mentioned in the report explicitly let consumers revoke consent for data processing, and only 14 of the 30 give users the ability to delete their data. In some instances, user rights aren’t universal — for example, some companies only let users in the European Union delete their data but don’t grant the same rights to users elsewhere in the world.

To safeguard against potential abuses, the senators are calling on the FTC to:

  • investigate whether neurotech companies are engaging in unfair or deceptive practices that violate the FTC Act
  • compel companies to report on data handling, commercial practices, and third-party access
  • clarify how existing privacy standards apply to neural data
  • enforce the Children’s Online Privacy Protection Act as it relates to BCIs
  • begin a rulemaking process to establish safeguards for neural data and set limits on secondary uses like AI training and behavioral profiling
  • ensure that both invasive and noninvasive neurotechnologies are subject to baseline disclosure and transparency standards, even when the data is anonymized

Though the senators’ letter calls out Neuralink by name, Musk’s brain implant tech is already subject to more regulations than other BCI technologies. Since Neuralink’s brain implant is considered a “medical” technology, it’s required to comply with the Health Insurance Portability and Accountability Act (HIPAA), which safeguards people’s medical data.

Stephen Damianos, the executive director of the Neurorights Foundation, said that HIPAA may not have entirely caught up to existing neurotechnologies, especially with regard to “informed consent” requirements.

“There are long-established and validated models for consent from the medical world, but I think there’s work to be done around understanding the extent to which informed consent is sufficient when it comes to neurotechnology,” Damianos told The Verge. “The analogy I like to give is, if you were going through my apartment, I would know what you would and wouldn’t find in my apartment, because I have a sense of what exactly is in there. But brain scans are overbroad, meaning they collect more data than what is required for the purpose of operating a device. It’s extremely hard — if not impossible — to communicate to a consumer or a patient exactly what can today and in the future be decoded from their neural data.”

Data collection becomes even trickier for “wellness” neurotechnology products, which don’t have to comply with HIPAA, even when they advertise themselves as helping with mental health conditions like depression and anxiety.

Damianos said there’s a “very hazy gray area” between medical devices and wellness devices.

“There’s this increasingly growing class of devices that are marketed for health and wellness as distinct from medical applications, but there can be a lot of overlap between those applications,” Damianos said. The dividing line is often whether a medical intermediary is required to help someone obtain a product, or whether they can “just go online, put in your credit card, and have it show up in a box a few days later.”

There are very few regulations on neurotechnologies advertised as being for “wellness.” In April 2024, Colorado passed the first-ever legislation protecting consumers’ neural data. The state updated its existing Consumer Protection Act, which protects users’ “sensitive data.” Under the updated legislation, “sensitive data” now includes “biological data” like biological, genetic, biochemical, physiological, and neural information. And in September 2024, California amended its Consumer Privacy Act to protect neural data.

“We believe in the transformative potential of these technologies, and I think sometimes there’s a lot of doom and gloom about them,” Damianos told The Verge. “We want to get this moment right. We think it’s a really profound moment that has the potential to reshape what it means to be human. Enormous risks come from that, but we also believe in leveraging the potential to improve people’s lives.”

Source: Neurotech companies are selling your brain data, senators warn | The Verge

AI Helps Unravel a Cause of Alzheimer’s Disease and Identify a Therapeutic Candidate

[Graphic: Alzheimer’s severity increases with PHGDH expression]

A new study found that a gene recently recognized as a biomarker for Alzheimer’s disease is actually a cause of it, due to its previously unknown secondary function. Researchers at the University of California San Diego used artificial intelligence to help both unravel this mystery of Alzheimer’s disease and discover a potential treatment that obstructs the gene’s moonlighting role.

[…]

Zhong and his team took a closer look at phosphoglycerate dehydrogenase (PHGDH), which they had previously discovered as a potential blood biomarker for early detection of Alzheimer’s disease. In a follow-up study, they later found that expression levels of the PHGDH gene directly correlated with changes in the brain in Alzheimer’s disease; in other words, the higher the levels of protein and RNA produced by the PHGDH gene, the more advanced the disease.
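
To make that correlational claim concrete, here is a minimal sketch of the kind of rank-correlation check it implies. The numbers and the staging scale are made up for illustration; this is not the study’s data or code.

```python
# Minimal sketch with invented data: does PHGDH expression rise with disease stage?
from scipy.stats import spearmanr

# Hypothetical per-sample values: PHGDH RNA expression vs. a disease-stage score
# (placeholder 0-6 scale, loosely analogous to clinical staging).
phgdh_expression = [1.1, 1.4, 1.3, 2.0, 2.6, 2.4, 3.1, 3.5]
disease_stage = [0, 1, 1, 2, 3, 3, 5, 6]

rho, p_value = spearmanr(phgdh_expression, disease_stage)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3g})")  # positive rho: expression rises with stage
```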

[…]

Using mice and human brain organoids, the researchers found that altering the amount of PHGDH expression had consequential effects on Alzheimer’s disease: lower levels corresponded to less disease progression, whereas higher levels led to more disease advancement. Thus, the researchers established that PHGDH is indeed a causal gene for spontaneous Alzheimer’s disease.

In further support of that finding, the researchers determined—with the help of AI—that PHGDH plays a previously undiscovered role: it triggers a pathway that disrupts how cells in the brain turn genes on and off. And such a disturbance can cause issues, like the development of Alzheimer’s disease.

[…]

Another Alzheimer’s project in his lab, which did not focus on PHGDH, changed all this. A year ago, that project revealed a hallmark of Alzheimer’s disease: a widespread imbalance in the brain in the process where cells control which genes are turned on and off to carry out their specific roles.

The researchers were curious if PHGDH had an unknown regulatory role in that process, and they turned to modern AI for help.

With AI, they could visualize the three-dimensional structure of the PHGDH protein. Within that structure, they discovered a substructure very similar to the DNA-binding domain of a known class of transcription factors. The similarity lies solely in the structure, not in the protein sequence.

Zhong said, “It really demanded modern AI to formulate the three-dimensional structure very precisely to make this discovery.”
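
To illustrate what “similar in structure but not in sequence” means in practice, here is a minimal sketch using Biopython. It is not the study’s actual pipeline: the file names, chain IDs, residue ranges, and sequences are placeholders, and real fold comparisons would typically use dedicated tools such as TM-align or Foldseek.

```python
# Minimal sketch: contrast structural similarity (C-alpha RMSD after superposition)
# with sequence similarity (alignment identity) for two protein segments.
# All inputs below are hypothetical placeholders.
from Bio import pairwise2
from Bio.PDB import PDBParser, Superimposer

parser = PDBParser(QUIET=True)
phgdh = parser.get_structure("PHGDH", "phgdh_predicted.pdb")  # e.g. an AI-predicted model
dbd = parser.get_structure("DBD", "tf_dbd.pdb")               # a known DNA-binding domain

def ca_atoms(structure, chain_id, start, end):
    """Collect C-alpha atoms for a residue number range."""
    atoms = []
    for res in structure[0][chain_id]:
        if start <= res.id[1] <= end and "CA" in res:
            atoms.append(res["CA"])
    return atoms

# Placeholder residue ranges for the putative DNA-binding-like substructure.
fixed = ca_atoms(dbd, "A", 1, 60)
moving = ca_atoms(phgdh, "A", 200, 259)
n = min(len(fixed), len(moving))  # Superimposer needs equal-length atom lists

sup = Superimposer()
sup.set_atoms(fixed[:n], moving[:n])
print(f"C-alpha RMSD after superposition: {sup.rms:.2f} A")  # low RMSD -> similar fold

# Sequence comparison of the same two segments (placeholder sequences).
seq_a = "MKLVINDPIAQRSTWYEDGH"
seq_b = "GRKPTQWERLNDFCASVMIH"
aln = pairwise2.align.globalxx(seq_a, seq_b, one_alignment_only=True)[0]
identity = sum(a == b and a != "-" for a, b in zip(aln.seqA, aln.seqB)) / len(aln.seqA)
print(f"Sequence identity: {identity:.0%}")  # can stay low even when the folds match
```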

After discovering the substructure, the team demonstrated that the protein can use it to activate two critical target genes. That throws off the delicate balance, leading to several problems and eventually the early stages of Alzheimer’s disease. In other words, PHGDH has a previously unknown role, independent of its enzymatic function, that leads through a novel pathway to spontaneous Alzheimer’s disease.

That ties back to the team’s earlier studies: the PHGDH gene produced more protein in the brains of Alzheimer’s patients than in control brains, and those increased amounts of the protein triggered the imbalance. While everyone has the PHGDH gene, the difference comes down to the expression level of the gene, or how much protein is made by it.

[…]

Given that PHGDH is such an important enzyme, there are past studies on its possible inhibitors. One small molecule, known as NCT-503, stood out to the researchers because it is largely ineffective at impeding PHGDH’s enzymatic activity (the production of serine), which they did not want to change. NCT-503 is also able to penetrate the blood-brain barrier, a desirable characteristic.

They turned to AI again for three-dimensional visualization and modeling. They found that NCT-503 can access that DNA-binding substructure of PHGDH, thanks to a binding pocket. With more testing, they saw that NCT-503 does indeed inhibit PHGDH’s regulatory role.

When the researchers tested NCT-503 in two mouse models of Alzheimer’s disease, they saw that it significantly alleviated Alzheimer’s progression. The treated mice demonstrated substantial improvement in their memory and anxiety tests. These tests were chosen because Alzheimer’s patients suffer from cognitive decline and increased anxiety.

The researchers do acknowledge limitations of their study. One is that there is no perfect animal model for spontaneous Alzheimer’s disease. They could test NCT-503 only in the mouse models that are available, which carry mutations in known disease-causing genes.

Still, the results are promising, according to Zhong.

[…]

Source: AI Helps Unravel a Cause of Alzheimer’s Disease and Identify a Therapeutic Candidate

Can the EU’s Dual Strategy of Regulation and Investment Redefine AI Leadership?

Beyond sharing a pair of vowels, AI and the EU both present significant challenges when it comes to setting the right course. This article makes the case that reducing regulation for large general-purpose AI providers under the EU’s competitiveness agenda is not a silver bullet for catching Europe up to the US and China, and would only serve to entrench European dependencies on US tech. Instead, by combining its regulatory toolkit and ambitious investment strategy, the EU is uniquely positioned to set the global standard for trustworthy AI and pursue its tech sovereignty. It is an opportunity that Europe must take.

Recent advances in AI have drastically shortened the improvement cycle from years to months, thanks to new inference-time compute techniques that enable self-prompting, chain-of-thought reasoning in models like OpenAI’s o1 and DeepSeek’s R1. However, these rapid gains also increase risks like AI-enabled cyber offenses and biological attacks. Meanwhile, the EU and France recently committed €317 billion to AI development in Europe, joining a global race with comparably large announcements from both the US and China.

Now turning to EU AI policy, the newly established AI Office and 13 independent experts are nearing the end of a nine-month multistakeholder drafting process of the Code of Practice (CoP): the voluntary technical detailing of the AI Act’s mandatory provisions for general-purpose AI providers. The vast majority of the rules will apply to only the largest model providers, ensuring proportionality: the protection of SMEs, start-ups, and other downstream industries. In the meantime, the EU has fully launched a competitiveness agenda, with the Commission’s recently published Competitiveness Compass and first omnibus simplification package outlining plans for widespread streamlining of reporting obligations, amidst mounting pushback against this simplified narrative. Add to this the recent withdrawal of the AI Liability Directive, and it’s clear to see which way the political winds are blowing.

So why must this push for simplification be replaced by a push for trustworthy market creation in the case of general-purpose AI and the Code of Practice? I’ll make three main points: 1) Regulation is not the reason for Europe lacking Big Tech companies, 2) Sweeping deregulation creates legal uncertainty and liability risks for downstream deployers, and slows trusted adoption of new technologies and thereby growth, 3) Watering down the CoP for upstream model providers with systemic risk will almost exclusively benefit large US incumbents, entrenching dependency and preventing tech sovereignty.

[…]

The EU’s tech ecosystem had ample time to emerge in the years preceding and following the turn of the century, free of so-called “red tape,” but this did not happen and will not again through deregulation […] One reason presented by Bradford is that the European digital single market still remains fragmented, with differing languages, cultures, consumer preferences, administrative obstacles, and tax regimes preventing large tech companies from seamlessly growing within the bloc and throughout the world. Even more fragmented are the capital markets of the EU, resulting in poor access to venture capital for tech start-ups and scale-ups. Additional points include harsh, national-level bankruptcy laws that are “creditor-oriented” in the EU, compared to more forgiving “debtor-friendly” equivalents in the US, resulting in lower risk appetite for European entrepreneurs. Finally, skilled migration is significantly more streamlined in the US, with federal-level initiatives like the H-1B visa leading to the majority of Big Tech CEOs hailing from overseas.

[…]

The downplaying of regulation as Europe’s AI hindrance has been repeated by leading industry voices such as US VC firm a16z, European VC firm Merantix Capital, and French provider MistralAI. To reiterate: the EU ‘lagging behind’ on trillion-dollar tech companies and the accompanying innovation was not a result of regulation before there was regulation, and is also not a result of regulation after.

[…]

Whether for planes, cars, or drugs, early use of dangerous new technologies, without accompanying rules, saw frequent preventable accidents, reducing consumer trust and slowing market growth. Now, with robust checks and balances in place from well-resourced regulatory authorities, such markets have been able to thrive, providing value and innovation to citizens. Other sectors, like nuclear energy and, more recently, crypto, have suffered from an initial lack of regulation, causing industry corner-cutting, leading to infamous disasters (from Fukushima to the collapse of FTX) from which public trust has been difficult to win back. Regulators around the world are currently risking the same fate for AI.

This point is particularly relevant for so-called ‘downstream deployers’: companies that build applications on top of underlying models provided (usually) by Big Tech. Touted by European VC leader Robert Lacher as Europe’s “huge opportunity” in AI, downstream deployers, particularly SMEs, stand to gain from the Code of Practice, which ensures that necessary regulatory checks and balances occur upstream, at the level of the model provider.

[…]

Finally, the EU’s enduring and now potentially crippling dependency on US technology companies has, importantly, been addressed by the new Commission, best exemplified by the title of Executive Vice President Henna Virkkunen’s file: Tech Sovereignty, Security and Democracy. With the last few months’ geopolitical developments, including all-time-low transatlantic relations and an unfolding trade war, some have gone as far as warning of the possibility of US technology being used for surveillance of Europe and of the US sharing intelligence with Russia. Clearly, the urgency of tech sovereignty has drastically increased. A strong Code of Practice would return agency to the EU, ensuring that US upstream incumbents meet basic security, safety, and ethical standards whilst also easing the EU’s AI adoption problem by ensuring the technology is truly trustworthy.

So, concretely, what needs to be done? Bruegel economist Mario Mariniello summed it up concisely: “On tech regulation, the European Union should be bolder.”

[…]

This article has outlined why deregulating highly capable AI models, produced by the world’s largest companies, is not a solution to Europe’s growth problem. Instead of stripping back the obligations that ensure protections for European citizens, the EU must combine its ambitious AI investment plan with boldly pursuing leadership in setting global standards, accelerating trustworthy adoption, and ensuring tech sovereignty. This combination will put Europe on the right path to drive this technological revolution forward for the benefit of all.

Source: Can the EU’s Dual Strategy of Regulation and Investment Redefine AI Leadership? | TechPolicy.Press

Europe’s Tech Sovereignty Demands More Than Competitiveness

BRUSSELS – As part of his confrontational stance toward Europe, US President Donald Trump could end up weaponizing critical technologies. The European Union must appreciate the true nature of this threat instead of focusing on competing with the US as an economic ally. To achieve true tech sovereignty, the EU should transcend its narrow focus on competitiveness and deregulation and adopt a far more ambitious strategy.

[…]

Europe’s growing anxiety about competitiveness is fueled by its inability to challenge US-based tech giants where it counts: in the market. As the Draghi report points out, the productivity gap between the United States and the EU largely reflects the relative weakness of Europe’s tech sector. Recent remarks by European Commission President Ursula von der Leyen and Tech Commissioner Henna Virkkunen suggest that policymakers have taken Draghi’s message to heart, making competitiveness the central focus of EU tech policy. But this singular focus is both insufficient and potentially counterproductive at a time of technological and geopolitical upheaval. While pursuing competitiveness could reduce Big Tech’s influence over Europe’s economy and democratic institutions, it could just as easily entrench it. European leaders’ current fixation on deregulation – turbocharged by the Draghi report – leaves EU policymaking increasingly vulnerable to lobbying by powerful corporate interests and risks legitimizing policies that are incompatible with fundamental European values.

As a result, the European Commission’s deregulatory measures – including its recent decision to shelve draft AI and privacy rules, and its forthcoming “simplification” of tech legislation including the GDPR – are more likely to benefit entrenched tech giants than they are to support startups and small and medium-size enterprises. Meanwhile, Europe’s hasty and uncritical push for “AI competitiveness” risks reinforcing Big Tech’s tightening grip on the AI technology stack.

It should come as no surprise that the Draghi report’s deregulatory agenda was warmly received in Silicon Valley, even by Elon Musk himself. But the ambitions of some tech leaders go far beyond cutting red tape. Musk’s use of X (formerly Twitter) and Starlink to interfere in national elections and the war in Ukraine, together with the Trump administration’s brazen attacks on EU tech regulation, show that Big Tech’s quest for power poses a serious threat to European sovereignty.

Europe’s most urgent task, then, is to defend its citizens’ rights, sovereignty, and core values from increasingly hostile American tech giants and their allies in Washington. The continent’s deep dependence on US-controlled digital infrastructure – from semiconductors and cloud computing to undersea cables – not only undermines its competitiveness by shutting out homegrown alternatives but also enables the owners of that infrastructure to exploit it for profit.

[…]

Strong enforcement of competition law and the Digital Markets Act, for example, could curb Big Tech’s influence while creating space for European startups and challengers to thrive. Similarly, implementing the Digital Services Act and the AI Act will protect citizens from harmful content and dangerous AI systems, empowering Europe to offer a genuine alternative to Silicon Valley’s surveillance-driven business models. Against this backdrop, efforts to develop homegrown European alternatives to Big Tech’s digital infrastructure have been gaining momentum. A notable example is the so-called “Eurostack” initiative, which should be viewed as a key step in defending Europe’s ability to act independently.

[…]

A “competitive” economy holds little value if it comes at the expense of security, a fair and safe digital environment, civil liberties, and democratic values. Fortunately, Europe doesn’t have to choose. By tackling its technological dependencies, protecting democratic governance, and upholding fundamental rights, it can foster the kind of competitiveness it truly needs.

Source: Europe’s Tech Sovereignty Demands More Than Competitiveness by Marietje Schaake & Max von Thun – Project Syndicate

Deregulation has led to a host of problems globally: the monopoly/duopoly problems we can’t seem to deal with; reliance on external markets and companies that whimsically change their minds; unsustainable hardware and software choices that leave devices brickable, poorly secured, and irreparable; vendor lock-in to closed-source ecosystems; damage to innovation; privacy invasions that lead to hacking attacks; and so on. As Europe we can make our own choices about our own values – we are not driven by the singular motive of profit. European values are inclusive and also promote things like education and happiness.

Content Moderation or Security Theater? How Social Media Platforms Really Enforce Community Guidelines

[…] To better understand how major platforms moderate content, we studied and compared the community guidelines of Meta, TikTok, YouTube, and X.

We must note that platforms’ guidelines often evolve, so the information used in this study is based only on the latest available data at the time of publication. Moreover, the strictness and regularity of policy implementation may vary per platform.

Content Moderation

We were able to categorize 3 main methods of content moderation in major platforms’ official policies: AI-based enforcement, human or staff review, and user reporting.

[Chart: Content moderation practices (AI enforcement, human review, and user reporting) across Meta, TikTok, YouTube, and X]

Notably, TikTok is the only platform that doesn’t officially employ all 3 content moderation methods. It only clearly defines the process of user reporting, although it mentions that it relies on a “combination of safety approaches.” Content may go through an automated review (especially content from accounts with previous violations) and human moderation when necessary.

Human or staff review and AI enforcement are observed in the other 3 platforms’ policies. In most cases, the platforms claim to employ the methods hand-in-hand. YouTube and X (formerly Twitter) describe using a combination of machine learning and human reviewers. Meta has a unique Oversight Board that manages more complicated cases.

Criteria for Banning Accounts

| Criteria | Meta | TikTok | YouTube | X |
|---|---|---|---|---|
| Severe single violation | Yes | Yes | Yes | Yes |
| Repeated violations | Yes | Yes | Yes | Yes |
| Circumventing enforcement | No | Yes | No | Yes |

All platform policies include the implementation of account bans for repeat or single “severe” violations. Of the 4 platforms, TikTok and X are the only ones to include circumventing moderation enforcement as additional grounds for account banning.

Content Restrictions

| Platform | Age Restrictions | Adult Content | Gore | Graphic Violence |
|---|---|---|---|---|
| Meta | 10-12 (supervised), 13+ | Allowed with conditions | Allowed with conditions | Allowed with conditions |
| TikTok | 13+ | Prohibited | Allowed with conditions | Prohibited |
| YouTube | Varies | Prohibited | Prohibited | Prohibited |
| X | 18+ | Allowed (with labels) | Allowed with conditions | Prohibited |

Content depicting graphic violence is the most widely prohibited in platforms’ policies, with only Meta allowing it with conditions (the content must be “newsworthy” or “professional”).

Adult content is also heavily moderated per the official community guidelines. X allows it provided there are adequate labels, while the other platforms restrict any content depicting nudity or sexual activity that isn’t for educational purposes.

YouTube is the only one to impose a blanket prohibition on gory or distressing materials. The other platforms allow such content but might add warnings for users.

[Chart: Policy strictness across platforms, ranked from least (1) to most (5) strict across 6 categories]

All platforms have a zero-tolerance policy for content relating to child exploitation. Other types of potentially unlawful content — or those that threaten people’s lives or safety — are also restricted with varying levels of strictness. Meta allows discussions of crime for awareness or news but prohibits advocating for or coordinating harm.

Other official metrics for restriction include the following:

[Table: Platforms' official community guidelines regarding free speech vs. fact-checking, news and education, and privacy and security]

What Gets Censored the Most?

Overall, major platforms’ community and safety guidelines are generally strict and clear regarding what’s allowed or not. However, what content moderation looks like in practice may be very different.

We looked at censorship patterns for videos on major social media platforms, including Instagram Reels, TikTok, Facebook Reels, YouTube Shorts, and X.

The dataset considered a wide variety of videos, ranging from entertainment and comedy to news, opinion, and true crime. Across the board, the types of content we observed to be most commonly censored include:

  • Profanity: Curse words were censored via audio muting, bleeping, or subtitle redaction.
  • Explicit terms: Words pertaining to sexual activity or self-harm were omitted or replaced with alternative spellings.
  • Violence and conflict: References to weapons, genocide, geopolitical conflicts, or historical violence resulted in muted audio, altered captions, or warning notices, especially on TikTok and Instagram.
  • Sexual abuse: Content related to human trafficking and sexual abuse had significant censorship, often requiring users to alter spellings (e.g., “s3x abuse” or “trffcked”).
  • Racial slurs: Some instances of censored racial slurs were found in rap music videos on TikTok and X.

[Charts: Types of content censored and censorship methods observed across platforms]

Instagram seems to heavily censor explicit language, weapons, and sexual content, mostly through muting and subtitle redaction. Content depicting war, conflict, graphic deaths and injuries, or other potentially distressing material often requires users to click through a “graphic content” warning before they can view the image or video.

Facebook primarily censors profanity and explicit terms through audio bleeping and subtitle removal. However, some news-related posts are able to retain full details.

On the other hand, TikTok uses audio censorship and alters captions. As such, many creators regularly use coded language when discussing sensitive topics. YouTube also employs similar filters, muting audio or blurring visuals extensively to hide profanity and explicit words or graphics. However, it still allows offensive words in some contexts (educational, scientific, etc.).

X combines a mix of redactions, visual blurring, and muted audio. Profanity and graphic violence are sometimes left uncensored, but sensitive content will typically get flagged or blurred, especially once reported by users.

| Censorship Method | Platforms Using It | Description/Example |
|---|---|---|
| Muted or bleeped audio | Instagram, TikTok, Facebook, YouTube, X | Profanity, explicit terms, and violence-related speech altered or omitted |
| Redacted or censored subtitles | Instagram, TikTok, Facebook, X | Sensitive words (e.g., “n*****,” “fu*k,” and “traff*cked”) altered or omitted |
| Blurred video or images | Instagram, Facebook, X | Sensitive content (e.g., death and graphic injuries) blurred and labeled with a warning |

News and Information Accounts

Our study confirmed that news outlets and credible informational accounts are sometimes subject to different moderation standards.

Posts on Instagram, YouTube, and X (from accounts like CNN or BBC) discussing war or political violence were only blurred and presented with an initial viewing warning, but they were not muted or altered in any way. Meanwhile, user-generated content discussing similar topics faced audio censorship.

On the other hand, comedic and entertainment posts still experienced strict regulations on profanity, even on news outlets. This suggests that humor and artistic contexts likely don’t exempt content from moderation, regardless of the type of account or creator.

The Coded Language Workaround

A widespread workaround for censorship is the use of coded language to bypass automatic moderation. Below are some of the most common ones we observed:

  • “Fuck” → “fk,” “f@ck,” “fkin,” or a string of 4 special characters
  • “Ass” → “a$$,” “a**,” or “ahh”
  • “Gun” → “pew pew” or a hand gesture in lieu of saying the word
  • “Genocide” → “g*nocide”
  • “Sex” → “s3x,” “seggs,” or “s3ggs”
  • “Trafficking” → “tr@fficking,” or “trffcked”
  • “Kill” → “k-word”
  • “Dead” → “unalive”
  • “Suicide” → “s-word,” or “s**cide”
  • “Porn” → “p0rn,” “corn,” or corn emoji
  • “Lesbian” → “le$bian” or “le dollar bean”
  • “Rape” → “r@pe,” “grape,” or grape emoji

This is the paradox of modern content moderation: how effective are “strict” guidelines when certain types of accounts are occasionally exempt from them and other users can exploit simple loopholes?

Because coded words are widely and easily understood, AI-based censorship appears mainly to filter out direct keyword violations rather than stop or remove sensitive discussions altogether.
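
To make that concrete, here is a minimal sketch of a keyword-only filter and how trivially the coded spellings listed above slip past it unless extra normalization is added. This is not any platform’s real system; the banned-word list and substitution map are assumptions for the example.

```python
# Minimal sketch: naive keyword filtering vs. a filter that undoes simple substitutions.
import re

BANNED = {"sex", "gun", "kill", "suicide", "trafficking"}  # illustrative list only

# A few common character substitutions seen in coded spellings.
SUBSTITUTIONS = str.maketrans({"3": "e", "@": "a", "0": "o", "$": "s", "1": "i"})

def naive_filter(text: str) -> bool:
    """Flag text only if a banned word appears verbatim."""
    tokens = re.findall(r"[a-z@$0-9]+", text.lower())
    return any(tok in BANNED for tok in tokens)

def normalizing_filter(text: str) -> bool:
    """Flag text after undoing simple character substitutions."""
    normalized = text.lower().translate(SUBSTITUTIONS)
    return any(tok in BANNED for tok in re.findall(r"[a-z]+", normalized))

posts = [
    "talking about sex education",       # caught by both filters
    "talking about s3x education",       # evades the naive filter only
    "they were tr@fficking victims",     # evades the naive filter only
    "the s-word hotline number is ...",  # evades both: no lexical match at all
]
for post in posts:
    print(f"{post!r:40} naive={naive_filter(post)} normalized={normalizing_filter(post)}")
```

Even the normalizing version misses euphemisms like “unalive” or “le dollar bean,” which is why keyword-based moderation tends to push sensitive conversations into code rather than stop them.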

Is Social Media Moderation Just Security Theater?

Overall, it’s clear that platform censorship for content moderation is enforced inconsistently.

Because our researchers are themselves subject to the algorithmic biases of the platforms tested, and because we’re unlikely to be able to interact with shadowbanned accounts, we can’t fully quantify or qualify the extent of the restrictions some users suffer for posting content deemed inappropriate.

However, we know that many creators are able to circumvent or avoid automated moderation. Certain types of accounts receive preferential treatment in terms of restrictions. Moreover, with social media apps’ heavy reliance on AI moderation, users are able to evade restrictions with the slightest modifications or substitutions.

Are Platforms Capable of Implementing Strict Blanket Restrictions on “Inappropriate” Content?

Given how heavily most people rely on social media to engage with the world, trying to restrict sensitive conversations could be considered impractical or even ineffective. This is particularly true when context is ignored and restrictions focus solely on keywords, which is often the case for automated moderation.

Also, one might ponder whether content restrictions are primarily in place for liability protection instead of user safety — especially if platforms know about the limitations of AI-based moderation but continue to use it as their primary means of enforcing community guidelines.

Are Social Media Platforms Deliberately Performing Selective Moderation?

At the beginning of 2025, Meta made waves after it announced that it would be removing fact-checkers. Many suggested that this change was influenced by the seemingly new goodwill between its founder and CEO, Mark Zuckerberg, and United States President Donald Trump.

Double standards are also apparent on other platforms whose owners have clear political ties. Elon Musk, a prominent supporter and backer of Trump, has been reported to spread misinformation about government spending — posting or reposting false claims on X, the platform he owns.

This is despite the platform’s guidelines clearly prohibiting “media that may result in widespread confusion on public issues, impact public safety, or cause serious harm.”

Given the seemingly one-sided implementation of policies on different social media sites, we believe individuals and organizations must practice careful scrutiny when consuming media or information on these platforms.

Community guidelines aren’t fail-safes for ensuring safe, uplifting, and constructive spaces online. We believe that what AI algorithms or fact-checkers consider safe shouldn’t be seen as the standard or universal truth. That is, not all restricted posts are automatically “harmful,” the same way not all retained posts are automatically true or reliable.

Ultimately, the goal of this study is to help digital marketers, social media professionals, journalists, and the general public learn more about the evolving mechanics of online expression. With the insights gathered from this research, we hope to spark conversation about the effectiveness and fairness of content moderation in the digital space.

[…]

Source: Content Moderation or Security Theater? How Social Media Platforms Really Enforce Community Guidelines

1 Million of French Retailer Boulanger's Customers Exposed Online for Free

In a recent discovery, SafetyDetectives’ Cybersecurity Team stumbled upon a clear web forum post in which a threat actor publicized a database allegedly belonging to Boulanger Electroménager & Multimédia, purportedly exposing 5 million of its customers.

What is Boulanger Electroménager & Multimédia?

Boulanger Electroménager & Multimédia is a French company that specializes in the sale of household appliances and multimedia products.

Founded in 1954, according to their website, Boulanger has physical stores and delivers its products to clients across France. The company also offers an app, which has over 1 million downloads on the Google Play Store and Apple’s App Store.

Where Was The Data Found?

The data was found in a forum post on the clear web. This well-known forum operates message boards dedicated to database downloads, leaks, cracks, and more.

What Was Leaked?

The author of the post included two links to the unparsed and clean datasets, which purportedly belong to Boulanger. They claim the unparsed dataset consists of a 16GB .JSON file with 27,561,591 records, whereas the clean dataset comprises a 500MB .CSV file with 5 million records.

Links to both datasets were hidden and set to be shown after giving a like or leaving a comment on the post. As a result, the data was set to be unlocked for free by anyone with an account on the forum who was willing to simply interact with the post.

Our Cybersecurity Team reviewed part of the datasets to assess their authenticity, and we can confirm that the data appears to be legitimate. After running a comparative analysis, it appears these datasets correspond to the data purportedly stolen in the 2024 cyber incident.

Back in September 2024, Boulanger was one of the targets of a ransomware attack that also affected other retailers, such as Truffaut and Cultura. A threat author with the nickname “horrormar44” claimed responsibility for the breach.

At the time, the data was offered on a different well-known clear web forum — which is currently offline — at a price of €2,000. Although there allegedly were some potential buyers, it is unclear if the sale was actually finalized. In any case, it seems the data has resurfaced now as free to download.

While reviewing the data, we found that the clean dataset contains just over 1 million rows, one customer per row, including some duplicates. While that’s still a considerable number of customers, it’s far smaller than the 5 million claimed by the author of the post.
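
For illustration only, here is a minimal sketch of how one might tally rows and unique customers in a dump of roughly that size. The file name and column name are assumptions, not the leaked file’s real schema, and deduplicating on email is just one plausible choice of key.

```python
# Minimal sketch: count total rows vs. unique customers (by email) in a large CSV,
# reading in chunks to keep memory use modest. File/column names are hypothetical.
import pandas as pd

unique_emails = set()
total_rows = 0
for chunk in pd.read_csv("boulanger_clean.csv", usecols=["email"], chunksize=100_000):
    total_rows += len(chunk)
    unique_emails.update(chunk["email"].dropna().str.lower())

print(f"total rows: {total_rows:,}")
print(f"unique customers (by email): {len(unique_emails):,}")
```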

The sensitive information allegedly belonging to Boulanger’s customers included:

  • Name
  • Surname
  • Full physical address
  • Email address
  • Phone number

[…]

Source: 27 Million Records from French Boulanger’s Customers Allegedly Exposed Online