Scientists Identify New Mutation That Enables Three-Hour Sleepers

Researchers have discovered a mutation in the SIK3 gene that enables some people to function normally on just three to six hours of sleep. The finding, published this week in PNAS, adds to a growing list of genetic variants linked to naturally short sleepers.

When University of California, San Francisco scientists introduced the mutation to mice, the animals required 31 minutes less sleep daily. The modified enzyme showed highest activity in brain synapses, suggesting it might support brain homeostasis — the resetting process thought to occur during sleep.

“These people, all these functions our bodies are doing while we are sleeping, they can just perform at a higher level than we can,” said Ying-Hui Fu, the study’s co-author. This marks the fifth mutation across four genes identified in naturally short sleepers. Fu’s team hopes these discoveries could eventually lead to treatments for sleep disorders by revealing how sleep regulation functions in humans.

Source: Scientists Identify New Mutation That Enables Three-Hour Sleepers | Slashdot

VMware perpetual license holders receive cease-and-desist letters from Broadcom

Broadcom has been sending cease-and-desist letters to owners of VMware perpetual licenses with expired support contracts, Ars Technica has confirmed.

Following its November 2023 acquisition of VMware, Broadcom ended VMware perpetual license sales. Users with perpetual licenses can still use the software they bought, but they are unable to renew support services unless they had a pre-existing contract enabling them to do so. The controversial move aims to push VMware users to buy subscriptions to bundles of VMware products whose associated costs have increased by 300 percent or, in some cases, more.

Some customers have opted to continue using VMware unsupported, often as they research alternatives, such as VMware rivals or devirtualization.

Over the past weeks, some users running VMware unsupported have reported receiving cease-and-desist letters from Broadcom informing them that their contract with VMware, and thus their right to receive support services, has expired. The letter [PDF], reviewed by Ars Technica and signed by Broadcom managing director Michael Brown, tells users that they are to stop using any maintenance releases/updates, minor releases, major releases/upgrades, extensions, enhancements, patches, bug fixes, or security patches, save for zero-day security patches, issued since their support contract ended.

The letter tells users that the implementation of any such updates “past the Expiration Date must be immediately removed/deinstalled,” adding:

Any such use of Support past the Expiration Date constitutes a material breach of the Agreement with VMware and an infringement of VMware’s intellectual property rights, potentially resulting in claims for enhanced damages and attorneys’ fees.

[…]

The cease-and-desist letters also tell recipients that they could be subject to auditing.

Failure to comply with [post-expiration reporting] requirements may result in a breach of the Agreement by Customer[,] and VMware may exercise its right to audit Customer as well as any other available contractual or legal remedy.

[…]

Since Broadcom ended VMware’s perpetual licenses and increased pricing, numerous users and channel partners, especially small-to-medium-sized companies, have had to reduce or end business with VMware. Most of Members IT Group’s VMware customer base is now running VMware unsupported.

[…]

Source: VMware perpetual license holders receive cease-and-desist letters from Broadcom – Ars Technica

Hackers Manage To Take Control of Nissan Leaf’s Steering Remotely

Connected cars are great, as they let you communicate with other systems and devices via the internet, but connectivity opens the door to hacking. As it turns out, hacking a Nissan Leaf isn’t nearly as difficult as it might sound if you’ve got the right tools and the right knowledge.

Researchers from Budapest-based PCAutomotive traveled to Black Hat Asia 2025 to demonstrate how they managed to hack into a 2020 Nissan Leaf. Luckily, they had good intentions—they simply wanted to show that it could be done. Someone with less-than-good intentions could have caused a great deal of damage with the same tools. Most of the parts used to hack into the car were sourced from eBay or a junkyard.

The first part of the project involved building a working test bench around a Leaf touchscreen and the EV’s digital instrument cluster. They then bypassed the anti-theft safeguards with a script written in Python and hacked into the system. The steps taken to break in were detailed in a presentation. They look complicated if you don’t know what you’re dealing with and have no programming experience, but someone well-versed in programming shouldn’t find the process terribly daunting.
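The researchers’ actual exploit code isn’t public, but attacks like this ultimately come down to crafting and injecting frames onto the car’s CAN bus from ordinary scripting tools. A minimal, purely illustrative sketch of packing a frame in the layout Linux SocketCAN uses (the arbitration ID and payload below are invented for illustration, not real Leaf values):

```python
import struct

def build_can_frame(arbitration_id: int, data: bytes) -> bytes:
    """Pack a classic CAN frame into the 16-byte layout of Linux
    SocketCAN's struct can_frame: 32-bit ID, 8-bit data length,
    3 padding bytes, then up to 8 data bytes."""
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    return struct.pack("<IB3x8s", arbitration_id, len(data), data.ljust(8, b"\x00"))

# Hypothetical frame -- ID 0x2F1 and the payload are made up.
frame = build_can_frame(0x2F1, b"\x01\x00")
```

In practice an attacker would hand frames like this to a library such as python-can over a commodity USB-to-CAN adapter; the point is only that nothing exotic is required beyond the right bus access and message knowledge.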

When everything was set up, it was time to launch an attack. One of the researchers connected to the Leaf remotely via a laptop while two others were riding in it. The first step was pretty straightforward: The man with the laptop tracked the Leaf’s movements via GPS. He then recorded the conversation the passengers were having inside the car, downloaded it to his laptop, and played it in the car via the speakers.

Next, things got creepier. Using the same laptop, the researcher sounded the horn, folded the door mirrors, turned on the wipers, and even yanked the steering wheel. He was able to perform these tasks even when the car was moving. The team identified a list of 10 vulnerabilities that allowed it to access the Leaf’s infotainment system and notified Nissan. The company hasn’t responded to the video as of this writing, however.

Source: Hackers Manage To Take Control of Nissan Leaf’s Steering Remotely

Contemplating art’s beauty found to boost abstract and ‘big picture’ thinking

[…] a new study from the University of Cambridge suggests that stopping to contemplate the beauty of artistic objects in a gallery or museum boosts our ability to think in abstract ways and consider the “bigger picture” when it comes to our lives.

Researchers say the findings suggest that engaging with artistic beauty helps us escape the “mental trappings of daily life,” such as current anxieties and to-do lists, and induces “psychological distancing”: the process of zooming out on your thoughts to gain clarity.

[…]

Researchers found that study participants who focused on the beauty of objects in an exhibition of ceramics were more likely to experience elevated psychological states enabling them to think “beyond the here and now,” and more likely to report feeling enlightened, moved, or transformed.

This was compared to participants who were simply asked to look intently at the artistic objects to match them with a series of line drawings. The findings are published in the journal Empirical Studies of the Arts.

[…]

“Our research indicates that engaging with the beauty of art can enhance abstract thinking and promote a different mindset to our everyday patterns of thought, shifting us into a more expansive state of mind.”

“This is known as psychological distancing, when one snaps out of the mental trappings of daily life and focuses more on the overall picture.”

[…]

Participants were randomly split into two groups: the ‘beauty’ group was asked to actively consider and then rate the beauty of each ceramic object they viewed, while the second group just matched a line drawing of the object with the artwork itself.

All participants were then tested on how they process information, and if it’s in a more practical or abstract way. For example, does ‘writing a letter’ mean literally putting pen to paper or sharing your thoughts? Is ‘voting’ marking a ballot or influencing an election? Is ‘locking a door’ inserting a key or securing a house?

“These tests are designed to gauge whether a person is thinking in an immediate, procedural way, as we often do in our day-to-day lives, or is attuned to the deeper meaning and bigger picture of the actions they take,” said Dr. Elzė Sigutė Mikalonytė, lead author of the study and a researcher at Cambridge’s Department of Psychology.

Across all participants, those in the beauty group scored almost 14% higher on average than the control group in abstract thinking. Although participants had been told the study was about cognitive processes, they were also asked about their interests; around half said they had an artistic hobby.

Among those, the effect was greater: those with an artistic hobby in the ‘beauty’ group scored over 25% higher on average for abstract thinking than those with an artistic hobby in the control group.

[…]

Emotional states of participants were also measured by asking about their feelings while completing the gallery task. Across all participants, those in the beauty group reported an average of 23% higher levels of “transformative and self-transcendent feelings”—such as feeling moved, enlightened and inspired—than the control group.

“Our findings offer empirical support for a long-standing philosophical idea that beauty appreciation can help people detach from their immediate practical concerns and adopt a broader, more abstract perspective,” said Mikalonytė.

Importantly, however, the beauty group did not report feeling any happier than the control group, suggesting that it was the engagement with beauty that influenced abstract thinking, rather than any overall positivity from the experience.

The latest study is part of a wider project led by Schnall exploring the effects of aesthetic experiences on cognition.

[…]

Source: Contemplating art’s beauty found to boost abstract and ‘big picture’ thinking

Apple Hit with Class-Action Lawsuit for App Store Injunction Violation after Judge Rules Apple Execs Lied and Willfully Ignored Injunction – join here

[…] The new lawsuit was filed May 2, 2025, following news that a federal judge found the tech giant in contempt of court for violating a 2021 antitrust injunction, which required Apple to permit its app developers to sell subscriptions and other in-app products directly to their customers using links within their apps. Without the injunction in place, Apple charges app developers uniform transaction fees (defaulting at 30%, and 15% under some programs). The court found that Apple implemented a scheme to violate the injunction and prevent developers from directing customers to their own websites and payment platforms.
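To see what is at stake for a developer, the commission arithmetic is straightforward; the $9.99 price below is a hypothetical subscription, with the 30% and 15% rates taken from the article:

```python
def developer_net(price: float, commission_rate: float) -> float:
    """Developer's take-home on an in-app sale after the platform's cut."""
    return price * (1 - commission_rate)

# Hypothetical $9.99 subscription under the default 30% rate
# and the reduced 15% rate available under some programs.
standard = developer_net(9.99, 0.30)  # about $6.99 reaches the developer
reduced = developer_net(9.99, 0.15)   # about $8.49 reaches the developer
```

A link-out sale processed on the developer’s own site avoids this cut entirely (minus ordinary payment-processor fees), which is why the injunction’s link provisions matter financially.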

“It appears as though Apple has been caught red-handed blatantly seeking to undercut the law,” said Steve Berman, Hagens Berman managing partner and co-founder. “We believe app developers deserve a fair market to promote and sell their products, and the world’s largest corporation doesn’t get to bully them out of this billion-dollar revenue stream.”

If you sold an in-app digital product (including subscriptions) through Apple’s App Store after Jan. 16, 2024, find out your rights as an iOS app developer.

[…]

The court ultimately held that Apple willfully violated the injunction to protect its revenues, and then “reverse engineered justification[s] to proffer to the Court” often with “lies on the witness stand,”

[…]

The lawsuit’s named plaintiff is Pure Sweat Basketball Inc., a corporation offering an app used by players across the country to train and improve their basketball skills. Had Apple complied with the injunction, as required, Pure Sweat would have been able to sell subscriptions to its app directly to its customers, using “link-out” buttons directing customers to Pure Sweat’s own website.

As a result of Apple’s misconduct, attorneys estimate that potentially more than 100,000 similarly situated app developers were prevented from selling in-app products (including subscriptions) directly to their customers, and were forced to pay Apple commissions on in-app sales that Apple was not entitled to receive.

Find out more about the class-action lawsuit against Apple on behalf of iOS app developers.

[…]

Source: Apple Hit with Class-Action Lawsuit for App Store Injunction Violation by Same Law Firm That Secured $100M iOS Developer Win | Hagens Berman

Minecraft ended virtual reality support – no reason given

Minecraft is no longer (officially) available on virtual and mixed reality platforms. The change was confirmed in today’s patch notes for the game’s Bedrock edition following an announcement from developer Mojang in October. Those fall patch notes suggested that the platforms would be removed in March, so players who favored VR wound up getting a few extra weeks to fully immerse themselves in their blocky worlds.

Removing entire platforms isn’t a choice game devs make lightly, especially when Minecraft’s player base still numbers in the hundreds of millions at any given time. It seems unlikely that Mojang would take away virtual and mixed reality support unless doing so would cause little serious disruption for its many fans. There are still plenty of critically acclaimed games that make VR ownership worthwhile (Beat Saber, anyone?), but a title as major as Minecraft abandoning the hardware isn’t a great look.

Source: Minecraft ended virtual reality support today

Scientists rewrite 100-year-old textbooks on how cells divide

Scientists from The University of Manchester have changed our understanding of how cells in living organisms divide, which could revise what students are taught at school.

In a Wellcome funded study published today (01/05/25) in Science – one of the world’s leading scientific journals – the researchers challenge conventional wisdom taught in schools for over 100 years.

Students are currently taught that during cell division, a ‘parent’ cell will become spherical before splitting into two ‘daughter’ cells of equal size and shape.

However, the study reveals that cell rounding is not a universal feature of cell division and is not how it often works in the body.

Dividing cells, they show, often don’t round up into sphere-like shapes. This lack of rounding breaks the symmetry of division to generate two daughter cells that differ from each other in both size and function, known as asymmetric division.

Asymmetric divisions are an important way that the different types of cells in the body are generated, to make different tissues and organs.

Until now, asymmetric cell division has predominantly been associated with highly specialised cells, known as stem cells.

The scientists found that it is the shape of a parent cell, before it even divides, that can determine whether it rounds up during division and how symmetric, or not, its daughter cells are going to be.

Cells which are shorter and wider in shape tend to round up and divide into two cells which are similar to each other. However, cells which are longer and thinner don’t round up and divide asymmetrically, so that one daughter is different to the other.

The findings could have far reaching implications on our understanding of the role of cell division in disease. For example, in the context of cancer cells, this type of ‘non-round’, asymmetric division could generate different cell behaviours known to promote cancer progression through metastasis.

[…]

Source: Scientists rewrite textbooks on how cells divide

Judge: Apple Lied In Fortnite Case, chose to not comply with court order, must immediately allow external payments without a cut

Epic Games v. Apple judge Yvonne Gonzalez Rogers has ruled that, effective immediately, Apple can no longer take a cut from purchases made outside apps and has blocked the tech giant from restricting how developers can point people to third-party payment options. The judge was also not happy that Apple has seemingly not complied with a previous court order and has referred the case to the U.S. Attorney’s Office for possible contempt charges. Apple is already planning to appeal the ruling.

This is the latest development in the Epic v Apple court case that started back in 2020 after Epic added its own payment option to Fortnite on iOS and Apple pulled the game as a result. The Fortnite maker’s case against Apple was focused primarily on the large fees the tech giant took from all in-app purchases and its strict restrictions against allowing other app stores and third-party options on iOS devices.

In 2021 the judge sided with Apple on most points, but declared the company needed to allow app makers to use third-party payment systems that could avoid Apple’s cut. In 2023, after a series of appeals, Apple declared a “resounding victory” over Epic, though it was still forced by the court to allow third-party payment options and to not take a cut of outside app purchases. Epic alleges that Apple never complied with that order. Now Apple finds itself in a lot of trouble with judge Yvonne Gonzalez Rogers.

“That [Apple] thought this Court would tolerate such insubordination was a gross miscalculation,” wrote the judge in a ruling filed on April 30 in California. “Apple willfully chose not to comply with this Court’s Injunction. It did so with the express intent to create new anticompetitive barriers which would, by design and in effect, maintain a valued revenue stream; a revenue stream previously found to be anticompetitive.”

Elsewhere in the filing, the judge says that an Apple executive lied under oath when talking about forcing devs to pay a 27 percent fee for outside app purchases and wrote that Apple CEO Tim Cook “chose poorly” when listening to execs at the company who convinced him to ignore the injunction.

“Vice-President of Finance, Alex Roman, outright lied under oath. Internally, Phillip Schiller had advocated that Apple comply with the Injunction, but Tim Cook ignored Schiller and instead allowed Chief Financial Officer Luca Maestri and his finance team to convince him otherwise. Cook chose poorly,” wrote the judge. In the filing the judge also suggested that Apple’s actions might constitute contempt charges and has referred the case to the U.S. Attorney’s office.

As explained in the filing, Apple must now “immediately” comply with the court’s orders to allow developers to include third-party payment options, to not take a cut of those purchases, and to not block or hinder devs from including these outside payment methods through various means and UI messages.

[…]

Source: Judge: Apple Lied In Fortnite Case And Just Blew App Store Open

Huge molecular cloud 10x the size of the moon detected right next to Earth

A longstanding prediction in interstellar theory posits that significant quantities of molecular gas, crucial for star formation, may be undetected due to being ‘dark’ in commonly used molecular gas tracers, such as carbon monoxide. We report the discovery of Eos, a dark molecular cloud located just 94 pc from the Sun. This cloud is identified using H2 far-ultraviolet fluorescent line emission, which traces molecular gas at the boundary layers of star-forming and supernova remnant regions. The cloud edge is outlined along the high-latitude side of the North Polar Spur.

[…]

Source: A nearby dark molecular cloud in the Local Bubble revealed via H2 fluorescence | Nature Astronomy

Messaging App Used by Mike Waltz, Trump Deportation Airline GlobalX Both Hacked in Separate Breaches

TeleMessage, a communications app used by former Trump national security adviser Mike Waltz, has suspended services after a reported hack exposed some user messages. The breach follows controversy over Waltz’s use of the app to coordinate military updates, including accidentally adding a journalist to a sensitive Signal group chat. From the report: In an email, Portland, Oregon-based Smarsh, which runs the TeleMessage app, said it was “investigating a potential security incident” and was suspending all its services “out of an abundance of caution.” A Reuters photograph showed Waltz using TeleMessage, an unofficial version of the popular encrypted messaging app Signal, on his phone during a cabinet meeting on Wednesday.

A separate report from 404 Media says hackers have also targeted GlobalX Air — one of the main airlines the Trump administration is using as part of its deportation efforts — and claim to have stolen flight records and passenger manifests for all its flights, including those for deportation. From the report: The data, which the hackers contacted 404 Media and other journalists about unprompted, could provide granular insight into who exactly has been deported on GlobalX flights, when, and to where, with GlobalX being the charter company that facilitated the deportation of hundreds of Venezuelans to El Salvador. “Anonymous has decided to enforce the Judge’s order since you and your sycophant staff ignore lawful orders that go against your fascist plans,” a defacement message posted to GlobalX’s website reads. Anonymous, well-known for its use of the Guy Fawkes mask, is an umbrella some hackers operate under when performing what they see as hacktivism.

Source: Messaging App Used by Mike Waltz, Trump Deportation Airline GlobalX Both Hacked in Separate Breaches | Slashdot

Dating app Raw exposed users’ location data and personal information

A security lapse at dating app Raw publicly exposed the personal data and private location data of its users, TechCrunch has found.

The exposed data included users’ display names, dates of birth, dating and sexual preferences associated with the Raw app, as well as users’ locations. Some of the location data included coordinates that were specific enough to locate Raw app users with street-level accuracy.
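The “street-level accuracy” claim follows directly from coordinate precision: each decimal place in a latitude value narrows the position by a factor of ten. A rough rule of thumb, using the fact that one degree of latitude spans about 111,320 meters:

```python
def latitude_precision_m(decimal_places: int) -> float:
    """Approximate positional uncertainty, in meters, of a latitude
    value reported to the given number of decimal places."""
    # One degree of latitude is roughly 111,320 meters anywhere on
    # Earth (longitude degrees shrink toward the poles; latitude
    # degrees don't), so each extra decimal divides the error by ten.
    return 111_320 / 10 ** decimal_places

# Five decimal places pins a user to roughly a meter -- well within
# "street-level accuracy."
precision = latitude_precision_m(5)
```

So any API that returns user coordinates with five or more decimal places is effectively handing out home addresses, which is why leaking raw location fields is so serious.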

Raw, which launched in 2023, is a dating app that claims to offer more genuine interactions with others in part by asking users to upload daily selfie photos. The company does not disclose how many users it has, but its app listing on the Google Play Store notes more than 500,000 Android downloads to date.

News of the security lapse comes in the same week that the startup announced a hardware extension of its dating app, the Raw Ring, an unreleased wearable device that it claims will allow app users to track their partner’s heart rate and other sensor data to receive AI-generated insights, ostensibly to detect infidelity.

Notwithstanding the moral and ethical issues of tracking romantic partners and the risks of emotional surveillance, Raw claims on its website and in its privacy policy that its app, and its unreleased device, both use end-to-end encryption, a security feature that prevents anyone other than the user — including the company — from accessing the data.

When we tried the app this week, which included an analysis of the app’s network traffic, TechCrunch found no evidence that the app uses end-to-end encryption. Instead, we found that the app was publicly spilling data about its users to anyone with a web browser.

[…]

Source: Dating app Raw exposed users’ location data and personal information | TechCrunch

The UN Ditches Google for Form Submissions, Opts for Open Source ‘CryptPad’ Instead

Did you know there’s an initiative to drive Open Source adoption both within the United Nations — and globally? Launched in March, it’s the work of the Digital Technology Network (under the UN’s chief executive board) which “works to advance open source technologies throughout UN agencies,” promoting “collaboration and scalable solutions to support the UN’s digital transformation.” Fun fact: The first group to endorse the initiative’s principles was the Open Source Initiative

“The Open Source Initiative applauds the United Nations for recognizing the growing importance of Open Source in solving global challenges and building sustainable solutions, and we are honored to be the first to endorse the UN Open Source Principles,” said Stefano Maffulli, executive director of OSI.
But that’s just the beginning, writes It’s FOSS News: As part of the UN Open Source Principles initiative, the UN has invited other organizations to support and officially endorse these principles. To collect responses, they are using CryptPad instead of Google Forms… If you don’t know about CryptPad, it is a privacy-focused, open source online collaboration office suite that encrypts all of its content, doesn’t log IP addresses, and supports a wide range of collaborative documents and tools for people to use.

While this happened back in late March, we thought it would be a good idea to let people know that a well-known global governing body like the UN was slowly moving towards integrating open source tech into their organization… I sincerely hope the UN continues its push away from proprietary Big Tech solutions in favor of more open, privacy-respecting alternatives, integrating more of their workflow with such tools.

16 groups have already endorsed the UN Open Source Principles (including the GNOME Foundation, the Linux Foundation, and the Eclipse Foundation).

Here are the eight UN Open Source Principles:

  1. Open by default: Making Open Source the standard approach for projects
  2. Contribute back: Encouraging active participation in the Open Source ecosystem
  3. Secure by design: Making security a priority in all software projects
  4. Foster inclusive participation and community building: Enabling and facilitating diverse and inclusive contributions
  5. Design for reusability: Designing projects to be interoperable across various platforms and ecosystems
  6. Provide documentation: Providing thorough documentation for end-users, integrators and developers
  7. RISE (recognize, incentivize, support and empower): Empowering individuals and communities to actively participate
  8. Sustain and scale: Supporting the development of solutions that meet the evolving needs of the UN system and beyond.

Source: The UN Ditches Google for Form Submissions, Opts for Open Source ‘CryptPad’ Instead

Army Will Seek Right to Repair Clauses in All Its Contracts

A new memo from Secretary of Defense Pete Hegseth is calling on defense contractors to grant the Army the right to repair. The Wednesday memo is a document about “Army Transformation and Acquisition Reform” that is largely vague but highlights the very real problems with IP constraints that have made it harder for the military to repair damaged equipment.

Hegseth made this clear at the bottom of the memo in a subsection about reform and budget optimization. “The Secretary of the Army shall…identify and propose contract modifications for right to repair provisions where intellectual property constraints limit the Army’s ability to conduct maintenance and access the appropriate maintenance tools, software, and technical data—while preserving the intellectual capital of American industry,” it says. “Seek to include right to repair provisions in all existing contracts and also ensure these provisions are included in all new contracts.”

[…]

appliance manufacturers and tractor companies have lobbied against bills that would make it easier for the military to repair its equipment.

This has been a huge problem for decades. In the 1990s, the Air Force bought Northrop Grumman’s B-2 Stealth Bombers for about $2 billion each. When the Air Force signed the contract for the machines, it paid $2.6 billion up front just for spare parts. Now, for some reason, Northrop Grumman isn’t able to supply replacement parts anymore. To fix the aging bombers, the military has had to reverse engineer parts and do repairs themselves.

Similarly, Boeing screwed over the DoD on replacement parts for the C-17 military transport aircraft to the tune of at least $1 million. The most egregious example was a common soap dispenser. “One of the 12 spare parts included a lavatory soap dispenser where the Air Force paid more than 80 times the commercially available cost or a 7,943 percent markup,” a Pentagon investigation found. Imagine if they’d just used a 3D printer to churn out the parts they needed.
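The “80 times the cost” and “7,943 percent markup” figures in the Pentagon finding describe the same overpayment; markup is just the price ratio minus one, expressed as a percentage. As a quick sanity check:

```python
def markup_percent(price_paid: float, commercial_price: float) -> float:
    """Markup over the commercial price, as a percentage."""
    return (price_paid / commercial_price - 1) * 100

# Paying about 80.43x the going rate works out to the reported
# 7,943% markup (the commercial price is normalized to $1 here).
markup = markup_percent(80.43, 1.00)
```

Put the other way, a 7,943 percent markup means the Air Force paid roughly 80x what anyone else would at a hardware store.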

[…]

Source: Army Will Seek Right to Repair Clauses in All Its Contracts

The Not-Pebble watch from Core Devices is something different in the smartwatch space

[…] The regular slate of smartwatches from the likes of Google, Samsung, and Apple hasn’t left us truly excited. Samsung and Apple are in a race to add as many health sensors as possible to the backs of their devices, and still the most we can hope for on the impending Apple Watch Series 11 is a slimmer body and slightly better display. Core Devices’ Core 2 Duo, in its current iteration, is practically the same device as a Pebble 2 but with a few tweaks. It includes a relatively small, 1.2-inch display and—get this—no touchscreen. You control it with buttons. The Core 2 Duo screen is black and white, while the $225 Core Time 2 has a 1.5-inch color touchscreen display and a heart rate monitor.

In a video posted Thursday, Migicovsky offered some insight into the smartwatch itself, plus more on what people in the U.S. can expect to pay for one due to Trump tariffs. He confirmed the Core 2 Duo is being made in China

[…]

There are upgrades on the way. Migicovsky said he hopes to integrate complications—aka those little widgets that tell you the time or offer app alerts—alongside deeper Beeper integration for an all-in-one chat app. The Pebble founder said he would also like to add some sort of AI companion to the smartwatch. He cited the app Bob.ai, which can offer quick answers to simple queries through Google’s Gemini AI model. The maker has already mentioned users could connect with ChatGPT via a built-in microphone, and the new smartwatches will have a speaker for ChatGPT to talk back.

The Core 2 Duo is supposed to retail for $150

[…]

Source: The Not-Pebble Watch Is a Sign We Crave Something Unique

I hope it works, and I hope they ship an e-ink display with huge battery life.

Iconic Gun-Makers Gave Sensitive Customer Information to Political Operatives in secret for decades

For years, America’s most iconic gun-makers turned over sensitive personal information on hundreds of thousands of customers to political operatives.

Those operatives, in turn, secretly employed the details to rally firearm owners to elect pro-gun politicians running for Congress and the White House, a ProPublica investigation has found.

The clandestine sharing of gun buyers’ identities — without their knowledge and consent — marked a significant departure for an industry that has long prided itself on thwarting efforts to track who owns firearms in America.

At least 10 gun industry businesses, including Glock, Smith & Wesson, Remington, Marlin and Mossberg, handed over names, addresses and other private data to the gun industry’s chief lobbying group, the National Shooting Sports Foundation. The NSSF then entered the gun owners’ details into what would become a massive database.

The data initially came from decades of warranty cards filled out by customers and returned to gun manufacturers for rebates and repair or replacement programs.

A ProPublica review of dozens of warranty cards from the 1970s through today found that some promised customers their information would be kept strictly confidential. Others said some information could be shared with third parties for marketing and sales. None of the cards informed buyers their details would be used by lobbyists and consultants to win elections.

[…]

The undisclosed collection of intimate gun owner information is in sharp contrast with the NSSF’s public image.

[…]

For two decades, the group positioned itself as an unwavering watchdog of gun owner privacy. The organization has raged against government and corporate attempts to amass information on gun buyers. As recently as this year, the NSSF pushed for laws that would prohibit credit card companies from creating special codes for firearms dealers, claiming the codes could be used to create a registry of gun purchasers.

As a group, gun owners are fiercely protective about their personal information. Many have good reasons. Their ranks include police officers, judges, domestic violence victims and others who have faced serious threats of harm.

In a statement, the NSSF defended its data collection. Any suggestion of “unethical or illegal behavior is entirely unfounded,” the statement said, adding that “these activities are, and always have been, entirely legal and within the terms and conditions of any individual manufacturer, company, data broker, or other entity.”

The gun industry companies either did not respond to ProPublica or declined to comment, noting they are under different ownership today and could not find evidence that customer information was previously shared. One ammunition maker named in the NSSF documents as a source of data said it never gave the trade group or its vendors any “personal information.”

ProPublica established the existence of the secret program after reviewing tens of thousands of internal corporate and NSSF emails, reports, invoices and contracts. We also interviewed scores of former gun executives, NSSF employees, NRA lobbyists and political consultants in the U.S. and the United Kingdom.

The insider accounts and trove of records lay bare a multidecade effort to mobilize gun owners as a political force. Confidential information from gun customers was central to what NSSF called its voter education program. The initiative involved sending letters, postcards and later emails to persuade people to vote for the firearms industry’s preferred political candidates. Because privacy laws shield the names of firearm purchasers from public view, the data NSSF obtained gave it a unique ability to identify and contact large numbers of gun owners or shooting sports enthusiasts.

It also allowed the NSSF to figure out whether a gun buyer was a registered voter. Those who weren’t would be encouraged to register and cast their ballots for industry-supported politicians.

From 2000 to 2016, the organization poured more than $20 million into its voter education campaign, which was initially called Vote Your Sport and today is known as GunVote. The NSSF trumpeted the success of its electioneering in reports, claiming credit for putting both George W. Bush and Donald J. Trump in the White House and firearm-friendly lawmakers in the U.S. House and Senate.

In April 2016, a contractor on NSSF’s voter education project delivered a large cache of data to Cambridge Analytica

[…]

The data given to Cambridge included 20 years of gun owners’ warranty card information as well as a separate database of customers from Cabela’s, a sporting goods retailer with approximately 70 stores in the U.S. and Canada.

Cambridge combined the NSSF data with a wide array of sensitive particulars obtained from commercial data brokers. It included people’s income, their debts, their religion, where they filled prescriptions, their children’s ages and purchases they made for their kids. For women, it revealed intimate elements such as whether the underwear and other clothes they purchased were plus size or petite.

The information was used to create psychological profiles of gun owners and assign scores to behavioral traits, such as neuroticism and agreeableness. The profiles helped Cambridge tailor the NSSF’s political messages to voters based on their personalities.
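
As a purely hypothetical sketch of the targeting mechanics described above (all profiles, trait names, scores, and message variants below are invented for illustration, not drawn from the NSSF or Cambridge Analytica materials):

```python
# Toy illustration of personality-based message targeting: score behavioral
# traits, then pick the message variant matched to the dominant trait.
# All data and labels here are invented.

profiles = [
    {"name": "A", "neuroticism": 0.8, "agreeableness": 0.3},
    {"name": "B", "neuroticism": 0.2, "agreeableness": 0.9},
]

messages = {
    "neuroticism": "fear-framed appeal",
    "agreeableness": "community-framed appeal",
}

def pick_message(profile):
    """Select the message variant for a profile's highest-scoring trait."""
    trait = max(("neuroticism", "agreeableness"), key=lambda t: profile[t])
    return messages[trait]

assert pick_message(profiles[0]) == "fear-framed appeal"
assert pick_message(profiles[1]) == "community-framed appeal"
```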

[…]

As the body count from mass shootings at schools and elsewhere in the nation has climbed, those politicians have halted proposals to resurrect the assault weapons ban and enact other gun control measures, even those popular with voters, such as raising the minimum age to buy an assault rifle from 18 to 21.

In response to questions from ProPublica, the NSSF acknowledged it had used the customer information in 2016 for “creating a data model” of potentially sympathetic voters. But the group said the “existence and proven success of that model then obviated the need to continue data acquisition via private channels and today, NSSF uses only commercial-source data to which the data model is then applied.”

[…]

Source: Iconic Gun-Makers Gave Sensitive Customer Information to Political Operatives — ProPublica

Brain implant turns thought into speech

Marking a breakthrough in the field of brain-computer interfaces (BCIs), a team of researchers from UC Berkeley and UC San Francisco has unlocked a way to restore naturalistic speech for people with severe paralysis.

This work solves the long-standing challenge of latency in speech neuroprostheses, the time lag between when a subject attempts to speak and when sound is produced. Using recent advances in artificial intelligence-based modeling, the researchers developed a streaming method that synthesizes brain signals into audible speech in near-real time.
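
The latency difference between waiting for a full utterance and streaming per-window decoding can be sketched roughly as follows (this is not the study's actual model; every function name and number below is an invented stand-in):

```python
# Toy contrast between "batch" decoding (wait for the whole utterance)
# and "streaming" decoding (emit audio per window of neural data).

def decode_window(window):
    """Placeholder for an AI model mapping a neural-signal window to audio samples."""
    return [x * 0.5 for x in window]

def batch_decode(signal, window=80):
    # Latency ~ utterance length: no audio until decoding finishes.
    audio = []
    for i in range(0, len(signal), window):
        audio.extend(decode_window(signal[i:i + window]))
    return audio

def streaming_decode(signal, window=80):
    # Latency ~ one window: each chunk is yielded as soon as it is decoded.
    for i in range(0, len(signal), window):
        yield decode_window(signal[i:i + window])

signal = list(range(400))                     # fake neural samples
first_chunk = next(streaming_decode(signal))  # available after one window
assert len(first_chunk) == 80
```

Both paths yield the same audio overall; the streaming version simply makes the first chunk available after one window of data rather than after the full recording.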

As reported today in Nature Neuroscience, this technology represents a critical step toward enabling communication for people who have lost the ability to speak. […]

we found that we could decode neural data and, for the first time, enable near-synchronous voice streaming. The result is more naturalistic, fluent speech synthesis.

[…]

The researchers also showed that their approach can work well with a variety of other brain sensing interfaces, including microelectrode arrays (MEAs) in which electrodes penetrate the brain’s surface, or non-invasive recordings (sEMG) that use sensors on the face to measure muscle activity.

“By demonstrating accurate brain-to-voice synthesis on other silent-speech datasets, we showed that this technique is not limited to one specific type of device,” said Kaylo Littlejohn, Ph.D. student at UC Berkeley’s Department of Electrical Engineering and Computer Sciences and co-lead author of the study. “The same algorithm can be used across different modalities provided a good signal is there.”

[…]

the neuroprosthesis works by sampling neural data from the motor cortex, the part of the brain that controls speech production, then uses AI to decode brain function into speech.

“We are essentially intercepting signals where the thought is translated into articulation and in the middle of that motor control,” he said. “So what we’re decoding is after a thought has happened, after we’ve decided what to say, after we’ve decided what words to use and how to move our vocal-tract muscles.”

[…]

Source: Brain-to-voice neuroprosthesis restores naturalistic speech – Berkeley Engineering

Neurotech companies are selling your brain data, senators warn

Three Democratic senators are sounding the alarm over brain-computer interface (BCI) technologies’ ability to collect — and potentially sell — our neural data. In a letter to the Federal Trade Commission (FTC), Sens. Chuck Schumer (D-NY), Maria Cantwell (D-WA), and Ed Markey (D-MA) called for an investigation into neurotechnology companies’ handling of user data, and for tighter regulations on their data-sharing policies.

“Unlike other personal data, neural data — captured directly from the human brain — can reveal mental health conditions, emotional states, and cognitive patterns, even when anonymized,” the letter reads. “This information is not only deeply personal; it is also strategically sensitive.”

While the concept of neural technologies may conjure up images of brain implants like Elon Musk’s Neuralink, there are far less invasive — and less regulated — neurotech products on the market, including headsets that help people meditate, purportedly trigger lucid dreaming, and promise to help users with online dating by helping them swipe through apps “based on your instinctive reaction.” These consumer products gobble up insights about users’ neurological data — and since they aren’t categorized as medical devices, the companies behind them aren’t barred from sharing that data with third parties.

“Neural data is the most private, personal, and powerful information we have—and no company should be allowed to harvest it without transparency, ironclad consent, and strict guardrails. Yet companies are collecting it with vague policies and zero transparency,” Schumer told The Verge via email.

The letter cites a 2024 report by the Neurorights Foundation, which found that most neurotech companies not only have few safeguards on user data but also have the ability to share sensitive information with third parties. The report looked at the data policies of 30 consumer-facing BCI companies and found that all but one “appear to have access to” users’ neural data, “and provide no meaningful limitations to this access.” The Neurorights Foundation only surveyed companies whose products are available to consumers without the help of a medical professional; implants like those made by Neuralink weren’t among them.

The companies surveyed by the Neurorights Foundation make it difficult for users to opt out of having their neurological data shared with third parties. Just over half the companies mentioned in the report explicitly let consumers revoke consent for data processing, and only 14 of the 30 give users the ability to delete their data. In some instances, user rights aren’t universal — for example, some companies only let users in the European Union delete their data but don’t grant the same rights to users elsewhere in the world.

To safeguard against potential abuses, the senators are calling on the FTC to:

  • investigate whether neurotech companies are engaging in unfair or deceptive practices that violate the FTC Act
  • compel companies to report on data handling, commercial practices, and third-party access
  • clarify how existing privacy standards apply to neural data
  • enforce the Children’s Online Privacy Protection Act as it relates to BCIs
  • begin a rulemaking process to establish safeguards for neural data, and set limits on secondary uses like AI training and behavioral profiling
  • and ensure that both invasive and noninvasive neurotechnologies are subject to baseline disclosure and transparency standards, even when the data is anonymized

Though the senators’ letter calls out Neuralink by name, Musk’s brain implant tech is already subject to more regulations than other BCI technologies. Since Neuralink’s brain implant is considered a “medical” technology, it’s required to comply with the Health Insurance Portability and Accountability Act (HIPAA), which safeguards people’s medical data.

Stephen Damianos, the executive director of the Neurorights Foundation, said that HIPAA may not have entirely caught up to existing neurotechnologies, especially with regards to “informed consent” requirements.

“There are long-established and validated models for consent from the medical world, but I think there’s work to be done around understanding the extent to which informed consent is sufficient when it comes to neurotechnology,” Damianos told The Verge. “The analogy I like to give is, if you were going through my apartment, I would know what you would and wouldn’t find in my apartment, because I have a sense of what exactly is in there. But brain scans are overbroad, meaning they collect more data than what is required for the purpose of operating a device. It’s extremely hard — if not impossible — to communicate to a consumer or a patient exactly what can today and in the future be decoded from their neural data.”

Data collection becomes even trickier for “wellness” neurotechnology products, which don’t have to comply with HIPAA, even when they advertise themselves as helping with mental health conditions like depression and anxiety.

Damianos said there’s a “very hazy gray area” between medical devices and wellness devices.

“There’s this increasingly growing class of devices that are marketed for health and wellness as distinct from medical applications, but there can be a lot of overlap between those applications,” Damianos said. The dividing line is often whether a medical intermediary is required to help someone obtain a product, or whether they can “just go online, put in your credit card, and have it show up in a box a few days later.”

There are very few regulations on neurotechnologies advertised as being for “wellness.” In April 2024, Colorado passed the first-ever legislation protecting consumers’ neural data. The state updated its existing Consumer Protection Act, which protects users’ “sensitive data.” Under the updated legislation, “sensitive data” now includes “biological data” like genetic, biochemical, physiological, and neural information. And in September 2024, California amended its Consumer Privacy Act to protect neural data.

“We believe in the transformative potential of these technologies, and I think sometimes there’s a lot of doom and gloom about them,” Damianos told The Verge. “We want to get this moment right. We think it’s a really profound moment that has the potential to reshape what it means to be human. Enormous risks come from that, but we also believe in leveraging the potential to improve people’s lives.”

Source: Neurotech companies are selling your brain data, senators warn | The Verge

AI Helps Unravel a Cause of Alzheimer’s Disease and Identify a Therapeutic Candidate

[Image: Graphic showing how Alzheimer’s severity increases with PHGDH expression]

A new study found that a gene recently recognized as a biomarker for Alzheimer’s disease is actually a cause of it, due to its previously unknown secondary function. Researchers at the University of California San Diego used artificial intelligence to help both unravel this mystery of Alzheimer’s disease and discover a potential treatment that obstructs the gene’s moonlighting role.

[…]

Zhong and his team took a closer look at phosphoglycerate dehydrogenase (PHGDH), which they had previously discovered as a potential blood biomarker for early detection of Alzheimer’s disease. In a follow-up study, they later found that expression levels of the PHGDH gene directly correlated with changes in the brain in Alzheimer’s disease; in other words, the higher the levels of protein and RNA produced by the PHGDH gene, the more advanced the disease.

[…]

Using mice and human brain organoids, the researchers found that altering the amounts of PHGDH expression had consequential effects on Alzheimer’s disease: lower levels corresponded to less disease progression, whereas increasing the levels led to more disease advancement. Thus, the researchers established that PHGDH is indeed a causal gene for spontaneous Alzheimer’s disease.

In further support of that finding, the researchers determined—with the help of AI—that PHGDH plays a previously undiscovered role: it triggers a pathway that disrupts how cells in the brain turn genes on and off. And such a disturbance can cause issues, like the development of Alzheimer’s disease.

[…]

another Alzheimer’s project in his lab, which did not focus on PHGDH, changed all this. A year ago, that project revealed a hallmark of Alzheimer’s disease: a widespread imbalance in the brain in the process where cells control which genes are turned on and off to carry out their specific roles.

The researchers were curious if PHGDH had an unknown regulatory role in that process, and they turned to modern AI for help.

With AI, they could visualize the three-dimensional structure of the PHGDH protein. Within that structure, they discovered that the protein has a substructure that is very similar to a known DNA-binding domain in a class of known transcription factors. The similarity is solely in the structure and not in the protein sequence.
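
This kind of structural (rather than sequence) similarity is what rigid-body superposition methods quantify. A minimal sketch using the standard Kabsch algorithm, with made-up backbone coordinates (this is not the study's actual tooling, just the general idea of comparing shapes independently of sequence):

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two point sets after optimal rotation and translation."""
    P = P - P.mean(axis=0)                      # center both sets
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                                 # 3x3 covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T     # optimal rotation
    return float(np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1))))

rng = np.random.default_rng(0)
backbone = rng.normal(size=(50, 3))             # fake C-alpha coordinates

# Rotate and translate a copy: same fold, different placement in space.
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
moved = backbone @ Rz.T + np.array([5.0, -3.0, 2.0])

assert kabsch_rmsd(backbone, moved) < 1e-8      # shapes match after superposition
```

Note that nothing in this comparison looks at amino-acid identities at all, which is why two proteins can score as structurally similar despite unrelated sequences.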

Zhong said, “It really demanded modern AI to formulate the three-dimensional structure very precisely to make this discovery.”

After discovering the substructure, the team then demonstrated that with it, the protein can activate two critical target genes. That throws off the delicate balance, leading to several problems and eventually the early stages of Alzheimer’s disease. In other words, PHGDH has a previously unknown role, independent of its enzymatic function, that through a novel pathway leads to spontaneous Alzheimer’s disease.

That ties back to the team’s earlier studies: the PHGDH gene produced more proteins in the brains of Alzheimer’s patients compared to the control brains, and those increased amounts of the protein in the brain triggered the imbalance. While everyone has the PHGDH gene, the difference comes down to the expression level of the gene, or how many proteins are made by it.

[…]

Given that PHGDH is such an important enzyme, there are past studies on its possible inhibitors. One small molecule, known as NCT-503, stood out to the researchers because it is not quite effective at impeding PHGDH’s enzymatic activity (the production of serine), which they did not want to change. NCT-503 is also able to penetrate the blood-brain-barrier, which is a desirable characteristic.

They turned to AI again for three-dimensional visualization and modeling. They found that NCT-503 can access that DNA-binding substructure of PHGDH, thanks to a binding pocket. With more testing, they saw that NCT-503 does indeed inhibit PHGDH’s regulatory role.

When the researchers tested NCT-503 in two mouse models of Alzheimer’s disease, they saw that it significantly alleviated Alzheimer’s progression. The treated mice demonstrated substantial improvement in their memory and anxiety tests. These tests were chosen because Alzheimer’s patients suffer from cognitive decline and increased anxiety.

The researchers do acknowledge limitations of their study. One is that there is no perfect animal model for spontaneous Alzheimer’s disease. They could test NCT-503 only in the available mouse models, which carry mutations in known disease-causing genes.

Still, the results are promising, according to Zhong.

[…]

Source: AI Helps Unravel a Cause of Alzheimer’s Disease and Identify a Therapeutic Candidate

Can the EU’s Dual Strategy of Regulation and Investment Redefine AI Leadership?

Beyond sharing a pair of vowels, AI and the EU both present significant challenges when it comes to setting the right course. This article makes the case that reducing regulation for large general-purpose AI providers under the EU’s competitiveness agenda is not a silver bullet for catching Europe up to the US and China, and would only serve to entrench European dependencies on US tech. Instead, by combining its regulatory toolkit and ambitious investment strategy, the EU is uniquely positioned to set the global standard for trustworthy AI and pursue its tech sovereignty. It is an opportunity that Europe must take.

Recent advances in AI have drastically shortened the improvement cycle from years to months, thanks to new inference-time compute techniques that enable self-prompting, chain-of-thought reasoning in models like OpenAI’s o1 and DeepSeek’s R1. However, these rapid gains also increase risks like AI-enabled cyber offenses and biological attacks. Meanwhile, the EU and France recently committed €317 billion to AI development in Europe, joining a global race with comparably large announcements from both the US and China.

Now turning to EU AI policy, the newly established AI Office and 13 independent experts are nearing the end of a nine-month multistakeholder drafting process of the Code of Practice (CoP): the voluntary technical details of the AI Act’s mandatory provisions for general-purpose AI providers. The vast majority of the rules will apply only to the largest model providers, ensuring proportionality: the protection of SMEs, start-ups, and other downstream industries. In the meantime, the EU has fully launched a competitiveness agenda, with the Commission’s recently published Competitiveness Compass and first omnibus simplification package outlining plans for widespread streamlining of reporting obligations amidst mounting pushback against this simplified narrative. Add to this the recent withdrawal of the AI Liability Directive, and it’s clear to see which way the political winds are blowing.

So why must this push for simplification be replaced by a push for trustworthy market creation in the case of general-purpose AI and the Code of Practice? I’ll make three main points: 1) Regulation is not the reason for Europe lacking Big Tech companies, 2) Sweeping deregulation creates legal uncertainty and liability risks for downstream deployers, and slows trusted adoption of new technologies and thereby growth, 3) Watering down the CoP for upstream model providers with systemic risk will almost exclusively benefit large US incumbents, entrenching dependency and preventing tech sovereignty.

[…]

The EU’s tech ecosystem had ample time to emerge in the years preceding and following the turn of the century, free of so-called “red tape,” but this did not happen and will not again through deregulation […] One reason presented by Bradford is that the European digital single market still remains fragmented, with differing languages, cultures, consumer preferences, administrative obstacles, and tax regimes preventing large tech companies from seamlessly growing within the bloc and throughout the world. Even more fragmented are the capital markets of the EU, resulting in poor access to venture capital for tech start-ups and scale-ups. Additional points include harsh, national-level bankruptcy laws that are “creditor-oriented” in the EU, compared to more forgiving “debtor-friendly” equivalents in the US, resulting in lower risk appetite for European entrepreneurs. Finally, skilled migration is significantly more streamlined in the US, with federal-level initiatives like the H-1B visa leading to the majority of Big Tech CEOs hailing from overseas.

[…]

The downplaying of regulation as Europe’s AI hindrance has been repeated by leading industry voices such as US VC firm a16z, European VC firm Merantix Capital, and French provider MistralAI. To reiterate: the EU ‘lagging behind’ on trillion-dollar tech companies and the accompanying innovation was not a result of regulation before there was regulation, and is also not a result of regulation after.

[…]

Whether for planes, cars, or drugs, early use of dangerous new technologies, without accompanying rules, saw frequent preventable accidents, reducing consumer trust and slowing market growth. Now, with robust checks and balances in place from well-resourced regulatory authorities, such markets have been able to thrive, providing value and innovation to citizens. Other sectors, like nuclear energy and, more recently, crypto, have suffered from an initial lack of regulation, causing industry corner-cutting, leading to infamous disasters (from Fukushima to the collapse of FTX) from which public trust has been difficult to win back. Regulators around the world are currently risking the same fate for AI.

This point is particularly relevant for so-called ‘downstream deployers’: companies that build applications on top of underlying models provided (usually) by Big Tech. Touted by European VC leader Robert Lacher as Europe’s “huge opportunity” in AI, downstream deployers, particularly SMEs, stand to gain from the Code of Practice, which ensures that necessary regulatory checks and balances occur upstream at the level of the model provider.

[…]

Finally, the EU’s enduring and now potentially crippling dependency on US technology companies has been directly addressed by the new Commission, best exemplified by the title of Executive Vice President Henna Virkkunen’s file: Tech Sovereignty, Security and Democracy. With the last few months’ geopolitical developments, including all-time-low transatlantic relations and an unfolding trade war, some have gone as far as warning of the possibility of US technology being used for surveillance of Europe and of the US sharing intelligence with Russia. Clearly, the urgency of tech sovereignty has drastically increased. A strong Code of Practice would return agency to the EU, ensuring that US upstream incumbents meet basic security, safety, and ethical standards whilst also easing the EU’s AI adoption problem by ensuring technology is truly trustworthy.

So, concretely, what needs to be done? Bruegel economist Mario Mariniello summed it up concisely: “On tech regulation, the European Union should be bolder.”

[…]

This article has outlined why deregulating highly capable AI models, produced by the world’s largest companies, is not a solution to Europe’s growth problem. Instead of stripping back obligations that ensure protections for European citizens, the EU must combine its ambitious AI investment plan with boldly pursuing leadership in setting global standards, accelerating trustworthy adoption and ensuring tech sovereignty. This combination will put Europe on the right path to drive this technological revolution forward for the benefit of all.

Source: Can the EU’s Dual Strategy of Regulation and Investment Redefine AI Leadership? | TechPolicy.Press

Europe’s Tech Sovereignty Demands More Than Competitiveness

BRUSSELS – As part of his confrontational stance toward Europe, US President Donald Trump could end up weaponizing critical technologies. The European Union must appreciate the true nature of this threat instead of focusing on competing with the US as an economic ally. To achieve true tech sovereignty, the EU should transcend its narrow focus on competitiveness and deregulation and adopt a far more ambitious strategy.

[…]

Europe’s growing anxiety about competitiveness is fueled by its inability to challenge US-based tech giants where it counts: in the market. As the Draghi report points out, the productivity gap between the United States and the EU largely reflects the relative weakness of Europe’s tech sector. Recent remarks by European Commission President Ursula von der Leyen and Tech Commissioner Henna Virkkunen suggest that policymakers have taken Draghi’s message to heart, making competitiveness the central focus of EU tech policy. But this singular focus is both insufficient and potentially counterproductive at a time of technological and geopolitical upheaval. While pursuing competitiveness could reduce Big Tech’s influence over Europe’s economy and democratic institutions, it could just as easily entrench it. European leaders’ current fixation on deregulation – turbocharged by the Draghi report – leaves EU policymaking increasingly vulnerable to lobbying by powerful corporate interests and risks legitimizing policies that are incompatible with fundamental European values.

As a result, the European Commission’s deregulatory measures – including its recent decision to shelve draft AI and privacy rules, and its forthcoming “simplification” of tech legislation including the GDPR – are more likely to benefit entrenched tech giants than they are to support startups and small and medium-size enterprises. Meanwhile, Europe’s hasty and uncritical push for “AI competitiveness” risks reinforcing Big Tech’s tightening grip on the AI technology stack.

It should come as no surprise that the Draghi report’s deregulatory agenda was warmly received in Silicon Valley, even by Elon Musk himself. But the ambitions of some tech leaders go far beyond cutting red tape. Musk’s use of X (formerly Twitter) and Starlink to interfere in national elections and the war in Ukraine, together with the Trump administration’s brazen attacks on EU tech regulation, show that Big Tech’s quest for power poses a serious threat to European sovereignty.

Europe’s most urgent task, then, is to defend its citizens’ rights, sovereignty, and core values from increasingly hostile American tech giants and their allies in Washington. The continent’s deep dependence on US-controlled digital infrastructure – from semiconductors and cloud computing to undersea cables – not only undermines its competitiveness by shutting out homegrown alternatives but also enables the owners of that infrastructure to exploit it for profit.

[…]

Strong enforcement of competition law and the Digital Markets Act, for example, could curb Big Tech’s influence while creating space for European startups and challengers to thrive. Similarly, implementing the Digital Services Act and the AI Act will protect citizens from harmful content and dangerous AI systems, empowering Europe to offer a genuine alternative to Silicon Valley’s surveillance-driven business models. Against this backdrop, efforts to develop homegrown European alternatives to Big Tech’s digital infrastructure have been gaining momentum. A notable example is the so-called “Eurostack” initiative, which should be viewed as a key step in defending Europe’s ability to act independently.

[…]

A “competitive” economy holds little value if it comes at the expense of security, a fair and safe digital environment, civil liberties, and democratic values. Fortunately, Europe doesn’t have to choose. By tackling its technological dependencies, protecting democratic governance, and upholding fundamental rights, it can foster the kind of competitiveness it truly needs.

Source: Europe’s Tech Sovereignty Demands More Than Competitiveness by Marietje Schaake & Max von Thun – Project Syndicate

Deregulation has led to a host of problems globally: the monopoly/duopoly problems we can’t seem to deal with; reliance on external markets and companies that whimsically change their minds; unsustainable hardware and software choices that leave devices bricked, poorly secured, and irreparable; vendor lock-in to closed-source ecosystems; damage to innovation; and privacy invasions that invite hacking attacks. As Europe we can make our own choices about our own values – we are not driven by the single motive of profit. European values are inclusive and also promote things like education and happiness.

Content Moderation or Security Theater? How Social Media Platforms Really Enforce Community Guidelines

[…] To better understand how major platforms moderate content, we studied and compared the community guidelines of Meta, TikTok, YouTube, and X.

We must note that platforms’ guidelines often evolve, so the information used in this study is based only on the latest available data at the time of publication. Moreover, the strictness and regularity of policy implementation may vary per platform.

Content Moderation

We were able to categorize 3 main methods of content moderation in major platforms’ official policies: AI-based enforcement, human or staff review, and user reporting.

[Image: Content moderation practices (AI enforcement, human review, and user reporting) across Meta, TikTok, YouTube, and X]

Notably, TikTok is the only platform that doesn’t officially employ all 3 content moderation methods. It only clearly defines the process of user reporting, although it mentions that it relies on a “combination of safety approaches.” Content may go through an automated review, especially those from accounts with previous violations, and human moderation when necessary.

Human or staff review and AI enforcement are observed in the other 3 platforms’ policies. In most cases, the platforms claim to employ the methods hand-in-hand. YouTube and X (formerly Twitter) describe using a combination of machine learning and human reviewers. Meta has a unique Oversight Board that manages more complicated cases.
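
A hypothetical sketch of how these three methods could fit together in one pipeline; the thresholds, labels, and function names below are illustrative assumptions, not any platform's documented implementation:

```python
# Toy moderation pipeline combining automated scoring, human review,
# and user reporting. All thresholds and terms are invented.

def ai_score(post: str) -> float:
    """Stand-in for an ML classifier returning a violation probability."""
    flagged_terms = {"spam", "scam"}
    words = post.lower().split()
    return len([w for w in words if w in flagged_terms]) / max(len(words), 1)

def moderate(post: str, user_reports: int = 0) -> str:
    score = ai_score(post)
    if score >= 0.5:                  # high-confidence violation: automated removal
        return "removed"
    if score >= 0.2 or user_reports >= 3:
        return "human_review"         # uncertain cases and reported posts escalate
    return "allowed"

assert moderate("scam spam") == "removed"
assert moderate("hello world") == "allowed"
assert moderate("hello world", user_reports=5) == "human_review"
```

The design point the platforms describe is the escalation path: automation handles clear-cut cases at scale, while borderline scores and user reports route content to human reviewers.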

Criteria for Banning Accounts

Criterion                  Meta  TikTok  YouTube  X
Severe single violation    Yes   Yes     Yes      Yes
Repeated violations        Yes   Yes     Yes      Yes
Circumventing enforcement  No    Yes     No       Yes

All platform policies include the implementation of account bans for repeat or single “severe” violations. Of the 4 platforms, TikTok and X are the only ones to include circumventing moderation enforcement as additional grounds for account banning.

Content Restrictions

         Age Restrictions         Adult Content            Gore                     Graphic Violence
Meta     10-12 (supervised), 13+  Allowed with conditions  Allowed with conditions  Allowed with conditions
TikTok   13+                      Prohibited               Allowed with conditions  Prohibited
YouTube  Varies                   Prohibited               Prohibited               Prohibited
X        18+                      Allowed (with labels)    Allowed with conditions  Prohibited

Content depicting graphic violence is the most widely prohibited in platforms’ policies, with only Meta allowing it with conditions (the content must be “newsworthy” or “professional”).

Adult content is also heavily moderated per the official community guidelines. X allows it provided adequate labels are applied, while the other platforms restrict any content with nudity or sexual activity that isn’t for educational purposes.

YouTube is the only one to impose a blanket prohibition on gory or distressing materials. The other platforms allow such content but might add warnings for users.

Policy strictness across platforms, ranked from least (1) to most (5) strict across 6 categories

All platforms have a zero-tolerance policy for content relating to child exploitation. Other types of potentially unlawful content — or those that threaten people’s lives or safety — are also restricted with varying levels of strictness. Meta allows discussions of crime for awareness or news but prohibits advocating for or coordinating harm.

Other official metrics for restriction include the following:

Platforms' official community guidelines regarding free speech vs. fact-checking, news and education, and privacy and security

What Gets Censored the Most?

Overall, major platforms’ community and safety guidelines are generally strict and clear regarding what’s allowed or not. However, what content moderation looks like in practice may be very different.

We looked at censorship patterns for videos on major social media platforms, including Instagram Reels, TikTok, Facebook Reels, YouTube Shorts, and X.

The dataset considered a wide variety of videos, ranging from entertainment and comedy to news, opinion, and true crime. Across the board, the types of content we observed to be most commonly censored include:

  • Profanity: Curse words were censored via audio muting, bleeping, or subtitle redaction.
  • Explicit terms: Words pertaining to sexual activity or self-harm were omitted or replaced with alternative spellings.
  • Violence and conflict: References to weapons, genocide, geopolitical conflicts, or historical violence resulted in muted audio, altered captions, or warning notices, especially on TikTok and Instagram.
  • Sexual abuse: Content related to human trafficking and sexual abuse had significant censorship, often requiring users to alter spellings (e.g., “s3x abuse” or “trffcked”).
  • Racial slurs: Some instances of censored racial slurs were found in rap music videos on TikTok and X.

Pie charts showing the types of content censored and censorship methods observed across platforms

Instagram seems to heavily censor explicit language, weapons, and sexual content, mostly through muting and subtitle redaction. Content depicting war, conflict, graphic deaths and injuries, or other potentially distressing materials often requires users to click through a “graphic content” warning before being able to view the image or video.

Facebook primarily censors profanity and explicit terms through audio bleeping and subtitle removal. However, some news-related posts are able to retain full details.

On the other hand, TikTok uses audio censorship and alters captions. As such, many creators regularly use coded language when discussing sensitive topics. YouTube also employs similar filters, muting audio or blurring visuals extensively to hide profanity and explicit words or graphics. However, it still allows offensive words in some contexts (educational, scientific, etc.).

X combines a mix of redactions, visual blurring, and muted audio. Profanity and graphic violence are sometimes left uncensored, but sensitive content will typically get flagged or blurred, especially once reported by users.

Censorship Method Platforms Using It Description/Example
Muted or Bleeped Audio Instagram, TikTok, Facebook, YouTube, X Profanity, explicit terms, and violence-related speech altered or omitted
Redacted or Censored Subtitles Instagram, TikTok, Facebook, X Sensitive words (e.g., words like “n*****,” “fu*k,” and “traff*cked”) altered or omitted
Blurred Video or Images Instagram, Facebook, X Sensitive content (e.g., death and graphic injuries) blurred and labeled with a warning

News and Information Accounts

Our study confirmed that news outlets and credible informational accounts are sometimes subject to different moderation standards.

Posts on Instagram, YouTube, and X (from accounts like CNN or BBC) discussing war or political violence were only blurred and presented with an initial viewing warning, but they were not muted or altered in any way. Meanwhile, user-generated content discussing similar topics faced audio censorship.

On the other hand, comedic and entertainment posts still experienced strict regulations on profanity, even on news outlets. This suggests that humor and artistic contexts likely don’t exempt content from moderation, regardless of the type of account or creator.

The Coded Language Workaround

A widespread workaround for censorship is the use of coded language to bypass automatic moderation. Below are some of the most common ones we observed:

  • “Fuck” → “fk,” “f@ck,” “fkin,” or a string of 4 special characters
  • “Ass” → “a$$,” “a**,” or “ahh”
  • “Gun” → “pew pew” or a hand gesture in lieu of saying the word
  • “Genocide” → “g*nocide”
  • “Sex” → “s3x,” “seggs,” or “s3ggs”
  • “Trafficking” → “tr@fficking,” or “trffcked”
  • “Kill” → “k-word”
  • “Dead” → “unalive”
  • “Suicide” → “s-word,” or “s**cide”
  • “Porn” → “p0rn,” “corn,” or corn emoji
  • “Lesbian” → “le$bian” or “le dollar bean”
  • “Rape” → “r@pe,” “grape,” or grape emoji

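To illustrate why these substitutions work so reliably, here is a minimal sketch of the kind of keyword matching that automated filters are believed to rely on. The blocklist and function are hypothetical, written only to show the mechanism; real platform filters are far larger and not public.

```python
import re

# Hypothetical blocklist for illustration; real filters are much larger.
BLOCKLIST = {"sex", "gun", "kill", "suicide"}

def flagged_words(text: str) -> set[str]:
    """Return any blocklisted words found via simple word matching."""
    words = re.findall(r"[a-z]+", text.lower())
    return BLOCKLIST & set(words)

print(flagged_words("a video about gun violence"))     # → {'gun'}
print(flagged_words("a video about pew pew violence")) # → set()
print(flagged_words("let's talk about s3x education")) # → set()
```

Because “s3x” tokenizes into “s” and “x”, and “pew pew” never mentions the blocked word at all, both sail past a filter like this, which is consistent with the coded-language patterns observed above.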
This is the paradox of modern content moderation: how effective are “strict” guidelines when certain types of accounts are occasionally exempt from them and other users can exploit simple loopholes?

Since coded words are widely and easily understood, this suggests that AI-based censorship mainly filters out direct violations rather than stopping or removing sensitive discussions altogether.

Is Social Media Moderation Just Security Theater?

Overall, it’s clear that platform censorship for content moderation is enforced inconsistently.

Given that our researchers are also subject to the algorithmic biases of the platforms tested, and that we’re unlikely to be able to interact with shadowbanned accounts, we can’t fully quantify or qualify the extent of the restrictions some users face for posting potentially inappropriate content.

However, we know that many creators are able to circumvent or avoid automated moderation. Certain types of accounts receive preferential treatment in terms of restrictions. Moreover, with social media apps’ heavy reliance on AI moderation, users are able to evade restrictions with the slightest modifications or substitutions.

Are Platforms Capable of Implementing Strict Blanket Restrictions on “Inappropriate” Content?

Given how heavily most people rely on social media to engage with the world, trying to restrict sensitive conversations may be impractical or even ineffective. This is particularly true when context is ignored and restrictions focus solely on keywords, as is often the case with automated moderation.

Also, one might ponder whether content restrictions are primarily in place for liability protection instead of user safety — especially if platforms know about the limitations of AI-based moderation but continue to use it as their primary means of enforcing community guidelines.

Are Social Media Platforms Deliberately Performing Selective Moderation?

At the beginning of 2025, Meta made waves after it announced that it would be removing fact-checkers. Many suggested that this change was influenced by the seemingly new goodwill between its founder and CEO, Mark Zuckerberg, and United States President Donald Trump.

Double standards are also apparent on other platforms whose owners have clear political ties. Elon Musk, a prominent supporter and backer of Trump, has reportedly spread misinformation about government spending, posting or reposting false claims on X, the platform he owns.

This is despite the platform’s guidelines clearly prohibiting “media that may result in widespread confusion on public issues, impact public safety, or cause serious harm.”

Given the seemingly one-sided implementation of policies on different social media sites, we believe individuals and organizations must practice careful scrutiny when consuming media or information on these platforms.

Community guidelines aren’t fail-safes for ensuring safe, uplifting, and constructive spaces online. We believe that what AI algorithms or fact-checkers consider safe shouldn’t be seen as the standard or universal truth. That is, not all restricted posts are automatically “harmful,” the same way not all retained posts are automatically true or reliable.

Ultimately, the goal of this study is to help digital marketers, social media professionals, journalists, and the general public learn more about the evolving mechanics of online expression. With the insights gathered from this research, we hope to spark conversation about the effectiveness and fairness of content moderation in the digital space.

[…]

Source: Content Moderation or Security Theater? How Social Media Platforms Really Enforce Community Guidelines

1 Million French Boulanger Customers Exposed Online for Free

In a recent discovery, SafetyDetectives’ Cybersecurity Team stumbled upon a clear web forum post in which a threat actor publicized a database allegedly belonging to Boulanger Electroménager & Multimédia, purportedly exposing 5 million of its customers.

What is Boulanger Electroménager & Multimédia?

Boulanger Electroménager & Multimédia is a French company that specializes in the sale of household appliances and multimedia products.

Founded in 1954, according to their website, Boulanger has physical stores and delivers its products to clients across France. The company also offers an app, which has over 1 million downloads on the Google Play Store and Apple’s App Store.

Where Was The Data Found?

The data was found in a forum post available on the clear web. This well-known forum operates message boards dedicated to database downloads, leaks, cracks, and more.

What Was Leaked?

The author of the post included two links to the unparsed and clean datasets, which purportedly belong to Boulanger. They claim the unparsed dataset consists of a 16GB .JSON file with 27,561,591 records, whereas the clean dataset comprises a 500MB .CSV file with 5 million records.

Links to both datasets were hidden, set to be revealed after giving a like or leaving a comment on the post. As a result, anyone with a forum account willing to simply interact with the post could unlock the data for free.

Our Cybersecurity Team reviewed part of the datasets to assess their authenticity, and we can confirm that the data appears to be legitimate. After running a comparative analysis, these datasets appear to correspond to the data purportedly stolen in the 2024 cyber incident.

Back in September 2024, Boulanger was one of the targets of a ransomware attack that also affected other retailers, such as Truffaut and Cultura. A threat actor with the nickname “horrormar44” claimed responsibility for the breach.

At the time, the data was offered on a different well-known clear web forum — which is currently offline — at a price of €2,000. Although there allegedly were some potential buyers, it is unclear if the sale was actually finalized. In any case, it seems the data has resurfaced now as free to download.

While reviewing the data, we found that the clean dataset contains just over 1 million rows, one customer per row, including some duplicates. While that’s still a considerable number of customers, it’s far smaller than the 5 million claimed by the author of the post.
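The gap between row count and actual affected customers is easy to reproduce with a quick script. A minimal sketch, assuming a CSV with an email column (the column name and deduplication key are our own illustrative choices, not details from the leak):

```python
import csv

def count_rows_and_unique(path: str, key: str = "email") -> tuple[int, int]:
    """Count total data rows and distinct values of `key` in a CSV file."""
    seen = set()
    total = 0
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            total += 1
            # Normalize so "A@x.fr" and "a@x.fr" count as one customer.
            seen.add(row[key].strip().lower())
    return total, len(seen)
```

Deduplicating on a stable identifier like an email address gives a far better estimate of affected individuals than the headline row count, which is how inflated figures like “5 million” shrink on inspection.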

The sensitive information allegedly belonging to Boulanger’s customers included:

  • Name
  • Surname
  • Full physical address
  • Email address
  • Phone number

[…]

Source: 27 Million Records from French Boulanger’s Customers Allegedly Exposed Online

Google turns early Nest Thermostats into dumb thermostats

Google has just announced that it’s ending software updates for the first-generation Nest Learning Thermostat, released in 2011, and the second-gen model that came a year later. This decision also affects the European Nest Learning Thermostat from 2014. “You will no longer be able to control them remotely from your phone or with Google Assistant, but can still adjust the temperature and modify schedules directly on the thermostat,” the company wrote in a Friday blog post.

[…]

Google is flatly stating that it has no plans to release additional Nest thermostats in Europe. “Heating systems in Europe are unique and have a variety of hardware and software requirements that make it challenging to build for the diverse set of homes,” the company said. “The Nest Learning Thermostat (3rd gen, 2015) and Nest Thermostat E (2018) will continue to be sold in Europe while current supplies last.”

[…]

Source: Google is killing software support for early Nest Thermostats | The Verge

Yes, so in about a year they will be dumb thermostats too. I don’t think I would buy one of those then.

Microsoft mystery folder fix needs a fix of its own with simple POC

Turns out Microsoft’s latest patch job might need a patch of its own, again. This time, the culprit is a mysterious inetpub folder quietly deployed by Redmond, now hijacked by a security researcher to break Windows updates.

The folder, typically c:\inetpub, reappeared on Windows systems in April as part of Microsoft’s mitigation for CVE-2025-21204, an exploitable elevation-of-privilege flaw within Windows Process Activation. Rather than patching code directly, Redmond simply pre-created the folder to block a symlink attack path. For many administrators, the reappearance of this old IIS haunt raised eyebrows, especially since the mitigation did little beyond ensuring the folder existed.

For at least one security researcher, in this case Kevin Beaumont, the fix also presented an opportunity to hunt for more vulnerabilities. After poking around, he discovered that the workaround introduced a new flaw of its own, triggered using the mklink command with the /j parameter.

It’s a simple enough function. According to Microsoft’s documentation, mklink “creates a directory or file symbolic or hard link.” And with the /j flag, it creates a directory junction – a type of filesystem redirect.

Beaumont demonstrated this by running: “mklink /j c:\inetpub c:\windows\system32\notepad.exe.” This turned the c:\inetpub folder – precreated in Microsoft’s April 2025 update to block symlink abuse – into a redirect to a system executable. When Windows Update tried to interact with the folder, it hit the wrong target, errored out, and rolled everything back.

“So you just go without security updates,” he noted.

The kicker? No admin rights are required. On many default-configured systems, even standard users can run the same command, effectively blocking Windows updates without ever escalating privileges.
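Until Microsoft ships a proper fix, administrators could at least detect the abuse: a junction created with mklink /j carries the filesystem reparse-point attribute, unlike the real pre-created folder. A minimal detection sketch in Python (our own illustrative check, not a Microsoft-provided remedy; it conservatively returns False on non-Windows systems):

```python
import os
import stat

def is_probable_junction(path: str) -> bool:
    """True if `path` is a reparse point (junction/symlink) on Windows.

    st_file_attributes only exists on Windows, so on other platforms
    this conservatively returns False.
    """
    try:
        st = os.lstat(path)
    except OSError:
        return False  # path doesn't exist or isn't accessible
    attrs = getattr(st, "st_file_attributes", 0)
    return bool(attrs & stat.FILE_ATTRIBUTE_REPARSE_POINT)

# e.g. is_probable_junction(r"c:\inetpub") on a machine to audit
```

A scheduled check like this would flag a hijacked c:\inetpub before the next Patch Tuesday rollback, though it obviously doesn’t stop a standard user from recreating the junction.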

[…]

Source: Microsoft mystery folder fix might need a fix of its own • The Register

Employee monitoring app exposes 21M work screens to internet

A surveillance tool meant to keep tabs on employees is leaking millions of real-time screenshots onto the open web.

Your boss watching your screen isn’t the end of the story. Everyone else might be watching, too. Researchers at Cybernews have uncovered a major privacy breach involving WorkComposer, a workplace surveillance app used by over 200,000 people across countless companies.

The app, designed to track productivity by logging activity and snapping regular screenshots of employees’ screens, left over 21 million images exposed in an unsecured Amazon S3 bucket, broadcasting how workers go about their day frame by frame.

[…]

WorkComposer is one of many time-tracking tools that have crept into modern work culture. Marketed as a way to keep teams “accountable,” the software logs keystrokes, tracks how long you spend on each app, and snaps desktop screenshots every few minutes.

The leak shows just how dangerous this setup becomes when basic security hygiene is ignored. A leak of this magnitude turns everyday work activity into a goldmine for cybercriminals.

[…]

The leak’s real-time nature only amplifies the danger, as threat actors could monitor unfolding business operations as they happen, giving them access to otherwise locked-down environments.

[…]

Source: Employee monitoring app exposes 21M work screens | Cybernews

Can you imagine the time lost by employers going over all this data?

Microsoft Windows’ new built-in spying tool, Microsoft Recall, is a really great idea too. Not.