Elons Falcon 9 dumps huge amounts of Lithium over the EU during burn up

The SpaceX Falcon 9 rocket that burned up over Europe last year left a massive lithium plume in its wake, say a group of scientists. They warn the disaster is likely a sign of things to come as Earth’s atmosphere continues to become a heavily trafficked superhighway to space.

In a paper published Thursday, an international group of scientists reports what they say is the first measurement of upper-atmosphere pollution resulting from the re-entry of space debris, as well as the first time ground-based light detection and ranging (lidar) has been shown to be able to detect space debris ablation.

The measurements stem from a SpaceX Falcon 9 upper stage that sprung an oxygen leak about a year ago, sending it into an uncontrolled re-entry. Then it broke up and rained debris down on Poland. The rocket not only littered farm fields, but also injected lithium into the Mesosphere and Lower Thermosphere (MLT), where ground-based sensors detected a tenfold increase at an altitude of 96 km about 20 hours after the rocket re-entered the atmosphere, according to the paper.

Lithium was selected for the study because of its considerable presence in spacecraft, both in lithium-ion batteries and lithium-aluminum alloy used in the construction of spacecraft. A single Falcon 9 upper stage, like the one that broke up over Poland and released the lithium plume, is estimated to contain 30 kg of lithium just in the alloy used in tank walls. 

By contrast, around 80 grams of lithium enter the atmosphere per day from cosmic dust particles, the researchers noted. 

“This finding supports growing concerns that space traffic may pollute the upper atmosphere in ways not yet fully understood,” the paper notes, adding that the continued re-entry of spacecraft and satellites is of particular concern given how the composition of spacecraft is different from natural meteoroids.

“Satellites and rocket stages introduce engineered materials such as aluminium alloys, composite structures, and rare earth elements from onboard electronics, substances rarely found in natural extraterrestrial matter,” the paper explained. “The consequences of increasing pollution from re-entering space debris on radiative transfer, ozone chemistry, and aerosol microphysics remain largely unknown.”

The effect on Earth’s atmosphere posed by spacecraft and satellite re-entry is one that’s been a growing concern for astrophysicists like Harvard sky-watcher Jonathan McDowell, who has echoed similar concerns to The Register as the European scientists raised in their paper.

[…]

Source: Euro boffins track lithium plume from Falcon 9 burn-up • The Register

Discord’s First Age-Verification ‘Experiment’ Alarms Hackers: Supplier “Persona” not only leaky, but also uses IDs for various purposes not age related

Last week, Discord users reported seeing prompts to submit personal information to Persona, a third-party age-verification service. As Discord commits to universal age-verification, the new measures have come under intense scrutiny after previous security failures. Now a trio of hacktivists say they’ve successfully breached Persona, getting a closer look at how the company uses submitted biometrics. They say their findings raise alarms beyond the possibility of leaks.

According to The Rage, Persona’s front-end security left a lot to be desired. Worse, however, were investigative findings that suggested Persona’s surveillance of the users whose data it collected was way more sprawling than originally believed.

“It was initially meant to be a passive recon investigation,” writes vmfunc, a cybersecurity researcher and one of the hackers, “that quickly turned into a rabbit hole deep dive into how commercial AI and federal government operations work together to violate our privacy every waking second.”

On top of finding it surprisingly easy to access data gathered by Persona, the research showed that faces and biometrics were not just being scanned for age verification, but flagged for suspicious behavior and bounced off watchlists as well. To some, particularly those who don’t worry about their face being deemed “suspicious,” this may not sound like an Orwellian level of intrusion, until you remember Persona’s full network.

Persona received $150 million in 2021 from the Founders Fund, a long-running tech investor group headed by Peter Thiel. Thiel’s main business, on top of palling around in Jeffery Epstein’s emails and waiting for the antichrist, is Palantir, an intentionally ominously-named data brokering service that is currently peddling user information to support ICE raids. The findings of vmfunc and co’s research doesn’t directly tether Persona and Discord’s operations to Palantir or Thiel, but it wouldn’t be conspiratorial to point out that all this data seems to be funnelling along similar slopes.

Trust but verify

Persona has confirmed the breach, CEO Rick Song corresponding and even thanking the hackers for flagging the security exploit. This has not, however, tempered concerns among those hacktivists about how the user information is ultimately being used.

“Transparently, we are actively working on a couple of potential contracts which would be publicly visible if we move forward,” writes Christie Kim, chief operating officer at Persona, in an email regarding the security breach and speculation around Discord. “However, these engagements are strictly for workforce account security of government employees and do not include ICE or any agency within the Department of Homeland Security.”

After the alarm was initially raised about Persona, Discord claimed its work with the Thiel-backed firm was only temporary, and that it didn’t have new contacts with it moving forward. It also promised user info was being wiped from servers within seven days of being gathered.

Source: Discord’s First Age-Verification ‘Experiment’ Alarms Hackers

Man accidentally gains control of 7,000 DJI robot vacuums – with live camera feeds, microphone audio, maps, and status data

A software engineer’s earnest effort to steer his new DJI robot vacuum with a video game controller inadvertently granted him a sneak peak into thousands of people’s homes.

While building his own remote-control app, Sammy Azdoufal reportedly used an AI coding assistant to help reverse-engineer how the robot communicated with DJI’s remote cloud servers. But he soon discovered that the same credentials that allowed him to see and control his own device also provided access to live camera feeds, microphone audio, maps, and status data from nearly 7,000 other vacuums across 24 countries. The backend security bug effectively exposed an army of internet-connected robots that, in the wrong hands, could have turned into surveillance tools, all without their owners ever knowing.

robot vaccum
The DJI Romo. Image: DJI

Luckily, Azdoufal chose not to exploit that. Instead, he shared his findings with The Verge, which quickly contacted DJI to report the flaw. While DJI tells Popular Science the issue has been “resolved,” the dramatic episode underscores warnings from cybersecurity experts who have long-warned that internet-connected robots and other smart home devices present attractive targets for hackers.

[…]

Source: Man accidentally gains control of 7,000 robot vacuums | Popular Science

The Stop Killing Games campaign will set up NGOs in the EU and US

The Stop Killing Games campaign is evolving into more than just a movement. In a YouTube video, the campaign’s creator, Ross Scott, explained that organizers are planning to establish two non-governmental organizations, one for the European Union and another for the US. According to Scott, these NGOs would allow for “long-term counter lobbying” when publishers end support for certain video games.

“Let me start off by saying I think we’re going to win this, namely the problem of publishers destroying video games that you’ve already paid for,” Scott said in the video. According to Scott, the NGOs will work on getting the original Stop Killing Games petition codified into EU law, while also pursuing more watchdog actions, like setting up a system to report publishers for revoking access to purchased video games.

The Stop Killing Games campaign started as a reaction to Ubisoft’s delisting of The Crew from players’ libraries. The controversial decision stirred up concerns about how publishers have the ultimate say on delisting video games. After crossing a million signatures last year, the movement’s leadership has been busy exploring the next steps.

According to Scott, the campaign leadership will meet with the European Commission soon, but is also working on a 500-page legal paper that reveals some of the industry’s current controversial practices. In the meantime, the ongoing efforts have led to a change of heart from Ubisoft since the publisher updated The Crew 2 with an offline mode.

Source: The Stop Killing Games campaign will set up NGOs in the EU and US

IDMerit data breach: 1 billion records of personal data exposed in ID verification leak – which no-one except everyone saw coming.

Because IDMerit is an AI-powered KYC (Know Your Customer) provider, the data it collects is incredibly sensitive. The unsecured 1-terabyte database didn’t just leak passwords—it leaked the core personal identifiers used for your financial and digital life. The following structured data was left open for anyone to download:

  • Full names
  • Addresses
  • Post codes
  • Dates of birth
  • National IDs
  • Phone numbers
  • Genders
  • Email addresses
  • Telco metadata
  • Breach status and social profile annotations

The last data point – breach status and social profile annotations – could refer to a database identifier indicating whether the data originated from a data breach or a leaked database. However, at this point, the true meaning of the data point is unclear. The team noted that this specific data point was present only in some regions.

“At this scale, downstream risks include account takeovers, targeted phishing, credit fraud, SIM swaps, and long-tail privacy harms. Industry-wide, the case underlines how third-party identity vendors have become critical infrastructure and can become single points of catastrophic failure,” our team explained.

Who is IDMerit and How Did This Happen?

Our team believes the exposed database belongs to IDMerit, an AI-powered digital identity verification solutions provider. The company serves the fintech and financial services sectors, helping businesses with real-time verification tools. KYC (Know Your Customer) practices are a global norm for users to verify their identities when setting up various accounts.

Our researchers noticed the exposed instance on November 11th, 2025 and immediately contacted the company, which promptly secured the database. While there is no current evidence of malicious misuse, automated crawlers set up by threat actors constantly prowl the web for exposed instances, downloading them almost instantly once they appear.

Global data leak spans multiple countries

What’s most striking about the IDMerit data leak is its scale and global geography, with three billion records spanning over 20 countries. Several databases appeared to contain overlapping slices for the same country. However, our team believes most of the records were unique.

The country with the most exposed records was the United States, having over 203 million records leaked. The US was followed by Mexico (124M) and the Philippines (72M). Behind the first three, we see a trio of European nations: Germany (61M), Italy (53M), and France (53M).

[…]

Source: IDMerit data breach: 1 billion records of personal data exposed in KYC data leak | Cybernews

scary stories which predicted this a long long time coming:

https://www.linkielist.com/?s=age+verification&submit=Search

Country that censors: criticism of prez by lawfare; books; reporters in the white house; etc Is Working on a Site to Help Europeans Bypass Content Bans on Hate Speech

The U.S. State Department is reportedly working on an online portal that would allow people in Europe and other regions to access content banned by their governments. The move comes at a time when conservative figures like Elon Musk and J.D. Vance have railed against European attempts to clamp down on hate speech, terrorist propaganda, and revenge porn.

Reuters reported Wednesday, citing unnamed sources, that the initiative is intended to fight censorship and could include a virtual private network (VPN) feature.

The portal would reportedly be hosted at Freedom.gov. The site currently displays a landing page featuring a small animation of Paul Revere on horseback above the words “Freedom is Coming.” Smaller text below reads, “Information is power. Reclaim your human right to free expression. Get Ready.”

[…]

Reuters reported that the portal was expected to launch at the conference, but was delayed.

“We don’t comment on draft laws, and that’s what it is,” European Commission Spokesperson Thomas Regnier said when asked about the portal during a press briefing today. “Let me say that the Commission does not block access to websites. It’s up to national authorities to do this kind of thing. If a website breaches EU law or international law, talking about sites which promote hate speech, for example, or have terrorist content, obviously that does not belong in Europe. That’s why we have a regulation on digital services, the DSA, which protects freedom of expression.”

[…]

Ironically, The Guardian reported today that DOGE cuts to the State Department and U.S. Agency for Global Media’s Internet Freedom program have effectively gutted the program.

The initiative funded grassroots tools to help people bypass government internet controls worldwide. It distributed over $500 million over the past decade but issued no funding in 2025, according to The Guardian.

Source: The US Is Working on a Site to Help Europeans Bypass Content Bans on Hate Speech: Report

MS demostrates Laser writing in glass for dense, fast, efficient 10k+ year archival data storage

Long-term preservation of digital information is vital for safeguarding the knowledge of humanity for future generations. Existing archival storage solutions, such as magnetic tapes and hard disk drives, suffer from limited media lifespans that render them unsuitable for long-term data retention1,2,3. Optical storage approaches, particularly laser writing in robust media such as glass, have emerged as promising alternatives with the potential for increased longevity. Previous work4,5,6,7,8,9,10,11,12,13,14,15,16 has predominantly optimized individual aspects such as data density but has not demonstrated an end-to-end system, including writing, storing and retrieving information. Here we report an optical archival storage technology based on femtosecond laser direct writing in glass that addresses the practical demands of archival storage, which we call Silica. We achieve a data density of 1.59 Gbit mm−3 in 301 layers for a capacity of 4.8 TB in a 120 mm square, 2 mm thick piece of glass. The demonstrated write regimes enable a write throughput of 25.6 Mbit s−1 per beam, limited by the laser repetition rate, with an energy efficiency of 10.1 nJ per bit. Moreover, we extend the storage ability to borosilicate glass, offering a lower-cost medium and reduced writing and reading complexity. Accelerated ageing tests on written voxels in borosilicate suggest data lifetimes exceeding 10,000 years.

[…]

Source: Laser writing in glass for dense, fast and efficient archival data storage | Nature

Copilot summarises emails it has been specifically told not to read

Microsoft has some sort of apology (at the bottom) saying that copilot permissions did not extend beyond the user permissions, but that merrily skips along the fact that copilot permissions are not equal to user permissions: this is a governance issue: data ingested by copilot is used as training data. MS cannot guarantee that this will not be moved to a US server, where the data can be (and is!) read by the US government and given to competitors.

Microsoft 365 Copilot Chat has been summarizing emails labeled “confidential” even when data loss prevention policies were configured to prevent it.

Though there are data sensitivity labels and data loss prevention policies in place for email, Copilot has been ignoring those and talking about secret stuff in the Copilot Chat tab. It’s just this sort of scenario that has led 72 percent of S&P 500 companies to cite AI as a material risk in regulatory filings.

Redmond, earlier this month, acknowledged the problem in a notice to Office admins that’s tracked as CW1226324, as reposted by the UK’s National Health Service support portal. Customers are said to have reported the problem on January 21, 2026.

“Users’ email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat,” the notice says. “The Microsoft 365 Copilot ‘work tab’ Chat is summarizing email messages even though these email messages have a sensitivity label applied and a DLP policy is configured.”

Microsoft explains that sensitivity labels can be applied manually or automatically to files as a way to comply with organizational information security policies. These labels may function differently in different applications, the company says.

The software giant’s documentation makes clear that these labels do not function in a consistent way.

“Although content with the configured sensitivity label will be excluded from Microsoft 365 Copilot in the named Office apps, the content remains available to Microsoft 365 Copilot for other scenarios,” the documentation explains. “For example, in Teams, and in Microsoft 365 Copilot Chat.”

DLP, implemented through applications like Microsoft Purview, is supposed to provide policy support to prevent data loss.

“DLP monitors and protects against oversharing in enterprise apps and on devices,” Microsoft explains. “It targets Microsoft 365 locations, like Exchange and SharePoint, and locations you add, like on-premises file shares, endpoint devices, and non-Microsoft cloud apps.”

In theory, DLP policies should be able to affect Microsoft 365 Copilot and Copilot Chat. But that hasn’t been happening in this instance.

The root cause is said to be “a code issue [that] is allowing items in the sent items and draft folders to be picked up by Copilot even though confidential labels are set in place.”

In a statement provided to The Register after this story was filed, a Microsoft spokesperson said, “We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labeled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop. This did not provide anyone access to information they weren’t already authorized to see. While our access controls and data protection policies remained intact, this behavior did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access. A configuration update has been deployed worldwide for enterprise customers.” ®

Source: Copilot Chat bug bypasses DLP on ‘Confidential’ email • The Register

Survey of over 12,000 EU firms shows AI adoption increases labour productivity levels by 4% on average, with no evidence of reduced employment in the short run for medium + large firms

Artificial intelligence promises to reshape economies worldwide, but firm-level evidence on its effects in Europe remains scarce. This column uses survey data to examine how AI adoption affects productivity and employment across more than 12,000 European firms. The authors find that AI adoption increases labour productivity levels by 4% on average in the EU, with no evidence of reduced employment in the short run. The productivity benefits, however, are unevenly distributed. Medium and large firms, as well as firms that have the capacity to integrate AI through investments in intangible assets and human capital, experience substantially stronger productivity gains.

[…]

we find that on average, AI adoption levels are similar in the EU and the US. Notably, important heterogeneity emerges beneath the surface. Financially developed EU countries – such as Sweden and the Netherlands – match US adoption rates, with around 36% of firms using big data analytics and AI in 2024. In contrast, firms in less financially developed EU economies, such as Romania and Bulgaria, lag substantially behind, with adoption rates around 28% in 2024. Figure 1 illustrates this divide, showing how the gap has persisted and even widened in recent years.

Adoption also varies dramatically by firm size. Among large firms (more than 250 employees), 45% have deployed AI, compared with only 24% of small firms (10 to 49 employees). This echoes classic patterns in technology diffusion (Comin and Hobijn 2010): larger firms possess the resources, technical expertise, and economies of scale needed to absorb integration costs. AI-adopting firms are also systematically different – they invest more, are more innovative, and face tighter constraints in finding skilled workers. These patterns suggest that simply observing which firms adopt AI and comparing their performance could yield misleading results, as adoption itself is endogenous to firm characteristics.

Isolating AI’s causal effect

To credibly identify the causal effect of AI on productivity, we develop a novel instrumental variable strategy, inspired by Rajan and Zingales’ (1998) seminal work on financial dependence and growth. Their key insight was that industry characteristics measured in one economy – where they are arguably less affected by local distortions – can serve as an exogenous source of variation when applied to other countries.

We extend this logic to the firm level. For each EU firm in our sample, we identify comparable US firms – matched on sector, size, investment intensity, innovation activity, financing structure and management practices. We then assign the AI adoption rate of these matched US firms as a proxy for the EU firm’s exogenous exposure to AI. Because US firms operate under different institutional, regulatory and policy environments, their adoption patterns capture technological drivers that are plausibly independent of EU-specific factors. Rigorous propensity-score balancing tests confirm that our matched US and EU firms are virtually identical across key observable characteristics, validating the identification strategy. Our analysis draws on survey data from EIBIS combined with balance sheet data from Moody’s Orbis.

Productivity gains without job losses

Our results reveal three key findings. First, AI adoption causally increases labour productivity levels by 4% on average in the EU. This effect is statistically robust and economically meaningful

[…]

Second, and crucially, we find no evidence that AI reduces employment in the short run. While naïve comparisons suggest AI-adopting firms employ more workers, this relationship disappears once we account for selection effects through our instrumental variable approach. The absence of negative employment effects, combined with significant productivity gains, points to a specific mechanism: capital deepening. AI augments worker output – enabling employees to complete tasks faster and make better decisions – without displacing labour

[…]

Third, AI’s productivity benefits are far from evenly distributed. Breaking down our results by firm size reveals that medium and large companies experience substantially stronger productivity gains than their smaller counterparts (see Figure 2). This differential effect reflects the role of scale in absorbing AI integration costs and accessing complementary assets – data infrastructure, technical talent, and organisational capacity to redesign workflows. The finding raises concerns about widening productivity gaps between firms and regions, particularly given Europe’s industrial structure, which is dominated by small and medium-sized enterprises.

[…]

Source: How AI is affecting productivity and jobs in Europe | CEPR

Leaked Email Suggests Ring Plans To Expand ‘Search Party’ Surveillance Beyond Dogs, surprising? Not really.

Ring’s AI-powered “Search Party” feature, which links neighborhood cameras into a networked surveillance system to find lost dogs, was never intended to stop at pets, according to an internal email from founder Jamie Siminoff obtained by 404 Media.

Siminoff told employees in early October, shortly after the feature launched, that Search Party was introduced “first for finding dogs” and that the technology would eventually help “zero out crime in neighborhoods.” The on-by-default feature faced intense backlash after Ring promoted it during a Super Bowl ad. Ring has since also rolled out “Familiar Faces,” a facial recognition tool that identifies friends and family on a user’s camera, and “Fire Watch,” an AI-based fire alert system.

A Ring spokesperson told the publication Search Party does not process human biometrics or track people.

Source: Leaked Email Suggests Ring Plans To Expand ‘Search Party’ Surveillance Beyond Dogs | Slashdot

A few weeks of X’s algorithm can make you more right-wing—and it doesn’t wear off quickly

A new study published in Nature has found that X’s algorithm—the hidden system or “recipe” that governs which posts appear in your feed and in which order—shifts users’ political opinions in a more conservative direction.

Led by Germain Gauthier from Bocconi University in Italy, it is a rare, real-world randomized experimental study on a major social media platform. And it builds on a growing body of research that shows how these platforms can shape people’s political attitudes.

Two different algorithms

The researchers randomly assigned 4,965 active US-based X users to one of two groups.

The first group used X’s default “For You” feed. This features an algorithm that selects and ranks posts it thinks users will be more likely to engage with, including posts from accounts that they don’t necessarily follow.

The second group used a chronological feed. This only shows posts from accounts users follow, displayed in the order they were posted. The experiment ran for seven weeks during 2023.

Users who switched from the chronological feed to the “For You” feed were 4.7 percentage points more likely to prioritize policy issues favored by US Republicans (for example, crime, inflation and immigration). They were also more likely to view the criminal investigation into US President Donald Trump as unacceptable.

They also shifted in a more pro-Russia direction in regards to the war in Ukraine. For example, these users became 7.4 percentage points less likely to view Ukrainian President Volodymyr Zelenskyy positively, and scored slightly higher on a pro-Russian attitude index overall.

The researchers also examined how the algorithm produced these effects.

They found evidence that the algorithm increased the share of right-leaning content by 2.9 percentage points overall (and 2.5 points among political posts), compared with the chronological feed.

It also significantly demoted the share of posts from traditional news organizations‘ accounts while promoting or boosting posts from political activists.

One of the most concerning findings of the study is the longer-term effects of X’s algorithmic feed. The study showed the algorithm nudged users towards following more right-leaning accounts, and that the new following patterns endured even after switching back to the chronological feed.

In other words, turning the algorithm off didn’t simply “reset” what people see. It had a longer-lasting impact beyond its day-to-day effects.

One piece of a much bigger picture

This new study supports findings of similar studies.

For example, a study in 2022, before Elon Musk had bought Twitter and rebranded it as X, found the platform’s algorithmic systems amplified content from the mainstream political right more than the left in six out of the seven countries.

An experimental study from 2025 re-ranked X feeds to reduce exposure to content that expresses antidemocratic attitudes and partisan animosity. They found this shifted feelings towards their political opponents by more than two points on a 0–100 “feeling thermometer.” This is a shift the authors argued would have normally taken about three years to occur organically in the general population.

My own research offers another piece of evidence to this picture of algorithmic bias on X. Along with my colleague Mark Andrejevic, I analyzed engagement data (such as likes and reposts) from prominent political accounts during the final stages of the 2024 US election.

Our findings unearthed a sudden and unusual spike in engagement with Musk’s account after his endorsement of Trump on July 13—the day of the assassination attempt on Trump. Views on Musk’s posts surged by 138%, retweets by 238%, and likes by 186%. This far outstripped increases on other accounts.

After July 13, right-leaning accounts on X gained significantly greater visibility than progressive ones. The “playing field” for attention and engagement on the platform was tilted thereafter towards right-leaning accounts—a trend that continued for the remainder of the time period we analyzed in that study.

[…]

Publication details

Germain Gauthier et al, The political effects of X’s feed algorithm, Nature (2026). DOI: 10.1038/s41586-026-10098-2

Journal information: Nature

Provided by The Conversation

Source: A few weeks of X’s algorithm can make you more right-wing—and it doesn’t wear off quickly

Popular Password managers don’t protect secrets if servers are compromised.

[…]The team, comprised of researchers from ETH Zurich and Università della Svizzera italiana (USI), examined the “zero-knowledge encryption” promises made by Bitwarden, LastPass, and Dashlane, finding all three could expose passwords if attackers compromised servers.

The premise of zero-knowledge encryption is that user passwords are encrypted on their device, and the password manager’s server acts merely as a dumb storage box for the encrypted credentials. Therefore, in the event that the vendor’s servers are controlled by malicious parties, attackers wouldn’t be able to view users’ secrets.

As one of the most popular alternatives to Apple and Google’s own password managers, which together dominate the market, the researchers found Bitwarden was most susceptible to attacks, with 12 working against the open-source product. Seven distinct attacks worked against LastPass, and six succeeded in Dashlane.

The attacks don’t exploit weaknesses in the same way that remote attackers could exploit vulnerabilities and target specific users. Instead, the researchers worked to test each platform’s ability to keep secrets safe in the event they were compromised.

In most cases where attacks were successful, the researchers said they could retrieve encrypted passwords from the user, and in some cases, change the entries.

They used a malicious server model to test all of this – setting up servers that behaved like hacked versions of those used by the password managers. Seven of Bitwarden’s 12 successful attacks led to password disclosure, whereas only three of LastPass’s attacks led to the same end, and one for Dashlane.

All three vendors claim their products come with zero-knowledge encryption. The researchers noted that none of them outline the specific threat model their password manager secures against.

The researchers said: “The majority of our attacks require simple interactions which users or their clients perform routinely as part of their usage of the product, such as logging in to their account, opening the vault and viewing the items, or performing periodic synchronization of data.

“We also present attacks that require more complex user actions, such as key rotations, joining an organization, sharing credentials, or even clicking on a misleading dialog. Although assessing the probability of these actions is challenging, we believe that, within a vast user base, many users will likely perform them.”

In the full paper [PDF], they went on to argue that password managers have escaped deep academic scrutiny until now, unlike end-to-end encrypted messaging apps. It is perhaps due to a perception that password managers are simple applications – deriving keys and then encrypting them. However, their codebases are more complex than that, often offering features such as the ability to share accounts with family members and featuring various ways to maintain backward-compatibility with older encryption standards.

Kenneth Paterson, professor of computer science at ETH Zurich, said “we were surprised by the severity of the security vulnerabilities” affecting the password managers.

“Since end-to-end encryption is still relatively new in commercial services, it seems that no one had ever examined it in detail before.”

[…]

Source: Password managers don’t protect secrets if pwned • The Register

Canada Goose says ShinyHunters only breached old data – why did they not disclose this when it happened then?

Canada Goose says an advertised breach of 600,000 records is an old raid and there are no signs of a recent compromise.

The down-filled jacket purveyor did not answer questions about how old the data is or how it was originally taken, but told us it relates to past customer purcahses.

“Canada Goose is aware that a historical dataset relating to past customer transactions has recently been published online,” a spokesperson said. “At this time, we have no indication of any breach of our own systems. We are currently reviewing the newly released dataset to assess its accuracy and scope, and will take any further steps as may be appropriate.”

“To be clear, our review shows no evidence that unmasked financial data was involved. Canada Goose remains committed to protecting customer information.”

ShinyHunters posted the company’s data for download on February 14 via their leak site. The criminals’ advert for the data claimed there were more than 600,000 records, each containing personally identifiable information, as well as payment/financial details.

The Register reviewed a number of the records available online via a JSON file, and ShinyHunters’ description of the data appears accurate.

It includes names and other usual PII data points, as well as partial payment information and order details, such as price and delivery address.

[…]

Source: Canada Goose says ShinyHunters only breached old data • The Register

Watch how capitalism breaks innovation: OpenAI hires OpenClaw AI agent developer Peter Steinberg

OpenClaw is a huge disruptor in the agentic AI space – it has an actual orchestrator, is super easy to implement and destroys many business models. You can bet that despite all the sounds, the open source repository will be laid to rest and all new development will go in to the closed OpenAI space so they can regain their competitive advantage and maybe actually make some money, despite the best efforts of megalomanic compulsive liar and general poor man’s baddie, Sam Altman.

So this move kills a real gamechanger and moves EU top talent to the US in one go. What great things money does for us. Not.

Peter Steinberger, creator of popular open-source artificial intelligence program OpenClaw, will be joining OpenAI Inc. to help bolster the ChatGPT developer’s product offerings.

“OpenClaw will live in a foundation as an open source project that OpenAI will continue to support,” OpenAI Chief Executive Officer Sam Altman wrote in a post on X Sunday, adding that Steinberger is “joining OpenAI to drive the next generation of personal agents.”

Steinberger wrote in a separate post on his website Saturday that he will be joining OpenAI to be “part of the frontier of AI research and development, and continue building.”

“It’s always been important to me that OpenClaw stays open source and given the freedom to flourish,” Steinberger wrote. “Ultimately, I felt OpenAI was the best place to continue pushing on my vision and expand its reach.”

OpenClaw, previously called Clawdbot and Moltbot, has garnered a cult following since launching in November for its ability to operate autonomously, clearing users’ inboxes, making restaurant reservations and checking in for flights, among other tasks. Users can also connect the tool to messaging apps such as WhatsApp and Slack and direct the agent through those platforms.

“My next mission is to build an agent that even my mum can use,” Steinberger wrote. “That’ll need a much broader change, a lot more thought on how to do it safely, and access to the very latest models and research.”

[…]

 

Source: OpenAI hires OpenClaw AI agent developer Peter Steinberg | Fortune

European Parliament bars lawmakers from AI tools over cloud data concerns

Slowly but very slowly Europeans are starting to understand that funding the US software companies, creating a dependency on them and giving them their data to grab in (which they do) is not a good idea.

The European Parliament has reportedly turned off AI features on lawmakers’ devices amid concerns about content going where it shouldn’t.

According to Politico, staff were notified that AI features on corporate devices (including tablets) were disabled because the IT department could not guarantee data security.

The bone of contention is that some AI assistants require the use of cloud services to perform tasks including email summarization, and so send the data off the device – a challenge for data protection.

It’s a unfortunate for device vendors that promote on-device processing, but the European Parliament’s tech support desk reportedly stated: “As these features continue to evolve and become available on more devices, the full extent of data shared with service providers is still being assessed. Until this is fully clarified, it is considered safer to keep such features disabled.”

The Register contacted the European Parliament for comment.

Data privacy and AI services have not been the greatest of bedfellows. Studies have shown that employees regularly leak company secrets via assistants, and on-device AI services are a focus of vendors amid concerns about exactly what is being sent to the cloud.

The thought of confidential data being sent to an unknown location in the cloud to generate a helpful summary has clearly worried lawmakers, which is why there is a blanket ban. However, the issue has less relevance if the process occurs on the device itself.

The Politico report noted that day-to-day tools, such as calendar applications, are not affected by the edict. The ban is temporary until the tech boffins can clarify what is being shared and where it is going.

[…]

Source: European Parliament bars lawmakers from AI tools • The Register

In a blind test, audiophiles couldn’t tell the difference between audio signals sent through copper wire, a banana, or wet mud

A moderator on diyAudio set up an experiment to determine whether listeners could differentiate between audio run through pro audio copper wire, a banana, and wet mud. Spoiler alert: the results indicated that users were unable to accurately distinguish between these different ‘interfaces.’

Pano, the moderator who built the experiment, invited other members on the forum to listen to various sound clips with four different versions: one taken from the original CD file, with the three others recorded through 180cm of pro audio copper wire, via 20cm of wet mud, through 120cm of old microphone cable soldered to US pennies, and via a 13cm banana, and 120cm of the same setup as earlier.

Initial test results showed that it’s extremely difficult for listeners to correctly pick out which audio track used which wiring setup. “The amazing thing is how much alike these files sound. The mud should sound perfectly awful, but it doesn’t,” Pano said. “All of the re-recordings should be obvious, but they aren’t.”

[…]

Source: In a blind test, audiophiles couldn’t tell the difference between audio signals sent through copper wire, a banana, or wet mud — ‘The mud should sound perfectly awful, but it doesn’t,’ notes the experiment creator | Tom’s Hardware

Research: How Product Images Influence Online Purchase Decisions and When They Don’t

Online shopping continues to grow globally, with total e-commerce sales estimated at $6 trillion by 2024 and consumers in Western Europe making nearly 19 online purchases per year. In this rapidly expanding landscape, online retailers are increasingly investing in interactive and contextual product images to capture customer attention. But do these visual strategies actually work? This is what PhD candidate Rowena Summerlin of Tilburg University  investigated .

Summerlin concludes that product images—especially interactive versions—can indeed influence consumer purchase intentions, but that this effect is context-dependent. According to her research, interactive images, especially for individual consumers, increase the perception of a product’s higher quality, which can strengthen purchase intentions. However, the impact depends on factors such as product price, customer type, and the platform on which the images are displayed. For business customers, the effect appears to be virtually nonexistent.

Analyses of real marketplace data show that contextual images can increase sales on some websites, but have little effect on other platforms. The researchers point out that this is likely related to the differing norms and customs of different online environments. The strongest effects occurred with more expensive products and during peak shopping periods, when consumers themselves are more uncertain about their choices.

Source: Research: How Product Images Influence Online Purchase Decisions and When They Don’t – Emerce

AH-64 Apache Is Getting Proximity Fuzed 30mm Cannon Ammo For Swatting Down Drones

The AH-64 Apache attack helicopter has evolved into a counter-drone platform in recent years — something we have been following closely. While the Israeli Air Force had pioneered this role for the AH-64 for years, the U.S. Army has now formally codified it and added new capabilities in the process. Now, as we had suggested some time ago, the Apache is getting proximity-fuzed 30mm cannon shells for its chin-mounted M230 cannon that will add to its drone-killing arsenal, giving it a cheaper and more plentiful engagement option than some of the alternatives.

Apaches live-fire tested the 30x113mm XM1225 Aviation Proximity Explosive (APEX) ammo last December, according to a recent Army release. The trials occurred at the service’s sprawling Yuma Proving Ground (YPG) in southern Arizona. Multiple test engagements occurred against various types of drone targets.

The specialized APEX ammunition works by detonating only when it is close to an object, then it explodes in a spray of shrapnel. This is critical to shooting down drones as they are small, independently moving targets, and the Apache’s monocle-targeted chin gun isn’t exactly a sniper rifle in terms of precision. At the same time, the rounds could also be used against targets on the surface — including personnel, soft-skinned vehicles, and small boats, for instance — offering unique area effects compared to the Apache’s standard impact-detonating, high-explosive ammunition.

[…]

 

Source: AH-64 Apache Is Getting Proximity Fuzed 30mm Cannon Ammo For Swatting Down Drones

F-35 Software Could Be Jailbreaked Like An iPhone: Dutch Defense Secretary

The F-35’s ‘computer brain,’ including its cloud-based components, could be cracked to accept third-party software updates, just like ‘jailbreaking‘ a cellphone, according to the Dutch State Secretary for Defense. The statement comes as foreign operators of the jets continue to be pressed on what could happen if the United States were ever to cut off support. President Donald Trump’s administration has pursued a number of policies that have resulted in new diplomatic strains with some long-time allies, especially in Europe.

“If, despite everything, you still want to upgrade, I’m going to say something I should never say, but I will anyway: you can jailbreak an F-35 just like an iPhone,” Gijs Tuinman said during an episode of BNR Nieuwsradio‘s “Boekestijn en de Wijk” podcast posted online yesterday, according to a machine translation.

[…]

As we have explored in detail in the past, the F-35 program imposes unique limits on the ability of operators to make changes to the jet’s software, as well as to associated systems on the ground. Virtually all F-35s in service today see software updates come through a cloud-based network, the original version of which is known as the Autonomic Logistics Information System (ALIS). Persistent issues with ALIS have led to the development of a follow-on Operational Data Integrated Network (ODIN), the transition to which is still ongoing.

The ALIS/ODIN network is designed to handle much more than just software updates and logistical data. It is also the port used to upload mission data packages containing highly sensitive planning information, including details about enemy air defenses and other intelligence, onto F-35s before missions and to download intelligence and other data after a sortie.

To date, Israel is the only country known to have successfully negotiated a deal giving it the right to install domestically-developed software onto its F-35Is, as well as otherwise operate its jets outside of the ALIS/ODIN network. The Israelis also have the ability to conduct entirely independent depot-level maintenance, something we will come back to later.

Issues with ALIS, as well as concerns about the transfer of nationally sensitive information within the network, have led certain operators, including the Netherlands, to firewall off aspects of their software reprogramming activities in the past.

[…]

TWZ previously explored many of these same issues in detail last year, amid a flurry of reports about the possibility that F-35s have some type of discreet ‘kill switch’ built in that U.S. authorities could use to remotely disable the jets. Rumors of this capability are not new and remain completely unsubstantiated.

At that time, we stressed that a ‘kill switch’ would not even be necessary to hobble F-35s in foreign service. At present, the jets are heavily dependent on U.S.-centric maintenance and logistics chains that are subject to American export controls and agreements with manufacturer Lockheed Martin. Just reliably sourcing spare parts has been a huge challenge for the U.S. military itself, as you can learn more about in this past in-depth TWZ feature. F-35s would be quickly grounded without this sustainment support.

[…]

Source: F-35 Software Could Be Jailbreaked Like An iPhone: Dutch Defense Secretary

Studying Genes that existed before all life on Earth (LUCA – Last Universal Common Ancestor)

Every organism alive today traces its lineage back to a single shared ancestor that lived about four billion years ago. Scientists refer to this organism as the “last universal common ancestor,” and it represents the earliest form of life that can currently be examined using established evolutionary methods.

Research on this ancient ancestor shows that many features seen in modern life were already in place at that time. Cells already had membranes, and genetic information was stored in DNA. Because these essential traits were already established, scientists seeking to understand how life first took shape must look even further back in time, to evolutionary events that occurred before this shared ancestor existed.

Studying Life Before the First Common Ancestor

In a study published in the journal Cell Genomics, researchers Aaron Goldman (Oberlin College), Greg Fournier (MIT), and Betül Kaçar (University of Wisconsin-Madison) describe a way to explore that earlier period of evolution. “While the last universal common ancestor is the most ancient organism we can study with evolutionary methods,” said Goldman, “some of the genes in its genome were much older.” The team focuses on a special group of genes called “universal paralogs,” which preserve evidence of biological changes that took place before the last universal common ancestor.

A paralog is a group of related genes that appear multiple times within a single genome. Humans provide a clear example. Our DNA contains eight different hemoglobin genes, all of which produce proteins that carry oxygen through the blood. These genes all originated from a single ancestral globin gene that existed around 800 million years ago. Over long periods of time, repeated copying errors produced extra versions of the gene, and each copy gradually developed its own specialized role.

What Makes Universal Paralogs Unique

Universal paralogs are much rarer. These gene families appear in at least two copies in the genomes of nearly all living organisms. Their widespread presence suggests that the original gene duplication occurred before the last universal common ancestor emerged. Those duplicated genes were then passed down through countless generations and remain present in life today.

Because of this deep evolutionary reach, the authors argue that universal paralogs are a critical yet often overlooked resource for studying the earliest history of life on Earth. This approach is becoming more practical as new AI-based techniques and AI-optimized hardware make it easier to analyze ancient genetic patterns in detail.

“While there are precious few universal paralogs that we know,” says Goldman, “they can give us a lot of information about what life was like before the time of the last universal common ancestor.” Fournier adds, “The history of these universal paralogs is the only information we will ever have about these earliest cellular lineages, and so we need to carefully extract as much knowledge as we can from them.”

Clues to the First Cellular Functions

In their analysis, Goldman, Fournier, and Kaçar reviewed all known universal paralogs. Every one of these genes plays a role in either building proteins or moving molecules across cell membranes. This finding suggests that protein production and membrane transport were among the first biological functions to evolve.

The researchers also emphasize the importance of reconstructing the ancient forms of these genes. In one study from Goldman’s lab at Oberlin, scientists examined a universal paralog family involved in inserting enzymes and other proteins into cell membranes. Using standard methods from evolutionary biology and computational biology, they reconstructed the protein produced by the original ancestral gene.

Their results showed that this simpler, ancient protein could still attach to cell membranes and interact with the machinery that makes proteins. It likely helped early proteins embed themselves into primitive membranes, offering insight into how the earliest cells may have operated

[…]

Source: Scientists find genes that existed before all life on Earth | ScienceDaily

Claude agent allows allows Google Calendar zero click exploits

LayerX, a security company based in Tel Aviv, says it has identified a zero-click remote code execution vulnerability in Claude Desktop Extensions that can be triggered by processing a Google Calendar entry.

Informed of the issue – worthy of a CVSS score of 10/10, LayerX argues – Anthropic has opted not to address it.

Claude Desktop Extensions, recently renamed MCP Bundles, are packaged applications that extend the capabilities of Claude Desktop using the Model Context Protocol, a standard way to give generative AI models access to other software and data. Stored as .dxt files (with Anthropic transitioning the format to .mcpb), they are ZIP archives that package a local MCP server alongside a manifest.json file describing the extension’s capabilities.

The Claude Desktop Extensions hub webpage claims the extensions are secure and undergo security review. “Extensions run in sandboxed environments with explicit permission controls, and enterprise features include Group Policy support and extension blocklisting,” the FAQs explain.

LayerX argues otherwise. According to principal security researcher Roy Paz, Claude Desktop extensions “execute without sandboxing and with full privileges on the host system.”

Paz told The Register, “By design, you cannot sandbox something if it is expected to have full system access. Perhaps they containerize it but that’s not the same thing. Relative to Windows Sandbox, Sandboxie or VMware, Claude DXT’s container falls noticeably short of what is expected from a sandbox. From an attacker’s point of view it is the equivalent of setting your building code to 1234 and then leaving it unlocked because locking it would prevent delivery people from coming in and out.”

Paz says that the vulnerability arises from the fact that Claude will process input from public-facing connectors like Google Calendar and that the AI model also decides on its own which installed MCP connectors should be used to fulfill that request.

The result is that when extensions with risky capabilities like command line access are present, extensions with less concerning capabilities can present an attack vector. In this instance, a Google Calendar event was used to make malicious instructions available to Claude, which the model then used to download, compile, and execute harmful code.

“There are no hardcoded safeguards that prevent Claude from constructing a malformed or dangerous workflow,” Paz claims. “Consequently, data extracted from a relatively low-risk connector (Google Calendar) can be forwarded directly into a local MCP server with code-execution capabilities.”

What Paz is describing is a form of indirect prompt injection – AI models that read webpages, other documents, or interface elements may interpret that content as instructions. This is a known, unresolved problem, which may explain Anthropic’s apparent disinterest in the LayerX report.

[…]

Source: Claude add-on turns Google Calendar into malware courier • The Register

Specific cognitive training has ‘astonishing’ effect on dementia risk

[…]

a 20-year study of 2832 people aged 65 and older suggests specific exercises may offer benefits.

The participants were randomly assigned to one of three intervention groups or to a control group. One group engaged in speed training, using a computer-based task called Double Decision, which briefly displays a car and a road sign within a scene before they disappear. Participants must then recall which car appeared and where the sign was located. The task is adaptive, becoming harder as performance improves.

The other two groups took part in memory or reasoning training, learning strategies designed to improve those skills.

The participants completed two 60-75-minute sessions per week for five weeks. About half of those in each group were then randomly assigned to receive booster sessions – four additional 1-hour sessions at the end of the first year, and another four at the end of the third year.

Twenty years later, the researchers assessed US Medicare claims data to determine how many of the participants had been diagnosed with dementia. They found that those who completed speed training with booster sessions had a 25 per cent lower risk of diagnosis with Alzheimer’s or a related dementia compared with the control group. No other group – including speed training without boosters – showed a significant change in risk. “The size of the effect is really quite astonishing,” says Albert.

[…]

Source: Specific cognitive training has ‘astonishing’ effect on dementia risk | New Scientist

Ranked: Defense Spending Per Capita, by Country

Ranked defense spending per person shows which countries invest most in their military on a per capita basis.

 

Global military spending is often measured in massive national budgets, where the United States and China dominate the conversation. But looking at defense spending on a per-person basis tells a very different story, one where smaller countries rise to the top.

This visualization ranks major countries by how much they spent on defense per citizen in 2024, revealing which nations invest the most in military power relative to their population — and how countries like the U.S. compare when spending is measured per person rather than in total dollars.

Data comes from the Stockholm International Peace Research Institute (SIPRI).

Why Israel Leads the World in Defense Spending Per Capita

Israel ranks first, spending nearly $5,000 per person on defense in 2024. This figure reflects the country’s ongoing security challenges and mandatory military service. Despite a total defense budget of $47 billion—small compared to global superpowers—the per-person cost is unmatched.

Below are the world’s 30 largest military spenders, ranked by defense spending per capita:

Several smaller or wealthy nations rank near the top of the list. Singapore spends over $2,500 per person, driven by its strategic location and emphasis on technological superiority. Norway and Denmark also appear in the top 10, supported by high incomes and growing commitments to NATO.

How Major Powers Compare

The U.S. ranks second overall, with nearly $2,900 spent per person, reflecting both its enormous military budget and large population. China, by contrast, ranks much lower at $221 per capita despite spending more than $300 billion in total.

Meanwhile, European powers like Germany, France, and the U.K. cluster in the middle of the ranking, balancing defense commitments with larger populations.

Source: Ranked: Defense Spending Per Capita, by Country

Discord will require a face scan or ID for full access next month

The creeps staring into your bedroom brigade is winning and age verification is being normalised by a group of goons who really really want to know every poop you take. It’s a dangerous and insanely bad idea, but fortunately people are starting to wise up.

Discord announced on Monday that it’s rolling out age verification on its platform globally starting next month, when it will automatically set all users’ accounts to a “teen-appropriate” experience unless they demonstrate that they’re adults.

“For most adults, age verification won’t be required, as Discord’s age inference model uses account information such as account tenure, device and activity data, and aggregated, high-level patterns across Discord communities. Discord does not use private messages or any message content in this process,” Savannah Badalich, Discord’s global head of product policy, tells The Verge.

Users who aren’t verified as adults will not be able to access age-restricted servers and channels, won’t be able to speak in Discord’s livestream-like “stage” channels, and will see content filters for any content Discord detects as graphic or sensitive. They will also get warning prompts for friend requests from potentially unfamiliar users, and DMs from unfamiliar users will be automatically filtered into a separate inbox.

Direct messages and servers that are not age-restricted will continue to function normally, but users won’t be able to send messages or view content in an age-restricted server until they complete the age check process, even if it’s a server they were part of before age verification rolled out. Badalich says those servers will be “obfuscated” with a black screen until the user verifies they’re an adult. Users also won’t be able to join any new age-restricted servers without verifying their age.

Discord asking a user for age verification after opening a restricted server
Discord asking a user for age verification to unblur sensitive content
1/2Unverified users won’t be able to enter age-restricted servers. Image: Discord

Discord’s global age verification launch is part of a wave of similar moves at other online platforms, driven by an international legal push for age checks and stronger child safety measures. This is not the first time Discord has implemented some form of age verification, either. It initially rolled out age checks for users in the UK and Australia last year, which some users figured out how to circumvent using Death Stranding’s photo mode. Badalich says Discord “immediately fixed it after a week,” but expects users will continue finding creative ways to try getting around the age checks, adding that Discord will “try to bug bash as much as we possibly can.”

It’s not just teens trying to cheat the system who might attempt to dodge age checks. Adult users could avoid verifying, as well, due to concerns around data privacy, particularly if they don’t want to use an ID to verify their age. In October, one of Discord’s former third-party vendors suffered a data breach that exposed users’ age verification data, including images of government IDs.

If Discord’s age inference model can’t determine a user’s age, a government ID might still be required for age verification in its global rollout. According to Discord, to remove the new “teen-by-default” changes and limitations, “users can choose to use facial age estimation or submit a form of identification to [Discord’s] vendor partners, with more options coming in the future.”

The first option uses AI to analyze a user’s video selfie, which Discord says never leaves the user’s device. If the age group estimate (teen or adult) from the selfie is incorrect, users can appeal it or verify with a photo of an identity document instead. That document will be verified by a third party vendor, but Discord says the images of those documents “are deleted quickly — in most cases, immediately after age confirmation.”

A Discord user profile showing a “teen” age group and age verification options
Users can view and update their age group from their profile. Image: Discord

Badalich also says after the October data breach, Discord “immediately stopped doing any sort of age verification flows with that vendor” and is now using a different third-party vendor. She adds, “We’re not doing biometric scanning [or] facial recognition. We’re doing facial estimation. The ID is immediately deleted. We do not keep any information around like your name, the city that you live in, if you used a birth certificate or something else, any of that information.”

“A majority of people are not going to see a change in their experience.”

Badalich goes on to explain that the addition of age assurance will mainly impact adult content: “A majority of people on Discord are not necessarily looking at explicit or graphic content. When we say that, we’re really talking about things that are truly adult content [and] age inappropriate for a teen. So, the way that it will work is a majority of people are not going to see a change in their experience.”

Even so, there’s still a risk that some users will leave Discord as a result of the age verification rollout. “We do expect that there will be some sort of hit there, and we are incorporating that into what our planning looks like,” Badalich says. “We’ll find other ways to bring users back.”

Source: Discord will require a face scan or ID for full access next month | The Verge

If you want to look at more people blowing up about age verification you can try this Slashdot thread: Discord Will Require a Face Scan or ID for Full Access Next Month