Cybercriminals armed with off-the-shelf generative AI tools compromised more than 600 internet-exposed FortiGate firewalls across 55 countries in just over a month, according to a new incident report from AWS.
The campaign, which ran from mid-January to mid-February, relied less on clever zero-days and more on the equivalent of trying every digital door handle – just at machine speed, with AI lending a hand behind the scenes.
AWS says the financially motivated Russian-speaking crew behind the campaign scanned for exposed FortiGate management interfaces, tried commonly reused or weak credentials, and then hoovered up configuration files once inside, giving them a roadmap of victim networks.
The cloud giant’s security team says the actor used multiple commercial AI tools to generate attack playbooks, scripts, and operational notes, effectively allowing a relatively low-skilled outfit to run a campaign that would previously have required more people or time. Investigators even found evidence of AI-generated code and planning artifacts on compromised infrastructure, suggesting the tools were embedded throughout the workflow rather than just used for the odd bit of scripting.
“The volume and variety of custom tooling would typically indicate a well-resourced development team,” said CJ Moses, CISO at Amazon. “Instead, a single actor or very small group generated this entire toolkit through AI-assisted development.”
Once the firewall was cracked, the attackers pulled configuration files containing administrator and VPN credentials, network topology details, and firewall rules. From there, they moved deeper into environments, going after Active Directory, dumping credentials, and probing for ways to move laterally. Backup systems, including Veeam servers, were also on the shopping list.
AWS says the tooling it observed was functional but rough around the edges, with simplistic parsing logic and the sort of redundant comments that suggest a machine wrote the first draft. That didn’t stop it from being effective enough for broad automation, though the miscreants reportedly tended to abandon targets that put up too much resistance and move on to softer ones, reinforcing the idea that volume rather than finesse was the winning strategy.
Geographically, the activity was opportunistic rather than tightly targeted, with victims spread across multiple regions, including parts of Europe, Asia, Africa, and Latin America. Clusters of activity suggested that some compromises may have enabled access to managed service providers or larger shared environments, amplifying downstream risk.
The report leans heavily on the idea that basic hygiene – keeping management interfaces off the public internet, enforcing multi-factor authentication, and not recycling passwords – would have shut down much of the activity before it got going.
The findings land just weeks after Google warned that criminals are increasingly wiring generative AI directly into their operations, including its own Gemini AI chatbot, for tasks ranging from reconnaissance and target profiling to phishing and malware development.
[…] Cybersecurity researchers have confirmed they discovered a massive “treasure trove” of unsecured data, with information on individuals from 26 countries – the U.S. at the top of the list – that appears to be linked to an AI-powered identity verification service. Totalling almost a terabyte of data and 1 billion records, the exposed information included national IDs, full names, addresses, phone numbers, and email addresses.
Just when you think things couldn’t get any worse, those same researchers have now disclosed yet another AI-related data leak, this time impacting users of an Android app that deploys AI to give selfies “cinematic makeovers”. While not in quite the same league as the first, that will be cold comfort if your photos and videos were among the 2 million left exposed.
Unsecured AI Service Know Your Customer Data Exposed In 1 Billion Record Leak
Given the sheer number of published reports concerning data leaks – most recently, 48 million Gmail usernames and passwords exposed as part of a 149 million-record database event – there is a danger that we become used to such incidents and shrug them off. When an exposed database contains 1 billion records, with 203 million of them impacting the U.S., questions need to be asked and notice taken. The Cybernews research team has confirmed that the databases, a collection of them within a single exposed MongoDB instance, were discovered on November 11, and the company concerned, which they said was an AI-powered digital identity verification provider called IDMerit, was contacted on November 12. The leak was plugged by the company the same day.
Currently, I can’t check my Bluesky direct messages until I’ve allowed the Epic Games-owned KWS to look at either my bank card, my ID, or my wizened visage. As I’m based in the UK, it’s not just Bluesky I’ve got to worry about either, with similar verification processes now present on Reddit, Discord, and even my partner’s Xbox.
This is all due to the Online Safety Act, which came into effect in the UK last year. For many, these age checks are an annoyance at best—but they also represent something that will have ramifications far beyond the British Isles. The UK’s Act was designed in part to ensure children in the UK could not easily access “harmful content.” This is a broad term that includes but is not limited to pornography, content that promotes “self-harm, eating disorders, or suicide,” and “bullying”.
To comply with the act and differentiate children from adults, many platforms have opted for age-gates like the one I’m encountering on Bluesky. Almost 70% of Brits surveyed shortly after the Online Safety Act came into effect said they supported it…though 64% didn’t think it would be all that effective. Indeed, I could log into a VPN to get past the UK-based Bluesky block—though unfortunately for me, I am stubborn, lazy, and cheap (apologies if you’ve been trying to get ahold of me).
As Jacob has previously outlined, there are better ways to implement age checks. As it stands, though, I’m not naive enough to think the data I keep elsewhere is in hands that are any safer. However, not submitting to an age assurance check makes for one less point of failure from which my likeness or even my official documents can leak out.
On the one hand, yeah, I’d rather children growing up today didn’t see all the things I saw thanks to having unfettered internet access throughout the early oughts.
Why not? I survived rotten.com and goatse – but then again, the internet didn’t have much in the way of fake news, hate speech or echo chambers…
I’d also rather young’uns now didn’t have to experience all the harassment I experienced at the hands of my own peers, newly empowered by that unfettered internet access.
On the other hand, the internet answered a lot of questions I was absolutely not going to ask my parents; when I see a vague term like “harmful content” I do have to wonder what genuinely educational resources on the wider internet—say, regarding art history or personal health—might end up age-gated because someone somewhere has decided they’re tantamount to ‘pornography.’
I’m only just the other side of 30, but Section 28 was still in effect for some of my school years. For those who don’t know, Section 28 was a law that prevented schools in England, Scotland, and Wales from doing anything that could be interpreted as “intentionally [promoting] homosexuality or [publishing] material with the intention of promoting homosexuality”. So, until the law was repealed in the early 2000s, a lot of schools simply pretended LGBTQIA+ folks didn’t exist. The internet, for all of its faults, helped to fill that deafening silence for me.
Even so, I remember there being content blocks back in my day, too, and I know I found more than a few ways around those. Indeed, if we take just Discord today, our James has found not one but two different ways to fool its face scans—though the platform may already be formulating a counter to these workarounds.
Shortly after issuing assurances that not all users will even have to undergo an age check, a since-edited support article revealed that some UK users “may be part of an experiment where your information will be processed by an age-assurance vendor, Persona.” Amid reports of folks easily fooling its primary third-party vendor’s age verification checks, Discord may have been seeking to diversify its defences.
Persona’s investors include Peter Thiel, co-founder of ICE’s premier surveillance provider, Palantir. Though Persona and Palantir are two totally separate companies that do not share either data or operations, that’s still a pretty grimy connection. Not least of all because earlier this week, the US Department of Homeland Security reportedly subpoenaed a number of major online platforms—including Discord, Reddit, Google, and Meta—in order to obtain the personal details of accountholders who had been critical of ICE or identified the locations of its agents. We don’t yet know if Discord complied, though we have reached out for comment.
There is an even worse wrinkle in the Discord-Persona ‘experiment’: while Discord had previously said that data like age verification face scans would only be stored and processed on users’ own devices, those who ended up part of the Persona experiment may have their information “temporarily stored for up to 7 days, then deleted.”
All of that said, Persona is not part of Discord’s long-term strategy, with the platform telling Kotaku earlier this week that its dealings with the vendor were part of a “limited test” that has since been concluded. That leaves K-id’s on-device processing in effect, but even that doesn’t necessarily end the privacy nightmare. Data breaches usually leave platforms scrambling for user good will, but Discord seems all too happy to keep walking into rakes.
One could jump ship and shop around for a free Discord alternative as I recently did, but all of the platforms I tested will likely have to implement some sort of age assurance check if they haven’t already in order to continue serving users based in the UK in the future. That doesn’t mean I’ll be letting them scan my face any time soon; I may have to deploy Norman Reedus and his funky foetus before long as third-party age verification vendors have done little to earn my trust or a gander at my actual face.
This article is riddled with huge assumptions about causality and the amplification that social media can offer, completely unhampered by any research. But the actual research that they do have interspersed in the article is interesting.
[…]Discovering that an ordinary purchase may be tied to exploitation or environmental damage creates a jolt of personal responsibility. In our research, we found that when environmental consequences are clearly linked to people’s own buying choices, many are willing to switch products—especially when credible alternatives exist.
But guilt is private. It nudges personal behavior. It does not automatically reshape systems. The shift happens when private discomfort becomes public voice.
Consumers are often also the first to make hidden environmental harms visible. They post evidence on social media. They question corporate claims. They compare sustainability promises with independent reporting. They organize petitions, boycotts and review campaigns. By shining a spotlight on the truth, the scrutiny shifts from shoppers to brands.
That shift matters because modern brands depend on trust. Reputation is an asset. When sustainability claims are publicly challenged, credibility is at risk. Research in organisational behaviour shows that firms respond quickly to threats to legitimacy. Reputational damage affects customer loyalty, investor confidence and regulatory attention.
[…]
When the gap between what companies say and what they do becomes visible, maintaining that gap becomes harder.
Our research explores how that visibility can be strengthened. The findings were clear. When environmental and social consequences are personalized and traceable, sustainability feels less distant. People see both their own role and the role of particular firms. That dual awareness encourages two responses: behavioral change driven by guilt and corporate accountability driven by shame.
Shame works because it is social. Brands care about how they are seen. When the negative environmental and social effects of supply chains can be publicly connected to named products, corporate narratives become contestable in real time.
A broken motor in an automated machine can bring production on a busy factory floor to a halt. If engineers can’t find a replacement part, they may have to order one from a distributor hundreds of miles away, leading to costly production delays.
It would be easier, faster, and cheaper to make a new motor onsite, but fabricating electric machines typically requires specialized equipment and complicated processes, which restricts production to a few manufacturing centers.
In an effort to democratize the manufacturing of complex devices, MIT researchers have developed a multimaterial 3D-printing platform that could be used to fully print electric machines in a single step.
They designed their system to process multiple functional materials, including electrically conductive materials and magnetic materials, using four extrusion tools that can handle varied forms of printable material. The printer switches between extruders, which deposit material by squeezing it through a nozzle as it fabricates a device one layer at a time.
The researchers used this system to produce a fully 3D-printed electric linear motor in a matter of hours using five materials. They only needed to perform one post-processing step for the motor to be fully functional.
The assembled device performed as well or better than similar motors that require more complex fabrication methods or additional post-processing steps.
In the long run, this 3D printing platform could be used to rapidly fabricate customizable electronic components for robots, vehicles, or medical equipment with much less waste.
The SpaceX Falcon 9 rocket that burned up over Europe last year left a massive lithium plume in its wake, say a group of scientists. They warn the disaster is likely a sign of things to come as Earth’s atmosphere continues to become a heavily trafficked superhighway to space.
In a paper published Thursday, an international group of scientists reports what they say is the first measurement of upper-atmosphere pollution resulting from the re-entry of space debris, as well as the first time ground-based light detection and ranging (lidar) has been shown to be able to detect space debris ablation.
The measurements stem from a SpaceX Falcon 9 upper stage that sprang an oxygen leak about a year ago, sending it into an uncontrolled re-entry; it broke up and rained debris down on Poland. The rocket not only littered farm fields, but also injected lithium into the Mesosphere and Lower Thermosphere (MLT), where ground-based sensors detected a tenfold increase at an altitude of 96 km about 20 hours after the rocket re-entered the atmosphere, according to the paper.
Lithium was selected for the study because of its considerable presence in spacecraft, both in lithium-ion batteries and lithium-aluminum alloy used in the construction of spacecraft. A single Falcon 9 upper stage, like the one that broke up over Poland and released the lithium plume, is estimated to contain 30 kg of lithium just in the alloy used in tank walls.
By contrast, around 80 grams of lithium enter the atmosphere per day from cosmic dust particles, the researchers noted.
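Those two figures make for simple back-of-envelope arithmetic: a single upper stage carries roughly a year’s worth of the natural lithium influx.

```python
# Back-of-envelope comparison using the figures quoted above.
stage_lithium_g = 30_000       # ~30 kg of lithium in one Falcon 9 upper stage's tank alloy
natural_influx_g_per_day = 80  # lithium entering the atmosphere daily from cosmic dust

days_equivalent = stage_lithium_g / natural_influx_g_per_day
print(f"One upper stage ≈ {days_equivalent:.0f} days of natural lithium influx")
# prints: One upper stage ≈ 375 days of natural lithium influx
```

That is before counting the batteries, which the paper suggests hold additional lithium on top of the structural alloy.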
“This finding supports growing concerns that space traffic may pollute the upper atmosphere in ways not yet fully understood,” the paper notes, adding that the continued re-entry of spacecraft and satellites is of particular concern given how the composition of spacecraft is different from natural meteoroids.
“Satellites and rocket stages introduce engineered materials such as aluminium alloys, composite structures, and rare earth elements from onboard electronics, substances rarely found in natural extraterrestrial matter,” the paper explained. “The consequences of increasing pollution from re-entering space debris on radiative transfer, ozone chemistry, and aerosol microphysics remain largely unknown.”
The effect of spacecraft and satellite re-entry on Earth’s atmosphere has been a growing concern for astrophysicists like Harvard sky-watcher Jonathan McDowell, who has voiced concerns to The Register similar to those the European scientists raise in their paper.
Last week, Discord users reported seeing prompts to submit personal information to Persona, a third-party age-verification service. As Discord commits to universal age-verification, the new measures have come under intense scrutiny after previous security failures. Now a trio of hacktivists say they’ve successfully breached Persona, getting a closer look at how the company uses submitted biometrics. They say their findings raise alarms beyond the possibility of leaks.
According to The Rage, Persona’s front-end security left a lot to be desired. Worse, however, were investigative findings that suggested Persona’s surveillance of the users whose data it collected was way more sprawling than originally believed.
“It was initially meant to be a passive recon investigation,” writes vmfunc, a cybersecurity researcher and one of the hackers, “that quickly turned into a rabbit hole deep dive into how commercial AI and federal government operations work together to violate our privacy every waking second.”
On top of finding it surprisingly easy to access data gathered by Persona, the research showed that faces and biometrics were not just being scanned for age verification, but flagged for suspicious behavior and bounced off watchlists as well. To some, particularly those who don’t worry about their face being deemed “suspicious,” this may not sound like an Orwellian level of intrusion, until you remember Persona’s full network.
Persona received $150 million in 2021 from the Founders Fund, a long-running tech investor group headed by Peter Thiel. Thiel’s main business, on top of palling around in Jeffrey Epstein’s emails and waiting for the antichrist, is Palantir, an intentionally ominously-named data brokering service that is currently peddling user information to support ICE raids. The findings of vmfunc and co’s research don’t directly tether Persona and Discord’s operations to Palantir or Thiel, but it wouldn’t be conspiratorial to point out that all this data seems to be funnelling along similar slopes.
Trust but verify
Persona has confirmed the breach, with CEO Rick Song responding and even thanking the hackers for flagging the security exploit. This has not, however, tempered the hacktivists’ concerns about how the user information is ultimately being used.
“Transparently, we are actively working on a couple of potential contracts which would be publicly visible if we move forward,” writes Christie Kim, chief operating officer at Persona, in an email regarding the security breach and speculation around Discord. “However, these engagements are strictly for workforce account security of government employees and do not include ICE or any agency within the Department of Homeland Security.”
After the alarm was initially raised about Persona, Discord claimed its work with the Thiel-backed firm was only temporary, and that it didn’t have new contracts with it moving forward. It also promised user info was being wiped from servers within seven days of being gathered.
A software engineer’s earnest effort to steer his new DJI robot vacuum with a video game controller inadvertently granted him a sneak peek into thousands of people’s homes.
While building his own remote-control app, Sammy Azdoufal reportedly used an AI coding assistant to help reverse-engineer how the robot communicated with DJI’s remote cloud servers. But he soon discovered that the same credentials that allowed him to see and control his own device also provided access to live camera feeds, microphone audio, maps, and status data from nearly 7,000 other vacuums across 24 countries. The backend security bug effectively exposed an army of internet-connected robots that, in the wrong hands, could have turned into surveillance tools, all without their owners ever knowing.
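As described, the flaw fits the classic “broken object-level authorization” pattern: the backend verified that the caller had valid credentials, but never checked that the requested device actually belonged to them. A minimal sketch of the vulnerable versus fixed server logic (all names and data hypothetical; DJI’s actual API is not public):

```python
# Hypothetical backend handlers illustrating broken object-level authorization (BOLA).
DEVICE_OWNERS = {"robot-001": "alice", "robot-002": "bob"}  # toy ownership table
FEEDS = {"robot-001": "alice's living room feed", "robot-002": "bob's kitchen feed"}

def get_feed_vulnerable(user: str, device_id: str) -> str:
    # The caller is authenticated elsewhere, but ownership is never checked:
    # any valid credential can pull any device's camera feed.
    return FEEDS[device_id]

def get_feed_fixed(user: str, device_id: str) -> str:
    # Object-level check: the requested device must belong to the caller.
    if DEVICE_OWNERS.get(device_id) != user:
        raise PermissionError("device does not belong to this user")
    return FEEDS[device_id]
```

In the vulnerable version, “alice” can request `robot-002` and receive bob’s feed; the fixed version rejects the request outright.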
The DJI Romo. Image: DJI
Luckily, Azdoufal chose not to exploit that. Instead, he shared his findings with The Verge, which quickly contacted DJI to report the flaw. While DJI tells Popular Science the issue has been “resolved,” the dramatic episode echoes what cybersecurity experts have long warned: internet-connected robots and other smart home devices present attractive targets for hackers.
The Stop Killing Games campaign is evolving into more than just a movement. In a YouTube video, the campaign’s creator, Ross Scott, explained that organizers are planning to establish two non-governmental organizations, one for the European Union and another for the US. According to Scott, these NGOs would allow for “long-term counter lobbying” when publishers end support for certain video games.
“Let me start off by saying I think we’re going to win this, namely the problem of publishers destroying video games that you’ve already paid for,” Scott said in the video. According to Scott, the NGOs will work on getting the original Stop Killing Games petition codified into EU law, while also pursuing more watchdog actions, like setting up a system to report publishers for revoking access to purchased video games.
The Stop Killing Games campaign started as a reaction to Ubisoft’s delisting of The Crew from players’ libraries. The controversial decision stirred up concerns about how publishers have the ultimate say on delisting video games. After crossing a million signatures last year, the movement’s leadership has been busy exploring the next steps.
According to Scott, the campaign leadership will meet with the European Commission soon, but is also working on a 500-page legal paper that reveals some of the industry’s current controversial practices. In the meantime, the ongoing efforts have led to a change of heart from Ubisoft since the publisher updated The Crew 2 with an offline mode.
Because IDMerit is an AI-powered KYC (Know Your Customer) provider, the data it collects is incredibly sensitive. The unsecured 1-terabyte database didn’t just leak passwords—it leaked the core personal identifiers used for your financial and digital life. The following structured data was left open for anyone to download:
Full names
Addresses
Post codes
Dates of birth
National IDs
Phone numbers
Genders
Email addresses
Telco metadata
Breach status and social profile annotations
The last data point – breach status and social profile annotations – could refer to a database identifier indicating whether the data originated from a data breach or a leaked database. However, at this point, the true meaning of the data point is unclear. The team noted that this specific data point was present only in some regions.
“At this scale, downstream risks include account takeovers, targeted phishing, credit fraud, SIM swaps, and long-tail privacy harms. Industry-wide, the case underlines how third-party identity vendors have become critical infrastructure and can become single points of catastrophic failure,” our team explained.
Who is IDMerit and How Did This Happen?
Our team believes the exposed database belongs to IDMerit, an AI-powered digital identity verification solutions provider. The company serves the fintech and financial services sectors, helping businesses with real-time verification tools. KYC (Know Your Customer) practices are a global norm for users to verify their identities when setting up various accounts.
Our researchers noticed the exposed instance on November 11th, 2025 and immediately contacted the company, which promptly secured the database. While there is no current evidence of malicious misuse, automated crawlers set up by threat actors constantly prowl the web for exposed instances, downloading them almost instantly once they appear.
Global data leak spans multiple countries
What’s most striking about the IDMerit data leak is its scale and global geography, with a billion records spanning 26 countries. Several databases appeared to contain overlapping slices for the same country. However, our team believes most of the records were unique.
The country with the most exposed records was the United States, having over 203 million records leaked. The US was followed by Mexico (124M) and the Philippines (72M). Behind the first three, we see a trio of European nations: Germany (61M), Italy (53M), and France (53M).
The U.S. State Department is reportedly working on an online portal that would allow people in Europe and other regions to access content banned by their governments. The move comes at a time when conservative figures like Elon Musk and J.D. Vance have railed against European attempts to clamp down on hate speech, terrorist propaganda, and revenge porn.
Reuters reported Wednesday, citing unnamed sources, that the initiative is intended to fight censorship and could include a virtual private network (VPN) feature.
The portal would reportedly be hosted at Freedom.gov. The site currently displays a landing page featuring a small animation of Paul Revere on horseback above the words “Freedom is Coming.” Smaller text below reads, “Information is power. Reclaim your human right to free expression. Get Ready.”
[…]
Reuters reported that the portal was expected to launch at the conference, but was delayed.
“We don’t comment on draft laws, and that’s what it is,” European Commission Spokesperson Thomas Regnier said when asked about the portal during a press briefing today. “Let me say that the Commission does not block access to websites. It’s up to national authorities to do this kind of thing. If a website breaches EU law or international law, talking about sites which promote hate speech, for example, or have terrorist content, obviously that does not belong in Europe. That’s why we have a regulation on digital services, the DSA, which protects freedom of expression.”
[…]
Ironically, The Guardian reported today that DOGE cuts to the State Department and U.S. Agency for Global Media’s Internet Freedom program have effectively gutted the program.
The initiative funded grassroots tools to help people bypass government internet controls worldwide. It distributed over $500 million over the past decade but issued no funding in 2025, according to The Guardian.
Long-term preservation of digital information is vital for safeguarding the knowledge of humanity for future generations. Existing archival storage solutions, such as magnetic tapes and hard disk drives, suffer from limited media lifespans that render them unsuitable for long-term data retention [1-3]. Optical storage approaches, particularly laser writing in robust media such as glass, have emerged as promising alternatives with the potential for increased longevity. Previous work [4-16] has predominantly optimized individual aspects such as data density but has not demonstrated an end-to-end system, including writing, storing and retrieving information. Here we report an optical archival storage technology based on femtosecond laser direct writing in glass that addresses the practical demands of archival storage, which we call Silica. We achieve a data density of 1.59 Gbit mm⁻³ in 301 layers for a capacity of 4.8 TB in a 120 mm square, 2 mm thick piece of glass. The demonstrated write regimes enable a write throughput of 25.6 Mbit s⁻¹ per beam, limited by the laser repetition rate, with an energy efficiency of 10.1 nJ per bit. Moreover, we extend the storage ability to borosilicate glass, offering a lower-cost medium and reduced writing and reading complexity. Accelerated ageing tests on written voxels in borosilicate suggest data lifetimes exceeding 10,000 years.
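The abstract’s headline figures can be cross-checked with quick arithmetic (figures as quoted; 1 TB taken as 8×10¹² bits):

```python
# Sanity-check the quoted Silica figures: density, capacity, throughput, energy.
bits = 4.8e12 * 8                  # 4.8 TB capacity expressed in bits
density = 1.59e9                   # stated density: bits per mm^3
written_mm3 = bits / density       # implied written volume
blank_mm3 = 120 * 120 * 2          # 120 mm square, 2 mm thick blank
write_s = bits / 25.6e6            # single-beam write time at 25.6 Mbit/s
energy_kj = bits * 10.1e-9 / 1e3   # total write energy at 10.1 nJ per bit

print(f"written volume ≈ {written_mm3:,.0f} mm³ of a {blank_mm3:,} mm³ blank")
print(f"single-beam write ≈ {write_s / 86400:.1f} days, energy ≈ {energy_kj:.0f} kJ")
```

The numbers are internally consistent: the data occupies roughly 84% of the blank’s volume, and a single beam would need about 17 days to fill one platter, which is presumably why parallel beams (or parallel platters) matter in practice.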
Microsoft has some sort of apology (at the bottom) saying that Copilot permissions did not extend beyond the user’s permissions, but that merrily skips over the fact that Copilot permissions are not equal to user permissions. This is a governance issue: data ingested by Copilot is used as training data, and MS cannot guarantee that it will not be moved to a US server, where the data can be (and is!) read by the US government and given to competitors.
Microsoft 365 Copilot Chat has been summarizing emails labeled “confidential” even when data loss prevention policies were configured to prevent it.
Though there are data sensitivity labels and data loss prevention policies in place for email, Copilot has been ignoring those and talking about secret stuff in the Copilot Chat tab. It’s just this sort of scenario that has led 72 percent of S&P 500 companies to cite AI as a material risk in regulatory filings.
Redmond, earlier this month, acknowledged the problem in a notice to Office admins that’s tracked as CW1226324, as reposted by the UK’s National Health Service support portal. Customers are said to have reported the problem on January 21, 2026.
“Users’ email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat,” the notice says. “The Microsoft 365 Copilot ‘work tab’ Chat is summarizing email messages even though these email messages have a sensitivity label applied and a DLP policy is configured.”
Microsoft explains that sensitivity labels can be applied manually or automatically to files as a way to comply with organizational information security policies. These labels may function differently in different applications, the company says.
The software giant’s documentation makes clear that these labels do not function in a consistent way.
“Although content with the configured sensitivity label will be excluded from Microsoft 365 Copilot in the named Office apps, the content remains available to Microsoft 365 Copilot for other scenarios,” the documentation explains. “For example, in Teams, and in Microsoft 365 Copilot Chat.”
DLP, implemented through applications like Microsoft Purview, is supposed to provide policy support to prevent data loss.
“DLP monitors and protects against oversharing in enterprise apps and on devices,” Microsoft explains. “It targets Microsoft 365 locations, like Exchange and SharePoint, and locations you add, like on-premises file shares, endpoint devices, and non-Microsoft cloud apps.”
In theory, DLP policies should be able to affect Microsoft 365 Copilot and Copilot Chat. But that hasn’t been happening in this instance.
The root cause is said to be “a code issue [that] is allowing items in the sent items and draft folders to be picked up by Copilot even though confidential labels are set in place.”
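The failure mode described – items from Drafts and Sent Items slipping past the label check – is easy to picture as a filter that runs on some mail folders but not others. A schematic sketch of label-aware retrieval (names and data hypothetical; this is not Microsoft’s actual code path):

```python
# Schematic of label-aware retrieval: messages carrying an excluded sensitivity
# label should be filtered out before any content reaches the AI assistant.
EXCLUDED_LABELS = {"confidential", "highly confidential"}

def retrievable(messages: list) -> list:
    """Return only the messages an AI assistant may summarize."""
    return [m for m in messages if m.get("label", "").lower() not in EXCLUDED_LABELS]

mailbox = [
    {"folder": "inbox",  "label": "general",      "body": "lunch?"},
    {"folder": "sent",   "label": "Confidential", "body": "merger terms"},
    {"folder": "drafts", "label": "Confidential", "body": "layoff plan"},
]

# The reported bug amounts to applying a filter like this to some folders but
# not Drafts/Sent; applied uniformly, both labeled items are excluded here.
print([m["folder"] for m in retrievable(mailbox)])  # prints: ['inbox']
```

The point is that exclusion must be enforced once, at the retrieval layer, rather than re-implemented per folder or per app, which is exactly the inconsistency Microsoft’s own documentation describes.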
In a statement provided to The Register after this story was filed, a Microsoft spokesperson said, “We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labeled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop. This did not provide anyone access to information they weren’t already authorized to see. While our access controls and data protection policies remained intact, this behavior did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access. A configuration update has been deployed worldwide for enterprise customers.”
Artificial intelligence promises to reshape economies worldwide, but firm-level evidence on its effects in Europe remains scarce. This column uses survey data to examine how AI adoption affects productivity and employment across more than 12,000 European firms. The authors find that AI adoption increases labour productivity levels by 4% on average in the EU, with no evidence of reduced employment in the short run. The productivity benefits, however, are unevenly distributed. Medium and large firms, as well as firms that have the capacity to integrate AI through investments in intangible assets and human capital, experience substantially stronger productivity gains.
[…]
we find that on average, AI adoption levels are similar in the EU and the US. Notably, important heterogeneity emerges beneath the surface. Financially developed EU countries – such as Sweden and the Netherlands – match US adoption rates, with around 36% of firms using big data analytics and AI in 2024. In contrast, firms in less financially developed EU economies, such as Romania and Bulgaria, lag substantially behind, with adoption rates around 28% in 2024. Figure 1 illustrates this divide, showing how the gap has persisted and even widened in recent years.
Adoption also varies dramatically by firm size. Among large firms (more than 250 employees), 45% have deployed AI, compared with only 24% of small firms (10 to 49 employees). This echoes classic patterns in technology diffusion (Comin and Hobijn 2010): larger firms possess the resources, technical expertise, and economies of scale needed to absorb integration costs. AI-adopting firms are also systematically different – they invest more, are more innovative, and face tighter constraints in finding skilled workers. These patterns suggest that simply observing which firms adopt AI and comparing their performance could yield misleading results, as adoption itself is endogenous to firm characteristics.
Isolating AI’s causal effect
To credibly identify the causal effect of AI on productivity, we develop a novel instrumental variable strategy, inspired by Rajan and Zingales’ (1998) seminal work on financial dependence and growth. Their key insight was that industry characteristics measured in one economy – where they are arguably less affected by local distortions – can serve as an exogenous source of variation when applied to other countries.
We extend this logic to the firm level. For each EU firm in our sample, we identify comparable US firms – matched on sector, size, investment intensity, innovation activity, financing structure and management practices. We then assign the AI adoption rate of these matched US firms as a proxy for the EU firm’s exogenous exposure to AI. Because US firms operate under different institutional, regulatory and policy environments, their adoption patterns capture technological drivers that are plausibly independent of EU-specific factors. Rigorous propensity-score balancing tests confirm that our matched US and EU firms are virtually identical across key observable characteristics, validating the identification strategy. Our analysis draws on survey data from EIBIS combined with balance sheet data from Moody’s Orbis.
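The two-stage logic behind this instrumental variable approach can be sketched on simulated data. This is a minimal illustration, not the authors' code: the data-generating process, the variable names, and the 4% "true" effect are assumptions chosen to mirror the setup described above, in which an unobserved firm characteristic drives both AI adoption and productivity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# u: unobserved firm quality (the selection problem)
# z: the instrument (adoption rate of matched US firms)
# a: the endogenous regressor (EU firm's own AI adoption, 0/1)
# y: log labour productivity, with a true AI effect of 4%
u = rng.normal(size=n)
z = rng.normal(size=n)
a = (0.8 * z + 0.6 * u + rng.normal(size=n) > 0).astype(float)
y = 0.04 * a + 0.5 * u + rng.normal(scale=0.1, size=n)

def two_sls(y, a, z):
    """Two-stage least squares with a single instrument."""
    X1 = np.column_stack([np.ones_like(z), z])
    a_hat = X1 @ np.linalg.lstsq(X1, a, rcond=None)[0]   # first stage
    X2 = np.column_stack([np.ones_like(a_hat), a_hat])
    return np.linalg.lstsq(X2, y, rcond=None)[0][1]      # second stage

naive = np.polyfit(a, y, 1)[0]   # OLS slope, contaminated by u
iv = two_sls(y, a, z)
print(f"naive OLS: {naive:.3f}, 2SLS: {iv:.3f}")
```

On data like this, the naive OLS slope absorbs the selection effect and badly overstates the gain, while the 2SLS estimate lands close to the true 4%, illustrating why simple comparisons of adopters and non-adopters can mislead.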
Productivity gains without job losses
Our results reveal three key findings. First, AI adoption causally increases labour productivity levels by 4% on average in the EU. This effect is statistically robust and economically meaningful
[…]
Second, and crucially, we find no evidence that AI reduces employment in the short run. While naïve comparisons suggest AI-adopting firms employ more workers, this relationship disappears once we account for selection effects through our instrumental variable approach. The absence of negative employment effects, combined with significant productivity gains, points to a specific mechanism: capital deepening. AI augments worker output – enabling employees to complete tasks faster and make better decisions – without displacing labour
[…]
Third, AI’s productivity benefits are far from evenly distributed. Breaking down our results by firm size reveals that medium and large companies experience substantially stronger productivity gains than their smaller counterparts (see Figure 2). This differential effect reflects the role of scale in absorbing AI integration costs and accessing complementary assets – data infrastructure, technical talent, and organisational capacity to redesign workflows. The finding raises concerns about widening productivity gaps between firms and regions, particularly given Europe’s industrial structure, which is dominated by small and medium-sized enterprises.
Ring’s AI-powered “Search Party” feature, which links neighborhood cameras into a networked surveillance system to find lost dogs, was never intended to stop at pets, according to an internal email from founder Jamie Siminoff obtained by 404 Media.
Siminoff told employees in early October, shortly after the feature launched, that Search Party was introduced “first for finding dogs” and that the technology would eventually help “zero out crime in neighborhoods.” The on-by-default feature faced intense backlash after Ring promoted it during a Super Bowl ad. Ring has since also rolled out “Familiar Faces,” a facial recognition tool that identifies friends and family on a user’s camera, and “Fire Watch,” an AI-based fire alert system.
A Ring spokesperson told the publication that Search Party does not process human biometrics or track people.
A new study published in Nature has found that X’s algorithm—the hidden system or “recipe” that governs which posts appear in your feed and in which order—shifts users’ political opinions in a more conservative direction.
Led by Germain Gauthier from Bocconi University in Italy, it is a rare, real-world randomized experimental study on a major social media platform. And it builds on a growing body of research that shows how these platforms can shape people’s political attitudes.
Two different algorithms
The researchers randomly assigned 4,965 active US-based X users to one of two groups.
The first group used X’s default “For You” feed. This features an algorithm that selects and ranks posts it thinks users will be more likely to engage with, including posts from accounts that they don’t necessarily follow.
The second group used a chronological feed. This only shows posts from accounts users follow, displayed in the order they were posted. The experiment ran for seven weeks during 2023.
Users who switched from the chronological feed to the “For You” feed were 4.7 percentage points more likely to prioritize policy issues favored by US Republicans (for example, crime, inflation and immigration). They were also more likely to view the criminal investigation into US President Donald Trump as unacceptable.
They also shifted in a more pro-Russia direction on the war in Ukraine. For example, these users became 7.4 percentage points less likely to view Ukrainian President Volodymyr Zelenskyy positively, and scored slightly higher on a pro-Russian attitude index overall.
The researchers also examined how the algorithm produced these effects.
They found evidence that the algorithm increased the share of right-leaning content by 2.9 percentage points overall (and 2.5 points among political posts), compared with the chronological feed.
It also significantly reduced the share of posts from traditional news organizations’ accounts while boosting posts from political activists.
One of the most concerning findings of the study is the longer-term effects of X’s algorithmic feed. The study showed the algorithm nudged users towards following more right-leaning accounts, and that the new following patterns endured even after switching back to the chronological feed.
In other words, turning the algorithm off didn’t simply “reset” what people see. It had a longer-lasting impact beyond its day-to-day effects.
One piece of a much bigger picture
This new study supports the findings of similar studies.
For example, a study in 2022, before Elon Musk bought Twitter and rebranded it as X, found the platform’s algorithmic systems amplified content from the mainstream political right more than from the left in six of the seven countries studied.
An experimental study from 2025 re-ranked X feeds to reduce exposure to content expressing antidemocratic attitudes and partisan animosity. The researchers found this shifted participants’ feelings towards their political opponents by more than two points on a 0–100 “feeling thermometer” – a shift the authors argued would normally take about three years to occur organically in the general population.
My own research adds another piece of evidence to this picture of algorithmic bias on X. Along with my colleague Mark Andrejevic, I analyzed engagement data (such as likes and reposts) from prominent political accounts during the final stages of the 2024 US election.
We found a sudden and unusual spike in engagement with Musk’s account after his endorsement of Trump on July 13, the day of the assassination attempt on Trump. Views of Musk’s posts surged by 138%, retweets by 238%, and likes by 186%, far outstripping increases on other accounts.
After July 13, right-leaning accounts on X gained significantly greater visibility than progressive ones. The “playing field” for attention and engagement on the platform was tilted thereafter towards right-leaning accounts—a trend that continued for the remainder of the time period we analyzed in that study.
[…]The team, comprising researchers from ETH Zurich and Università della Svizzera italiana (USI), examined the “zero-knowledge encryption” promises made by Bitwarden, LastPass, and Dashlane, and found all three could expose passwords if attackers compromised their servers.
The premise of zero-knowledge encryption is that user passwords are encrypted on their device, and the password manager’s server acts merely as a dumb storage box for the encrypted credentials. Therefore, in the event that the vendor’s servers are controlled by malicious parties, attackers wouldn’t be able to view users’ secrets.
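That premise can be illustrated with a minimal client-side encryption sketch. This is a toy, not any vendor's actual scheme: real password managers use vetted authenticated ciphers such as AES-256, while the HMAC-based keystream here merely stands in to keep the example dependency-free.

```python
import hashlib, hmac, os

def derive_key(master_password: bytes, salt: bytes) -> bytes:
    # The key is derived on the client from the master password; the
    # server never sees either -- this is the "zero-knowledge" claim.
    return hashlib.pbkdf2_hmac("sha256", master_password, salt, 600_000)

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy stream cipher (encrypts and decrypts). Illustration only.
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hmac.digest(key, nonce + counter.to_bytes(8, "big"), "sha256")
        counter += 1
    return bytes(d ^ k for d, k in zip(data, out))

# Client side: encrypt before upload.
salt, nonce = os.urandom(16), os.urandom(16)
key = derive_key(b"correct horse battery staple", salt)
blob = keystream_xor(key, nonce, b"example.com: hunter2")

# Server side: stores only ciphertext plus public parameters. An
# attacker who controls the server sees no plaintext credentials.
server_record = {"salt": salt, "nonce": nonce, "vault": blob}

# Client side again: the same derivation recovers the secret.
recovered = keystream_xor(
    derive_key(b"correct horse battery staple", server_record["salt"]),
    server_record["nonce"], server_record["vault"])
assert recovered == b"example.com: hunter2"
```

The attacks described below work not by breaking this encryption, but by having a malicious server manipulate the protocol around it – key rotations, sharing flows, synchronization – until a client leaks or weakens its secrets.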
Bitwarden – one of the most popular alternatives to the built-in password managers from Apple and Google, which together dominate the market – proved the most susceptible, with 12 attacks working against the open-source product. Seven distinct attacks worked against LastPass, and six against Dashlane.
The attacks aren’t the kind a remote attacker would use to exploit vulnerabilities and target specific users. Instead, the researchers tested each platform’s ability to keep secrets safe in the event its servers were compromised.
In most of the successful attacks, the researchers said they could retrieve passwords from users’ encrypted vaults, and in some cases change the stored entries.
They tested all of this using a malicious server model – setting up servers that behaved like hacked versions of those run by the password managers. Seven of Bitwarden’s 12 successful attacks led to password disclosure, compared with three of the attacks against LastPass and one against Dashlane.
All three vendors claim their products come with zero-knowledge encryption. The researchers noted that none of them outline the specific threat model their password manager secures against.
The researchers said: “The majority of our attacks require simple interactions which users or their clients perform routinely as part of their usage of the product, such as logging in to their account, opening the vault and viewing the items, or performing periodic synchronization of data.
“We also present attacks that require more complex user actions, such as key rotations, joining an organization, sharing credentials, or even clicking on a misleading dialog. Although assessing the probability of these actions is challenging, we believe that, within a vast user base, many users will likely perform them.”
In the full paper [PDF], they went on to argue that password managers have escaped deep academic scrutiny until now, unlike end-to-end encrypted messaging apps. This is perhaps due to a perception that password managers are simple applications that merely derive keys and encrypt vault data. In reality, their codebases are more complex, often offering features such as sharing accounts with family members and maintaining backward compatibility with older encryption schemes.
Kenneth Paterson, professor of computer science at ETH Zurich, said “we were surprised by the severity of the security vulnerabilities” affecting the password managers.
“Since end-to-end encryption is still relatively new in commercial services, it seems that no one had ever examined it in detail before.”
Canada Goose says an advertised breach of 600,000 records is an old raid and there are no signs of a recent compromise.
The down-filled jacket purveyor did not answer questions about how old the data is or how it was originally taken, but told us it relates to past customer purchases.
“Canada Goose is aware that a historical dataset relating to past customer transactions has recently been published online,” a spokesperson said. “At this time, we have no indication of any breach of our own systems. We are currently reviewing the newly released dataset to assess its accuracy and scope, and will take any further steps as may be appropriate.”
“To be clear, our review shows no evidence that unmasked financial data was involved. Canada Goose remains committed to protecting customer information.”
ShinyHunters posted the company’s data for download on February 14 via their leak site. The criminals’ advert for the data claimed there were more than 600,000 records, each containing personally identifiable information, as well as payment/financial details.
The Register reviewed a number of the records available online via a JSON file, and ShinyHunters’ description of the data appears accurate.
It includes names and other usual PII data points, as well as partial payment information and order details, such as price and delivery address.
OpenClaw is a huge disruptor in the agentic AI space – it has an actual orchestrator, is super easy to implement, and destroys many business models. You can bet that despite all the noises, the open source repository will be laid to rest and all new development will go into the closed OpenAI space so they can regain their competitive advantage and maybe actually make some money, despite the best efforts of megalomaniacal compulsive liar and general poor man’s baddie, Sam Altman.
So this move kills a real gamechanger and moves top EU talent to the US in one go. What great things money does for us. Not.
Peter Steinberger, creator of popular open-source artificial intelligence program OpenClaw, will be joining OpenAI Inc. to help bolster the ChatGPT developer’s product offerings.
“OpenClaw will live in a foundation as an open source project that OpenAI will continue to support,” OpenAI Chief Executive Officer Sam Altman wrote in a post on X Sunday, adding that Steinberger is “joining OpenAI to drive the next generation of personal agents.”
Steinberger wrote in a separate post on his website Saturday that he will be joining OpenAI to be “part of the frontier of AI research and development, and continue building.”
“It’s always been important to me that OpenClaw stays open source and given the freedom to flourish,” Steinberger wrote. “Ultimately, I felt OpenAI was the best place to continue pushing on my vision and expand its reach.”
OpenClaw, previously called Clawdbot and Moltbot, has garnered a cult following since launching in November for its ability to operate autonomously, clearing users’ inboxes, making restaurant reservations and checking in for flights, among other tasks. Users can also connect the tool to messaging apps such as WhatsApp and Slack and direct the agent through those platforms.
“My next mission is to build an agent that even my mum can use,” Steinberger wrote. “That’ll need a much broader change, a lot more thought on how to do it safely, and access to the very latest models and research.”
Slowly, very slowly, Europeans are starting to understand that funding US software companies, creating a dependency on them, and giving them their data to grab (which they do) is not a good idea.
The European Parliament has reportedly turned off AI features on lawmakers’ devices amid concerns about content going where it shouldn’t.
According to Politico, staff were notified that AI features on corporate devices (including tablets) were disabled because the IT department could not guarantee data security.
The bone of contention is that some AI assistants require the use of cloud services to perform tasks including email summarization, and so send the data off the device – a challenge for data protection.
It’s unfortunate for device vendors that promote on-device processing, but the European Parliament’s tech support desk reportedly stated: “As these features continue to evolve and become available on more devices, the full extent of data shared with service providers is still being assessed. Until this is fully clarified, it is considered safer to keep such features disabled.”
The Register contacted the European Parliament for comment.
Data privacy and AI services have not been the greatest of bedfellows. Studies have shown that employees regularly leak company secrets via assistants, and on-device AI services are a focus of vendors amid concerns about exactly what is being sent to the cloud.
The thought of confidential data being sent to an unknown location in the cloud to generate a helpful summary has clearly worried lawmakers, which is why there is a blanket ban. However, the issue has less relevance if the process occurs on the device itself.
The Politico report noted that day-to-day tools, such as calendar applications, are not affected by the edict. The ban is temporary until the tech boffins can clarify what is being shared and where it is going.
A moderator on diyAudio set up an experiment to determine whether listeners could differentiate between audio run through pro audio copper wire, a banana, and wet mud. Spoiler alert: the results indicated that users were unable to accurately distinguish between these different ‘interfaces.’
Pano, the moderator who built the experiment, invited other forum members to listen to sound clips in four versions: one taken directly from the original CD file, and three re-recorded through unusual “cables” – 180cm of pro audio copper wire; 20cm of wet mud bridged by 120cm of old microphone cable soldered to US pennies; and a 13cm banana in place of the mud in the same 120cm setup.
Initial test results showed that it’s extremely difficult for listeners to correctly pick out which audio track used which wiring setup. “The amazing thing is how much alike these files sound. The mud should sound perfectly awful, but it doesn’t,” Pano said. “All of the re-recordings should be obvious, but they aren’t.”
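Whether listeners actually beat chance in a test like this is a simple binomial question: with four versions of each clip, a pure guesser names the right "cable" 25% of the time. A short sketch (the trial counts below are hypothetical, not Pano's actual tallies):

```python
from math import comb

def p_value_at_least(k: int, n: int, p: float = 0.25) -> float:
    """One-sided binomial probability of getting k or more correct
    identifications out of n trials by guessing alone (chance = 1/4
    with four versions per clip)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Hypothetical tallies over 20 trials each:
guessing = p_value_at_least(6, 20)     # 6/20 correct: near chance
golden_ear = p_value_at_least(12, 20)  # 12/20 correct: far above chance

print(f"6/20 correct:  p = {guessing:.3f}")
print(f"12/20 correct: p = {golden_ear:.5f}")
```

A tally near 5-in-20 is statistically indistinguishable from guessing; only a run well above chance would support a genuinely audible difference between copper, mud, and banana.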
Online shopping continues to grow globally, with total e-commerce sales estimated at $6 trillion by 2024 and consumers in Western Europe making nearly 19 online purchases per year. In this rapidly expanding landscape, online retailers are increasingly investing in interactive and contextual product images to capture customer attention. But do these visual strategies actually work? This is what PhD candidate Rowena Summerlin of Tilburg University investigated.
Summerlin concludes that product images—especially interactive versions—can indeed influence consumer purchase intentions, but that this effect is context-dependent. According to her research, interactive images, especially for individual consumers, increase the perception of a product’s higher quality, which can strengthen purchase intentions. However, the impact depends on factors such as product price, customer type, and the platform on which the images are displayed. For business customers, the effect appears to be virtually nonexistent.
Analyses of real marketplace data show that contextual images can increase sales on some websites, but have little effect on other platforms. The researchers point out that this is likely related to the differing norms and customs of different online environments. The strongest effects occurred with more expensive products and during peak shopping periods, when consumers themselves are more uncertain about their choices.
The AH-64 Apache attack helicopter has evolved into a counter-drone platform in recent years — something we have been following closely. While the Israeli Air Force had pioneered this role for the AH-64 for years, the U.S. Army has now formally codified it and added new capabilities in the process. Now, as we had suggested some time ago, the Apache is getting proximity-fuzed 30mm cannon shells for its chin-mounted M230 cannon that will add to its drone-killing arsenal, giving it a cheaper and more plentiful engagement option than some of the alternatives.
Apaches live-fire tested the 30x113mm XM1225 Aviation Proximity Explosive (APEX) ammo last December, according to a recent Army release. The trials occurred at the service’s sprawling Yuma Proving Ground (YPG) in southern Arizona. Multiple test engagements occurred against various types of drone targets.
The specialized APEX ammunition detonates when it passes close to an object, spraying shrapnel. This is critical for shooting down drones, which are small, independently moving targets – and the Apache’s monocle-targeted chin gun isn’t exactly a sniper rifle in terms of precision. The rounds could also be used against surface targets – personnel, soft-skinned vehicles, and small boats, for instance – offering area effects the Apache’s standard impact-detonating, high-explosive ammunition lacks.
The F-35’s ‘computer brain,’ including its cloud-based components, could be cracked to accept third-party software updates, just like ‘jailbreaking’ a cellphone, according to the Dutch State Secretary for Defense. The statement comes as foreign operators of the jets continue to be pressed on what could happen if the United States were ever to cut off support. President Donald Trump’s administration has pursued a number of policies that have resulted in new diplomatic strains with some long-time allies, especially in Europe.
“If, despite everything, you still want to upgrade, I’m going to say something I should never say, but I will anyway: you can jailbreak an F-35 just like an iPhone,” Gijs Tuinman said during an episode of BNR Nieuwsradio‘s “Boekestijn en de Wijk” podcast posted online yesterday, according to a machine translation.
[…]
As we have explored in detail in the past, the F-35 program imposes unique limits on the ability of operators to make changes to the jet’s software, as well as to associated systems on the ground. Virtually all F-35s in service today see software updates come through a cloud-based network, the original version of which is known as the Autonomic Logistics Information System (ALIS). Persistent issues with ALIS have led to the development of a follow-on Operational Data Integrated Network (ODIN), the transition to which is still ongoing.
The ALIS/ODIN network is designed to handle much more than just software updates and logistical data. It is also the port used to upload mission data packages containing highly sensitive planning information, including details about enemy air defenses and other intelligence, onto F-35s before missions and to download intelligence and other data after a sortie.
Issues with ALIS, as well as concerns about the transfer of nationally sensitive information within the network, have led certain operators, including the Netherlands, to firewall off aspects of their software reprogramming activities in the past.
[…]
TWZ previously explored many of these same issues in detail last year, amid a flurry of reports about the possibility that F-35s have some type of discreet ‘kill switch’ built in that U.S. authorities could use to remotely disable the jets. Rumors of this capability are not new and remain completely unsubstantiated.
At that time, we stressed that a ‘kill switch’ would not even be necessary to hobble F-35s in foreign service. At present, the jets are heavily dependent on U.S.-centric maintenance and logistics chains that are subject to American export controls and agreements with manufacturer Lockheed Martin. Just reliably sourcing spare parts has been a huge challenge for the U.S. military itself, as you can learn more about in this past in-depth TWZ feature. F-35s would be quickly grounded without this sustainment support.