135,000 OpenClaw instances open to the internet because of default settings

SecurityScorecard’s STRIKE threat intelligence team is sounding the alarm over the sheer volume of internet-exposed OpenClaw instances it discovered: more than 135,000 as of this writing. Combined with previously known vulnerabilities in the vibe-coded AI assistant platform and links to prior breaches, the findings point, STRIKE warns, to a systemic security failure in the open-source AI agent space.

“Our findings reveal a massive access and identity problem created by poorly secured automation at scale,” the STRIKE team wrote in a report released Monday. “Convenience-driven deployment, default settings, and weak access controls have turned powerful AI agents into high-value targets for attackers.”

[…]

That’s not to say users aren’t at least partially to blame for the issue. Take the way OpenClaw’s default network connection is configured.

“Out of the box, OpenClaw binds to `0.0.0.0:18789`, meaning it listens on all network interfaces, including the public internet,” STRIKE noted. “For a tool this powerful, the default should be `127.0.0.1` (localhost only). It isn’t.”

STRIKE recommends that all OpenClaw users, at the very least, immediately change that binding to localhost. Beyond that, however, SecurityScorecard’s VP of threat intelligence and research, Jeremy Turner, wants users to know that most of the system’s flaws aren’t down to user inattention to defaults. He told The Register in an email that many of OpenClaw’s problems are there by design: the tool is built to make system changes and expose additional services to the web.
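The loopback-only binding STRIKE recommends is easy to sanity-check from first principles. A minimal Python sketch (the port number 18789 comes from the report; nothing here is OpenClaw’s actual configuration API):

```python
import socket

# Bind a listener to loopback only, as STRIKE recommends.
# Port 0 asks the OS for any free port; per the report, OpenClaw's
# default is 18789 on 0.0.0.0 (all interfaces).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

host, port = srv.getsockname()
print(host)  # 127.0.0.1: reachable only from the local machine

srv.close()
```

A listener bound to 127.0.0.1 accepts connections only from the same machine; one bound to 0.0.0.0 accepts them on every interface, including any that face the public internet.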

“It’s like giving some random person access to your computer to help do tasks,” Turner said. “If you supervise and verify, it’s a huge help. If you just walk away and tell them all future instructions will come via email or text message, they might follow instructions from anyone.”

As STRIKE pointed out, compromising an OpenClaw instance means gaining access to everything the agent can access, be that a credential store, filesystem, messaging platform, web browser, or just its cache of personal details gathered about its user.

And with many of the exposed OpenClaw instances coming from organizational IP addresses and not just home systems, it’s worth pointing out that this isn’t just a problem for individuals mucking around with AI.

[…]

“Consider carefully how you integrate this, and test in a virtual machine or separate system where you limit the data and access with careful consideration,” Turner explained. “Think of it like hiring a worker with a criminal history of identity theft who knows how to code well and might take instructions from anyone.”

That said, Turner isn’t advocating for individuals and organizations to completely abandon agentic AI like OpenClaw – he simply wants potential users to be wary and consider the risks when deploying a potentially revolutionary new tech product that’s rife with vulnerabilities.

“All these new capabilities are incredible, and the researchers deserve a lot of credit for democratizing access to these new technologies,” Turner told us. “Learn to swim before jumping in the ocean.”

[…]

Source: OpenClaw instances open to the internet present ripe targets • The Register

France to ditch US platforms Microsoft Teams, Zoom for ‘sovereign platform’ with unfortunate name amid security concerns

Why they couldn’t fund a French company to contribute to a well-functioning open source platform like Jitsi is beyond me.

France will replace the American platforms Microsoft Teams and Zoom with its own domestically developed video conferencing platform, which will be used in all government departments by 2027, the country announced on Monday.

The move is part of France’s strategy to stop using foreign software vendors, especially those from the United States, and regain control over critical digital infrastructure. It comes at a crucial moment, as France, like the rest of Europe, reaches a turning point on digital sovereignty.

“The aim is to end the use of non-European solutions and guarantee the security and confidentiality of public electronic communications by relying on a powerful and sovereign tool,” said David Amiel, minister for the civil service and state reform.

On Monday, the government announced it will instead be using the French-made videoconference platform Visio. The platform has been in testing for a year and has around 40,000 users.

What is Visio?

Visio is part of France’s Suite Numérique plan, a digital ecosystem of sovereign tools designed to replace the use of US online services such as Gmail and Slack. These tools are for civil servants and not for public or private company use.

The platform also has an artificial intelligence-powered meeting transcript and speaker diarization feature, using the technology of the French start-up Pyannote.

Visio is also hosted on the sovereign cloud infrastructure of Outscale, a French subsidiary of software company Dassault Systèmes.

The French government said that switching to Visio could cut licensing costs and save as much as €1 million per year for every 100,000 users.

The move also comes as Europe has questioned its overreliance on US information technology (IT) infrastructure following US cloud outages last year.

“This strategy highlights France’s commitment to digital sovereignty amid rising geopolitical tensions and fears of foreign surveillance or service disruptions,” Amiel said.

Source: France to ditch US platforms Microsoft Teams, Zoom for ‘sovereign platform’ amid security concerns | Euronews

Microsoft will give the FBI your BitLocker keys if asked. Can do so because of cloud accounts.

Great target for hackers then, the server with unencrypted bitlocker keys on it.

Microsoft has confirmed in a statement to Forbes that the company will provide the FBI with access to BitLocker encryption keys if presented with a valid legal order. These keys can decrypt the data on a computer running Windows, giving law enforcement the means to break into a device and access its contents.

The news comes as Forbes reports that Microsoft gave the FBI the BitLocker encryption keys to access a device in Guam that law enforcement believed to have “evidence that would help prove individuals handling the island’s Covid unemployment assistance program were part of a plot to steal funds” in early 2025.

Source: Microsoft gave FBI BitLocker keys, raising privacy fears | Windows Central

Tea – a way to secure FOSS by offering financial incentives – brews massive token farming campaigns (and dissolves them)

No good idea – like rewarding open source software developers and maintainers for their contributions – goes unabused by cybercriminals, and this was the case with the Tea Protocol and two token farming campaigns.

Both incidents gave the project’s founders a real-time view into how far – and fast – attackers will go to chase financial gain, and they helped shape “radical changes” that will roll out in the Tea network’s mainnet launch early next year, co-founder and CEO Tim Lewis told The Register.

The Tea Protocol was founded by Max Howell, creator of the open source package manager Homebrew, and Lewis, who established DEVxDAO, a non-profit that distributes grants to support decentralized computing projects. Its aim: reward open source developers and help secure software supply chains via financial incentives.

“When you think about the different package management ecosystems, they all have different gates in front of them, and none of them have been a financial gate,” Lewis said in an interview.

“There’s a human that sits in the front who has to be this gate, but it takes a toll on the human to go through all the data, and that’s only getting worse,” he said. “There’s the proliferation of the AI-induced pull requests, which are great, but that’s become like a DDoS attack.”

Last year, the duo rolled out the Tea Protocol testnet – essentially a test run for the incentives program that allows open source developers to earn cryptocurrency – specifically Tea tokens – for valuable code and fixes, while users can stake Tea to support specific projects and also earn rewards. A portion of the protocol rewards is shared with project maintainers and users who stake their tokens.

“Again, this was on a test network for fake internet points that could eventually potentially have some value,” Lewis said. “Our incentive for that period only lasted about three weeks.”


In April 2024, the Tea team shut down the incentive program’s rewards after about 15,000 spammy packages flooded the npm registry to farm Tea points. These contained little to no useful functionality and were instrumented with “tea.yaml” metadata that linked back to Tea accounts in an attempt to inflate developers’ reputations and earn payouts.

“We got to watch this happen in real time, and we recognized how fast, how far people had gone to create scripts that have a worm-like behavior,” Lewis said.

Then it got worse. In 2025, the earlier Tea farming campaign grew into the IndonesianFoods and Indonesian Tea campaigns that polluted more than 1 percent of npm with spam packages. And in November, Amazon uncovered more than 150,000 malicious npm packages, all linked to another Tea token farming campaign, that the cloud giant described as “one of the largest package flooding incidents in open source registry history.”

“I view this as a canary in the coal mine,” Lewis said.

In these token farming campaigns, the fraudsters flooded registries with spam, as opposed to code laced with cryptocurrency stealers or other secret-stealing malware, and neither of those threats is hypothetical. North Korea’s Lazarus Group and other sophisticated attackers have previously targeted npm for exactly those purposes.

“When you are a destructive organization like Lazarus Group, there’s incentive to use these same techniques to attack [supply chains],” Lewis said. “So we need to fix the core.”

How to reward secure code and penalize spam

To this end, Tea’s founders are working to fix the protocol’s design to ensure that the incentives program can’t be abused when the mainnet launches in early 2026.

This involves requiring packages and projects to pass ownership and provenance checks, and ensuring contributions aren’t just automated spam. The Tea team is also designing monitoring features that will check for Sybil attacks and flag surges in low-quality package creation and suspicious identities.

If malicious-looking patterns are detected, the developer won’t receive rewards and their registrations will be quarantined, pending further review.

Additional key quality and security improvements will happen via integration with PKGX, which Howell wrote. It’s a package runner that creates a containerized environment for projects and manages developer tools across environments. PKGX verifies maintainers using cryptographic signatures and identity checks, and also evaluates their contributions to various projects for quality, along with security posture and dependencies.

This registry will integrate directly with Tea upon the protocol’s mainnet launch, and will auto-detect and penalize, if needed, spammy packages at the point of registration – not after – while rewarding maintainers for their legit contributions.

Automated SBOMs, bug bounties

In the future, Lewis says that this design will also allow enterprises to automate bug bounties, and SBOMs (software bills of materials) that provide an inventory of all the components found in a piece of software. This will make it easier for large companies to map out their dependencies, and then reward developers for fixing any critical security issues they find.

[…]

“Some CISO, somewhere, every day is looking at his tens of thousands of packages that he approved for use, and now he’s responsible for whether or not these things are secure,” Lewis said. “He can’t have all the people that work within his department spend all of their time trying to get some guy in Nebraska to review a pull request and get the critical bug for his architecture solved en masse. We’re hoping this creates a tool that allows that value distribution without impermanent loss en masse.”

Lewis’ goal, he says, is to see upwards of “millions of dollars a day, retrieved for issue completion.”

Project developers and maintainers write the fixes, and chief security officers can confirm to their boards of directors that their dependencies and critical code are secure. “Plus, the meantime for resolution for these issues comes down – and they are not funding groups like North Korea’s Lazarus,” he added.

In other words: Tea’s goal reaches fruition. Open source project maintainers get paid for their valuable work, code becomes more secure, financially motivated crews can’t game the system, and the world becomes a better place. ®

Source: CEO spills the Tea about massive token farming campaigns • The Register

Over 10,000 Docker Hub images found leaking credentials, auth keys

More than 10,000 Docker Hub container images expose data that should be protected, including live credentials to production systems, CI/CD databases, or LLM model keys.

The secrets impact a little over 100 organizations, among them a Fortune 500 company and a major national bank.

[…]

After scanning container images uploaded to Docker Hub in November, security researchers at threat intelligence company Flare found that 10,456 of them exposed one or more keys.

The most frequent secrets were access tokens for various AI models (OpenAI, HuggingFace, Anthropic, Gemini, Groq). In total, the researchers found 4,000 such keys.

When examining the scanned images, the researchers discovered that 42% of them exposed at least five sensitive values.

“These multi-secret exposures represent critical risks, as they often provide full access to cloud environments, Git repositories, CI/CD systems, payment integrations, and other core infrastructure components,” Flare notes in a report today.

[…]

According to the researchers, one of the most frequent errors observed was the presence of .env files, which developers use to store database credentials, cloud access keys, tokens, and other authentication data for a project.

Additionally, they found API tokens for AI services hardcoded in Python application files, config.json files, and YAML configs, along with GitHub tokens and credentials for multiple internal environments.

Some of the sensitive data was present in the manifest of Docker images, a file that provides details about the image.
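Flare hasn’t published its scanner, but detection of this kind generally boils down to pattern matching over extracted layer and manifest contents. A minimal sketch, with illustrative (not exhaustive) regexes for a few well-known token prefixes:

```python
import re

# Illustrative patterns for a few well-known credential formats.
# Real scanners use far larger rule sets plus entropy checks.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "openai_key": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of secret types found in a blob of text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

# A fabricated .env-style fragment, like those Flare found in images
# (AKIAIOSFODNN7EXAMPLE is AWS's documented dummy key):
sample = "AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE\nOPENAI_API_KEY=sk-aaaaaaaaaaaaaaaaaaaaaaaa"
print(scan_text(sample))  # finds the AWS-style and OpenAI-style keys
```

Running a check like this over every file in an image before pushing, and in CI, catches the .env and config.json leaks described above before they reach a public registry.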

Many of the leaks appear to originate from so-called ‘shadow IT’ accounts: Docker Hub accounts that fall outside stricter corporate monitoring mechanisms, such as those for personal use or belonging to contractors.

Flare notes that roughly 25% of developers who accidentally exposed secrets on Docker Hub realized the mistake and removed the leaked secret from the container or manifest file within 48 hours.

However, in 75% of these cases, the leaked key was not revoked, meaning that anyone who stole it during the exposure period could still use it later to mount attacks.

Organizations should implement active scanning across the entire software development life cycle and revoke exposed secrets and invalidate old sessions immediately.

Source: Over 10,000 Docker Hub images found leaking credentials, auth keys

All of Russia’s Porsches Were Bricked By a Satellite Outage

Imagine walking out to your car, pressing the start button, and getting absolutely nothing. No crank, no lights on the dash, nothing. That’s exactly what happened to hundreds of Porsche owners in Russia last week. The issue is with the Vehicle Tracking System, a satellite-based security system that’s supposed to protect against theft. Instead, it turned these Porsches into driveway ornaments.

The issue was first reported at the end of November, with owners reporting identical symptoms of their cars refusing to start or shutting down soon after ignition. Russia’s largest dealership group, Rolf, confirmed that the problem stems from a complete loss of satellite connectivity to the VTS. When it loses its connection, it interprets the outage as a potential theft attempt and automatically activates the engine immobilizer.

What Actually Happened

The issue affects all models and engine types, meaning any Porsche equipped with the system could potentially disable itself without warning. The malfunction impacts Porsche models dating back to 2013 that have the factory VTS installed. This includes popular models like the Cayenne, Macan, Panamera, Taycan, 911, and the 718 Cayman and Boxster. When the VTS connection drops, the anti-theft protocol kicks in, cutting fuel delivery and locking down the engine completely.

[…]

Some drivers reported success after disconnecting their car batteries for up to 10 hours, while others managed to restore function by disabling or rebooting the VTS module entirely. Rolf dealerships have been instructing technicians to manually reset the alarm units, which often requires partially dismantling the vehicle. Some cars spring back to life immediately, while others remain stubbornly offline despite multiple attempts.

[…]

Source: All of Russia’s Porsches Were Bricked By a Mysterious Satellite Outage – Autoblog

Now you might say Fuck the Russians, but this is something that could happen anywhere and to anyone.

Kohler Can Access Data and Pictures from Toilet Camera It Describes as “End-to-End Encrypted”

In October Kohler launched Dekoda, a $600 (plus monthly subscription) device that attaches to the rim of your toilet and collects images and data from inside, promising to track and provide insights on gut health, hydration, and more. To allay the obvious privacy concerns, the company emphasizes that the sensors point only down, into the bowl, and assures potential buyers that the data collected by the device and app is protected with “end-to-end encryption”.

Kohler Health’s homepage, the page for the Kohler Health App, and a support page all use the term “end-to-end encryption” to describe the protection the app provides for data. Many media outlets included the claim in their articles covering the launch of the product.

However, responses from the company make it clear that—contrary to common understanding of the term—Kohler is able to access data collected by the device and associated application. Additionally, the company states that the data collected by the device and app may be used to train AI models.

[…]

Emails exchanged with Kohler’s privacy contact clarified that the other “end” that can decrypt the data is Kohler itself: “User data is encrypted at rest, when it’s stored on the user’s mobile phone, toilet attachment, and on our systems. Data in transit is also encrypted end-to-end, as it travels between the user’s devices and our systems, where it is decrypted and processed to provide our service.”

They additionally told me “We have designed our systems and processes to protect identifiable images from access by Kohler Health employees through a combination of data encryption, technical safeguards, and governance controls.”

What Kohler is referring to as E2EE here is simply HTTPS encryption between the app and the server, something that has been basic security practice for two decades now, plus encryption at rest.
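The distinction is worth spelling out. In end-to-end encryption as commonly understood, the key is generated on the user’s device and never leaves it, so the vendor’s servers hold only ciphertext they cannot read. A toy sketch of that property (the XOR cipher is for illustration only, never a real design; real E2EE uses vetted primitives):

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy one-time-pad-style cipher, purely to illustrate key locality.
    return bytes(b ^ k for b, k in zip(data, key))

# E2EE in the usual sense: the key is generated on the user's device
# and never leaves it.
device_key = secrets.token_bytes(32)
reading = b"gut-health reading, padded.."  # shorter than the key
ciphertext = xor(reading, device_key)

# The vendor's server stores only `ciphertext`. Without device_key it
# cannot recover the plaintext. Under TLS plus encryption at rest, by
# contrast, the server decrypts and processes the data, which is what
# Kohler describes doing.
assert xor(ciphertext, device_key) == reading
```

The test of the claim is simple: if the vendor can “decrypt and process” your data server-side, the vendor holds a key, and the encryption is not end-to-end.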

[…]

Source: Kohler Can Access Data and Pictures from Toilet Camera It Describes as “End-to-End Encrypted” – /var/log/simon

5 ancient bugs in Fluent Bit put major clouds at risk

A series of “trivial-to-exploit” vulnerabilities in Fluent Bit, an open source log collection tool that runs in every major cloud and AI lab, was left open for years, giving attackers an exploit chain to completely disrupt cloud services and alter data.

The Oligo Security research team found the five vulnerabilities and – in coordination with the project’s maintainers – on Monday published details about the bugs that allow attackers to bypass authentication, perform path traversal, achieve remote code execution, cause denial-of-service conditions, and manipulate tags.

Updating to the latest stable version, v4.1.1 / 4.0.12, fixes the flaws.

Fluent Bit, an open source project maintained by Chronosphere, is used by major cloud providers and tech giants, including Google, Amazon, Oracle, IBM, and Microsoft, to collect and route data.

It’s a lightweight telemetry data agent and processor for logs, metrics, and traces, and it has more than 15 billion deployments. At KubeCon earlier this month, OpenAI said it runs Fluent Bit on all of its Kubernetes nodes.

It’s been around for 14 years, and at least one of the newly disclosed bugs, a path-traversal flaw now tracked as CVE-2025-12972, has left cloud environments vulnerable for more than 8 years, according to Oligo Security researcher Uri Katz.

This, Katz told The Register, is because “the file-output behavior that makes path traversal possible has been a part of Fluent Bit since its early architecture. The other issues aren’t quite as old but are still long-standing.”

Most of these vulnerabilities were introduced along with new plugins, he added. “We can see based on code history, the tag-handling flaw behind CVE-2025-12977 has been present for at least four years, and the Docker input buffer overflow (CVE-2025-12970) goes back roughly 6 years.”

[…]

The five CVEs are:

CVE-2025-12977, a partial string comparison vulnerability in the tag_key configuration option. Affected inputs: HTTP, Splunk, Elasticsearch.

This type of flaw occurs when a program accepts a partial input string as a match for a complete string (like a password, username, or file path), and in this case, the vulnerability allows an attacker to control the value of tags – thus determining how and where the log data is processed – without knowing the tag_key value.

“An attacker with network access to a fluentbit http input server, Elasticsearch input data or Splunk input data, can send a json with a key from A-Z 0-9 essentially making sure one of the characters will match the key allowing them to control the tag value,” the Oligo researchers wrote. “An attacker could hijack routing, inject fake or malicious records under trusted tags, bypass filters or monitoring, and confuse downstream systems so logs end up in unexpected databases, dashboards, or alerting tools.”
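The bug class is easy to reproduce in miniature. A Python sketch of the flawed comparison (Fluent Bit itself is C; the names and the `startswith` stand-in are illustrative, not the project’s actual code):

```python
import string

def flawed_match(configured_key: str, supplied_key: str) -> bool:
    # Flawed: compares only as many characters as the attacker supplied,
    # mirroring a strncmp(conf, supplied, strlen(supplied)) pattern, so a
    # one-character guess can "match" the full configured key.
    return configured_key.startswith(supplied_key)

# The operator configured tag_key as a value the attacker shouldn't know:
configured = "tag_key_secret"

# The attacker sends one record per character in [A-Za-z0-9]; one of them
# is guaranteed to match the first character of the configured key.
guesses = string.ascii_letters + string.digits
hits = [g for g in guesses if flawed_match(configured, g)]
print(hits)  # ['t']: the single-character guess matching the first letter
```

A correct check compares full strings for equality (and ideally in constant time), so a partial guess never passes.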

CVE-2025-12978 is due to improper input validation on tag_key records. Affected inputs: HTTP, Splunk, Elasticsearch.

Fluent Bit’s tag_key option lets record fields bypass the normal sanitization process and define tags directly, which can lead to path traversal, injection, or unexpected file writes in downstream outputs.

CVE-2025-12972, a path traversal vulnerability in the File output plugin.

Vulnerable configurations:

  • Any configuration where the Tag value can be controlled (directly or indirectly) and the file output lacks a defined File key.
  • HTTP input with tag_key set and file output missing the File key.
  • Splunk input with tag_key set and file output missing the File key.
  • Elasticsearch input with tag_key set and file output missing the File key.
  • Forward input combined with file output missing the File key.

Again, because Fluent Bit uses tags straight from incoming logs without sanitizing them, attackers can use path traversal characters “../” in the tag to change the file path and name. “Since attackers can also partially control the data written to the file, this can lead to RCE on many systems,” the researchers warn.
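The traversal itself is the classic join-then-normalize footgun. A sketch of the vulnerable pattern and the obvious guard (paths and function names are invented for illustration; this is not Fluent Bit’s code):

```python
import posixpath

BASE_DIR = "/var/log/flb"

def output_path(tag: str) -> str:
    # Vulnerable pattern: the tag comes straight from the log record and
    # is joined into the output path without any sanitization.
    return posixpath.normpath(posixpath.join(BASE_DIR, tag + ".log"))

def safe_output_path(tag: str) -> str:
    # Mitigation: normalize, then require the result to stay under BASE_DIR.
    path = output_path(tag)
    if not path.startswith(BASE_DIR + "/"):
        raise ValueError(f"path traversal attempt: {tag!r}")
    return path

evil = "../../../etc/cron.d/backdoor"
print(output_path(evil))  # /etc/cron.d/backdoor.log
```

With three `../` components the normalized path escapes the log directory entirely, which is exactly how an attacker-controlled tag becomes a write to an arbitrary file such as a cron drop-in.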

CVE-2025-12970, a stack buffer overflow bug in the in_docker plugin, used to collect Docker container metrics.

Fluent Bit copies a container’s name into a fixed 256-byte buffer without checking its length, and this means a long container name can overflow that stack buffer. An attacker who can control container names or create containers can use a long name to trigger a stack overflow and crash the agent or execute code. “In a worse scenario, the overflow could let an attacker run code as the agent, letting them steal secrets from the host, install a backdoor, or move laterally to other services,” according to the bug hunters.

CVE-2025-12969, an authentication bypass vulnerability in the in_forward plugin – this is a network input plugin that receives logs from other Fluent Bit or Fluentd instances.

The researchers found that if the security.users configuration option is specified, no authentication occurs. This could allow all manner of nefarious activity, including spamming security alerts to hide actual malicious behavior, injecting false telemetry to hide attackers’ activity, overwriting or exfiltrating logs, or feeding misleading data into detection pipelines.

Worst-case scenario

“A hypothetical worst-case scenario would be an attacker chaining these flaws together,” Katz said. “For example: an attacker sends a crafted log message that abuses the tag_key vulnerabilities (CVE-2025-12977 / CVE-2025-12978) and then embeds path-traversal characters to trigger the file-write vulnerability (CVE-2025-12972). That lets the attacker overwrite files on the host and escalate to remote code execution.”

Additionally, because Fluent Bit is commonly deployed as a Kubernetes DaemonSet, “a single compromised log agent can cascade into full node and cluster takeover, with the attacker tampering with logs to hide their activity and establishing long-term persistence across all nodes,” he added.

[…]

Source: Years-old bugs in open source put major clouds at risk • The Register

Copy-paste overtakes file transfer as top corporate data exfiltration vector; report also flags untrustworthy extensions and missing SSO/MFA

It is now more common for data to leave companies through copy-and-paste than through file transfers and uploads, LayerX revealed in its Browser Security Report 2025.

This shift is largely due to generative AI (genAI), with 77% of employees pasting data into AI prompts, and 32% of all copy-pastes from corporate accounts to non-corporate accounts occurring within genAI tools.

Note: below it also highlights copy-pastes into instant messaging services. What it doesn’t highlight is that everything you paste into Chrome is fair game for Google as far as its terms of service are concerned.

“Traditional governance built for email, file-sharing, and sanctioned SaaS didn’t anticipate that copy/paste into a browser prompt would become the dominant leak vector,” LayerX CEO Or Eshed wrote in a blog post summarizing the report.

The report highlights data loss blind spots in the browser, from shadow SaaS to browser extension supply chain risks, and provides a checklist for CISOs and other security leaders to gain more control over browser activity.

GenAI now accounts for 11% of enterprise application usage, with adoption rising faster than many data loss protection (DLP) controls can keep up. Overall, 45% of employees actively use AI tools, with 67% of these tools being accessed via personal accounts and ChatGPT making up 92% of all use.

Corporate data makes its way to genAI tools through both copying and pasting — with 82% of these copy-pastes occurring via personal accounts — and through file uploads, with 40% of files uploaded to genAI tools containing either personally identifiable information (PII) or payment card information (PCI).

With the rise of AI-driven browsers such as OpenAI’s Atlas and Perplexity’s Comet, governance of AI tools’ access to corporate data becomes even more urgent, the LayerX report notes.

Tackling the growing use of AI tools in the workplace includes establishing allow- and block lists for AI tools and extensions, monitoring for shadow AI activity and restricting the sharing of sensitive data with AI models, LayerX said.

Monitoring clipboards and AI prompts for PII, and blocking risky copy-pastes and prompting actions, can also address this growing data loss vector beyond just focusing on file uploads and traditional vectors like email.

AI tools are not the only vector through which copied-and-pasted data escapes organizations. LayerX found that copy-pastes containing PII or PCI were most likely to land in chat services, i.e. instant messaging (IM) or SMS apps, where 62% of pastes contained sensitive information. Of this data, 87% went to non-corporate accounts.

In addition to copy-paste and file upload risks, the report also delved into the browser extension supply chain, revealing that 53% of employees install extensions with “high” or “critical” permissions. Additionally, 26% of installed extensions are side-loaded rather than being installed through official stores.

Browser extensions are often difficult to vet and poorly maintained, with 54% of extension developers identified only through a free webmail account such as Gmail and 51% of extensions not receiving any updates in over a year. Yet extensions can have access to key data and resources including cookies and user account details, making it critical for organizations to audit and monitor their use.

“Permission audit alone is insufficient. Continuously score developer reputation, update cadence, sideload sources, and AI/agent capabilities. Track changes like you track third-party libraries,” Eshed wrote.

Identity security within browsers was also noted to be a major blind spot for organizations, with 68% of logins to corporate accounts completed without single sign-on (SSO), making it difficult for organizations to properly track identities across apps. Additionally, 26% of enterprise users re-used passwords across accounts and 54% of corporate account passwords were noted to be of medium strength or below.

Source: Copy-paste now exceeds file transfer as top corporate data exfiltration vector | SC Media

Post-heist reports reveal the password for the Louvre’s video surveillance was ‘Louvre,’ and suddenly the dumpster-tier opsec of videogame NPCs seems a lot less absurd

The air of criminal mystique has been dispelled somewhat in the weeks following the October 18 heist that saw $102 million of crown jewels stolen from the Louvre in broad daylight. The suspects fumbled an entire crown during their escape, before trying and failing to light their mechanical lift on fire as a diversionary tactic. Arsène Lupin would be appalled.

How exactly, then, did the most renowned gallery in France find itself pillaged by a cadre of buffoons in high visibility vests? Reporting from French newspaper Libération indicates the theft is less of an anomaly than we might expect, as the Louvre has suffered from over a decade of glaring security oversights and IT vulnerabilities.


As Rogue cofounder and former Polygon arch-jester Cass Marshall notes on Bluesky, we owe a lot of videogame designers an apology. We’ve spent years dunking on the emptyheadedness of game characters leaving their crucial security codes and vault combinations in the open for anyone to read, all while the Louvre has been using the password “Louvre” for its video surveillance servers.

That’s not an exaggeration. Confidential documents reviewed by Libération detail a long history of Louvre security vulnerabilities, dating back to a 2014 cybersecurity audit performed by the French Cybersecurity Agency (ANSSI) at the museum’s request. ANSSI experts were able to infiltrate the Louvre’s security network to manipulate video surveillance and modify badge access.

“How did the experts manage to infiltrate the network? Primarily due to the weakness of certain passwords which the French National Cybersecurity Agency (ANSSI) politely describes as ‘trivial,'” writes Libération’s Brice Le Borgne via machine translation. “Type ‘LOUVRE’ to access a server managing the museum’s video surveillance, or ‘THALES’ to access one of the software programs published by… Thales.”

(Image credit: Starbreeze)

The museum sought another audit from France’s National Institute for Advanced Studies in Security and Justice in 2015. Concluded two years later, the audit’s 40 pages of recommendations described “serious shortcomings,” “poorly managed” visitor flow, rooftops that are easily accessible during construction work, and outdated and malfunctioning security systems.

Later documents indicate that, in 2025, the Louvre was still using security software purchased in 2003 that is no longer supported by its developer, running on hardware using Windows Server 2003.

When the safeguards for France’s crown jewels are two decades out of date, maybe we could all afford to go a little easier on the absurdity of hacking minigames, password post-it notes and extremely stealable keycards. Heists, it seems, aren’t actually all that hard.

Source: Post-heist reports reveal the password for the Louvre’s video surveillance was ‘Louvre,’ and suddenly the dumpster-tier opsec of videogame NPCs seems a lot less absurd | PC Gamer

Security bug in India’s income tax portal exposed taxpayers’ sensitive data – by swapping credential numbers :(

The Indian government’s tax authority has fixed a security flaw in its income tax filing portal that was exposing taxpayers’ sensitive data, TechCrunch has exclusively learned and confirmed with authorities.

The flaw, discovered in September by a pair of security researchers, Akshay CS and “Viral,” allowed anyone who was logged into the income tax department’s e-Filing portal to access the up-to-date personal and financial data of other people.

The exposed data included full names, home addresses, email addresses, dates of birth, phone numbers, and bank account details of people who pay taxes on their income in India. The data also exposed citizens’ Aadhaar number, a unique government-issued identifier used as proof of identity and for accessing government services.

[…]

The researchers found that when they signed into the portal using their Permanent Account Number (PAN), an official document issued by the Indian income tax department, they could view anyone else’s sensitive financial data by swapping out their PAN for another PAN in the network request as the web page loads.

This could be done using publicly available tools like Postman or Burp Suite (or using the web browser’s in-built developer tools) and with knowledge of someone else’s PAN, the researchers told TechCrunch.

The bug was exploitable by anyone who was logged in to the tax portal because the Indian income tax department’s back-end servers were not properly checking who was allowed to access a person’s sensitive data. This class of vulnerability is known as an insecure direct object reference, or IDOR, a common and simple flaw that governments have warned is easy to exploit and can result in large-scale data breaches.
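
The class of flaw is simple enough to sketch in a few lines. The function names and data below are hypothetical, not the portal's actual code; the point is the missing server-side ownership check:

```python
# Hypothetical illustration of an IDOR bug and its fix. The vulnerable
# handler trusts whatever PAN the client sends; the fixed handler
# authorizes the request against the logged-in identity first.

TAXPAYER_DB = {
    "ABCPD1234E": {"name": "Alice", "bank_account": "XXXX-1111"},
    "XYZPK9876F": {"name": "Bob", "bank_account": "XXXX-2222"},
}

def get_profile_vulnerable(session_pan: str, requested_pan: str) -> dict:
    """IDOR: returns whatever record the request asks for."""
    return TAXPAYER_DB[requested_pan]

def get_profile_fixed(session_pan: str, requested_pan: str) -> dict:
    """Checks that the requested record belongs to the session's user."""
    if requested_pan != session_pan:
        raise PermissionError("not authorized to view this record")
    return TAXPAYER_DB[requested_pan]
```

Swapping the PAN in a proxied request defeats the first version but not the second, which is the entire fix for this bug class.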

“This is an extremely low-hanging thing, but one that has a very severe consequence,” the researchers told TechCrunch.

[…]

Source: Security bug in India’s income tax portal exposed taxpayers’ sensitive data | TechCrunch

This kind of stuff was well known and supposed to be stopped around 20 years ago…

Another Day, Another Age Verification Data Breach: Discord’s Third-Party Partner Leaked Government IDs. That didn’t take long, did it?

Once again, we’re reminded why age verification systems are fundamentally broken when it comes to privacy and security. Discord has disclosed that one of its third-party customer service providers was breached, exposing user data, including government-issued photo IDs, from users who had appealed age determinations.

Data potentially accessed by the hack includes things like names, usernames, emails, and the last four digits of credit card numbers. The unauthorized party also accessed a “small number” of images of government IDs from “users who had appealed an age determination.” Full credit card numbers and passwords were not impacted by the breach, Discord says.

Seems pretty bad.

What makes this breach particularly instructive is that it highlights the perverse incentives created by age verification mandates. Discord wasn’t collecting government IDs because they wanted to—they were responding to age determination appeals, likely driven by legal and regulatory pressures to keep underage users away from certain content. The result? A treasure trove of sensitive identity documents sitting in the systems of a third-party customer service provider that had no business being in the identity verification game.

To “protect the children” we end up putting everyone at risk.

This is exactly the kind of incident that privacy advocates have been warning about for years as lawmakers push for increasingly stringent age verification requirements across the internet. Every time these systems are implemented, we’re told they’re secure, that the data will be protected, that sophisticated safeguards are in place. And every time, we eventually get stories like this one.

The pattern reveals a fundamental misunderstanding of how security works in practice versus theory. Age verification proponents consistently treat identity document collection as a simple technical problem with straightforward solutions, ignoring the complex ecosystem these requirements create. Companies like Discord find themselves forced to collect documents they don’t want, storing them with third-party processors they don’t fully control, creating attack surfaces that wouldn’t otherwise exist.

These third parties become attractive targets precisely because they aggregate identity documents from multiple platforms—a single breach can expose IDs collected on behalf of dozens of different services. When the inevitable breach occurs, it’s not just usernames and email addresses at risk—it’s the kind of documentation that can enable identity theft and fraud for years to come, affecting people who may have forgotten they ever uploaded an ID to appeal an automated age determination.

[…]

the fundamental problem remains: we’re creating systems that require the collection and storage of highly sensitive identity documents, often by companies that aren’t primarily in the business of securing such data. This isn’t Discord’s fault specifically—they were dealing with age verification appeals, likely driven by regulatory or legal pressures to prevent underage users from accessing certain content or features.

This breach should serve as yet another data point in the growing pile of evidence that age verification systems create more problems than they solve. The irony is that lawmakers pushing these requirements often claim to be protecting children’s privacy, while simultaneously mandating the creation of vast databases of identity documents that inevitably get breached. We’ve seen similar incidents affect everything from adult websites to social media platforms to online retailers, all because policymakers have decided that collecting copies of driver’s licenses and passports is somehow a reasonable solution to online age verification.

The real tragedy is that this won’t be the last such breach we see. As long as lawmakers continue pushing for more aggressive age verification requirements without considering the privacy and security implications, we’ll keep seeing stories like this one. The question isn’t whether these systems will be breached—it’s when, and how many people’s sensitive documents will be exposed in the process.

[…]

Source: Another Day, Another Age Verification Data Breach: Discord’s Third-Party Partner Leaked Government IDs | Techdirt

If you want to look at previous articles telling you what an insanely bad idea mandatory age verification systems are and how they are insecure, you can just search this blog.

Motion sensors in high-performance mice can be used as a microphone to spy on users, thanks to AI — Mic-E-Mouse technique harnesses mouse sensors, converts acoustic vibrations into speech

A group of researchers from the University of California, Irvine, have developed a way to use the sensors in high-quality optical mice to capture subtle vibrations and convert them into audible data. According to the abstract of Mic-E-Mouse (full PDF here), the high polling rate and sensitivity of high-performance optical mice pick up acoustic vibrations from the surface where they sit. By running the raw data through signal processing and machine learning techniques, the team could hear what the user was saying through their desk.

Mouse sensors rated at 20,000 DPI or higher are vulnerable to this attack. And with high-DPI sensors trickling down to cheaper gaming mice every year, even relatively affordable peripherals are at risk.
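
The core trick is easy to picture: a mouse polled thousands of times per second is, crudely, an audio sampler. The toy sketch below (with made-up sample values; the real pipeline adds Wiener filtering and neural models on top) shows the first step of turning displacement reports into a waveform:

```python
# Toy first stage of a Mic-E-Mouse-style pipeline: treat high-rate mouse
# displacement reports as a crude audio signal. Illustrative only.

def deltas_to_waveform(dy_samples):
    """Remove the DC offset and peak-normalize displacement samples to [-1, 1]."""
    mean = sum(dy_samples) / len(dy_samples)
    centered = [s - mean for s in dy_samples]
    peak = max(abs(s) for s in centered) or 1.0
    return [s / peak for s in centered]
```

At a 20,000 DPI sensor's sensitivity and a polling rate in the kilohertz range, desk vibrations from speech land in exactly this kind of sample stream, which is what the researchers' signal processing then cleans up.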

[…]

(Video: Mic-E-Mouse Pipeline Demonstration – YouTube)

[…]

this method is empowered by AI models, allowing the researchers to achieve a speech recognition accuracy of about 42 to 61%.

[…]

Source: Motion sensors in high-performance mice can be used as a microphone to spy on users, thanks to AI — Mic-E-Mouse technique harnesses mouse sensors, converts acoustic vibrations into speech | Tom’s Hardware

Unity Real-Time Development Platform Vulnerability Let Attackers Execute Arbitrary Code

Unity Technologies has issued a critical security advisory warning developers about a high-severity vulnerability affecting its widely used game development platform.

The flaw, designated CVE-2025-59489, exposes applications built with vulnerable Unity Editor versions to unsafe file loading attacks that could enable local code execution and privilege escalation across multiple operating systems.

The vulnerability stems from an untrusted search path weakness (CWE-426) that allows attackers to exploit unsafe file loading mechanisms within Unity-built applications.

With a CVSS score of 8.4, this security issue affects virtually all Unity Editor versions from 2017.1 through current releases, potentially impacting millions of deployed games and applications worldwide.

Local File Inclusion Vulnerability

The vulnerability manifests differently across operating systems, with Android applications facing the highest risk as they are susceptible to both code execution and elevation of privilege attacks.

Windows, Linux Desktop, Linux Embedded, and macOS platforms experience elevation of privilege risks, allowing attackers to gain unauthorized access at the application’s privilege level.

Security researchers at GMO Flatt Security Inc. discovered the flaw on June 4, 2025, through responsible disclosure practices.

The vulnerability exploits local file inclusion mechanisms, enabling attackers to execute arbitrary code confined to the vulnerable application’s privilege level while potentially accessing confidential information available to that process.

On Windows systems, the threat landscape becomes more complex when custom URI handlers are registered for Unity applications.

Attackers who can trigger these URI schemes may exploit the vulnerable library-loading behavior without requiring direct command-line access, significantly expanding the attack surface.
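
The underlying weakness is easy to demonstrate in miniature. This sketch (illustrative only; Unity's actual loader is native code) shows why search order matters when a library is resolved by bare name, which is the essence of CWE-426:

```python
# Untrusted search path (CWE-426) in miniature: if an attacker-writable
# directory (e.g. the current working directory) is searched before the
# application's own directory, a planted file shadows the real library.
from pathlib import Path

def resolve_library(name, search_dirs):
    """Return the first file matching `name` in the given search order."""
    for directory in search_dirs:
        candidate = Path(directory) / name
        if candidate.is_file():
            return candidate
    return None
```

The generic mitigation is to search only trusted, application-controlled directories (and ideally load by absolute path), which is what fixes for this bug class amount to.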

Risk factors:

  • Affected Products: Unity Editor versions 2017.1+ and applications built with these versions across Android, Windows, Linux, and macOS
  • Impact: Local code execution, privilege escalation, information disclosure
  • Exploit Prerequisites: Local system access, vulnerable Unity-built application present on target system
  • CVSS 3.1 Score: 8.4 (High)

Mitigations

Unity has released patches for all supported versions and extended fixes to legacy versions dating back to Unity 2019.1.

The company provides two primary remediation approaches: rebuilding applications with updated Unity Editor versions or applying binary patches using Unity’s specialized patch tool for deployed applications.

[…]

Source: Unity Real-Time Development Platform Vulnerability Let Attackers Execute Arbitrary Code

Israeli-linked security company now owns many popular VPN products

Social media users are calling for the mass cancellation of ExpressVPN subscriptions after it was revealed that a cybersecurity firm with Israeli ties owns the popular privacy service.

In 2021, The Times of Israel reported that Kape Technologies, a British-Israeli digital security company, acquired ExpressVPN, one of the world’s largest virtual private network (VPN) providers, for nearly $1bn.

[…]

Kape Technologies, based in London and founded in 2010, has previously acquired VPN services, including CyberGhost, ZenMate, and Private Internet Access.

People across social media have urged users to delete the app, citing concerns over surveillance, military ties, and ethical complicity.

[…]

Source: Outcry over ExpressVPN ownership: What the Israeli connection means for user privacy | Middle East Eye

Seemingly safe to use at the time of writing: NordVPN, Surfshark, Mullvad (please do your own research!)

Quantum random number generator combines small size and high speed

Researchers have developed a chip-based quantum random number generator that provides high-speed, high-quality operation on a miniaturized platform. This advance could help move quantum random number generators closer to being built directly into everyday devices, where they could strengthen security without sacrificing speed.

True randomness is essential for secure online banking, private messaging, and protecting from hackers, and the rising need for stronger digital protection is driving fast-growing demand for high-quality random numbers generated at high speeds.

“The quantum properties of light make it possible to produce numbers that are truly random, unlike the numbers generated by computer algorithms, which only imitate randomness,” said research team leader Raymond Smith from Toshiba’s Cambridge Research Laboratory in the United Kingdom. “However, making this technology practical for real-world use requires the devices that create these random numbers to be as small as possible so they can fit inside other systems.”

In the journal Optica Quantum, the researchers describe a new quantum design that can recover the quantum signal even when it’s buried in noise, which has been challenging to accomplish with chip-integrated devices. The new device can generate unpredictable random numbers at a rate of 3 gigabits per second, fast enough to support the security needs of large-scale data centers.

“A major application of random number generators is in protecting sensitive data and communications using encryption keys,” said Smith. “Our technology can generate those keys at high speed and with strong security guarantees. High-speed random numbers are also critical for scientific simulations and for ensuring fairness in applications like online gaming or digital lotteries.”
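
The excerpt doesn't detail Toshiba's extraction scheme, but the post-processing idea behind any quantum RNG, turning biased physical noise into unbiased output bits, can be illustrated with the classic von Neumann debiaser (a textbook technique, not the method used in this device):

```python
# Von Neumann debiasing: read raw bits in pairs; 01 -> 0, 10 -> 1,
# and discard 00/11. If the raw bits are independent, the output is
# unbiased regardless of the source's bias, at the cost of throughput.

def von_neumann_extract(raw_bits):
    out = []
    for a, b in zip(raw_bits[::2], raw_bits[1::2]):
        if a != b:
            out.append(a)
    return out
```

Real devices use far more efficient extractors (e.g. Toeplitz hashing), but the principle is the same: the quantum source supplies entropy, and post-processing turns it into uniform bits at a guaranteed rate.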

[…]

Source: Quantum random number generator combines small size and high speed

Viral ‘pay to record your calls for AI’ app Neon takes itself down after exposing users’ phone numbers, call recordings, and transcripts to world + dog

A viral app called Neon, which offers to record your phone calls and pay you for the audio so it can sell that data to AI companies, has rapidly risen to the ranks of the top-five free iPhone apps since its launch last week.

The app already has thousands of users and was downloaded 75,000 times yesterday alone, according to app intelligence provider Appfigures. Neon pitches itself as a way for users to make money by providing call recordings that help train, improve, and test AI models.

But Neon has gone offline, at least for now, after a security flaw allowed anyone to access the phone numbers, call recordings, and transcripts of any other user, TechCrunch can now report.

TechCrunch discovered the security flaw during a short test of the app on Thursday. We alerted the app’s founder, Alex Kiam (who previously did not respond to a request for comment about the app), to the flaw soon after our discovery.

Kiam told TechCrunch later Thursday that he took down the app’s servers and began notifying users about pausing the app, but stopped short of informing them about the security lapse.

The Neon app stopped functioning soon after we contacted Kiam.

[…]

Source: Viral call-recording app Neon goes dark after exposing users’ phone numbers, call recordings, and transcripts | TechCrunch

OpenAI plugs ShadowLeak bug in ChatGPT, which allowed anybody access to everybody’s Gmail emails and any other integrations

ChatGPT’s research assistant sprang a leak – since patched – that let attackers steal Gmail secrets with just a single carefully crafted email.

Deep Research, a tool unveiled by OpenAI in February, enables users to ask ChatGPT to browse the internet or their personal email inbox and generate a detailed report on its findings. The tool can be integrated with apps like Gmail and GitHub, allowing people to do deep dives into their own documents and messages without ever leaving the chat window.

Cybersecurity outfit Radware this week disclosed a critical flaw in the feature, dubbed “ShadowLeak,” warning that it could allow attackers to siphon data from inboxes with no user interaction whatsoever. Researchers showed that simply sending a maliciously crafted email to a Deep Research user was enough to get the agent to exfiltrate sensitive data when it later summarized that inbox.

The attack relies on hiding instructions inside the HTML of an email using white-on-white text, CSS tricks, or metadata, which a human recipient would never notice. When Deep Research later crawls the mailbox, it dutifully follows the attacker’s hidden orders and sends the contents of messages, or other requested data, to a server controlled by the attacker.

Radware stressed that this isn’t just a prompt injection on the user’s machine. The malicious request is executed from OpenAI’s own infrastructure, making it effectively invisible to corporate security tooling.

That server-side element is what makes ShadowLeak particularly nasty. There’s no dodgy link for a user to click, and no suspicious outbound connection from the victim’s laptop. The entire operation happens in the cloud, and the only trace is a benign-looking query from the user to ChatGPT asking it to “summarize today’s emails”. […] The researchers argue that the risk isn’t limited to Gmail either. Any integration that lets ChatGPT hoover up private documents could be vulnerable to the same trick if input sanitization isn’t watertight.

[…]

Radware said it reported the ShadowLeak bug to OpenAI on June 18 and the company released a fix on September 3. The Register asked OpenAI what specific changes were made to mitigate this vulnerability and whether it had seen any evidence that the vulnerability had been exploited in the wild before disclosure, but did not receive a response.

Radware is urging organizations to treat AI agents as privileged users and to lock down what they can access. HTML sanitization, stricter control over which tools agents can use, and better logging of every action taken in the cloud are all on its list of recommendations. ®
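
What "HTML sanitization" means here can be sketched briefly. The stdlib-only Python below drops text inside elements styled to be invisible before an agent ever sees it; a production sanitizer needs many more rules (CSS classes, zero-size fonts, metadata), so treat this as an illustration of the idea, not a defense:

```python
# Sketch: extract only the text a human recipient would plausibly see,
# discarding content hidden via inline styles (the white-on-white trick).
from html.parser import HTMLParser

HIDDEN_MARKERS = ("display:none", "visibility:hidden",
                  "color:#ffffff", "color:white")

class VisibleTextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack = []        # True for each open tag that hides content
        self.hidden_depth = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        style = (dict(attrs).get("style") or "").replace(" ", "").lower()
        hidden = any(m in style for m in HIDDEN_MARKERS)
        self.stack.append(hidden)
        self.hidden_depth += hidden

    def handle_endtag(self, tag):
        if self.stack and self.stack.pop():
            self.hidden_depth -= 1

    def handle_data(self, data):
        if not self.hidden_depth:
            self.chunks.append(data)

def visible_text(html: str) -> str:
    parser = VisibleTextExtractor()
    parser.feed(html)
    return "".join(parser.chunks)
```

Feeding the agent only `visible_text(email_html)` closes the specific white-on-white channel, though prompt injection through text the user *can* see remains an open problem.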

Source: OpenAI plugs ShadowLeak bug in ChatGPT • The Register

Entra ID bug granted easy access to every tenant

A security researcher claims to have found a flaw that could have handed him the keys to almost every Entra ID tenant worldwide.

Dirk-jan Mollema reported the finding to the Microsoft Security Research Center (MSRC) in July. The issue was fixed and confirmed as mitigated, and a CVE was raised on September 4.

It is, however, an alarming vulnerability involving flawed token validation that can result in cross-tenant access. “If you are an Entra ID admin,” wrote Mollema, “that means complete access to your tenant.”

There are two main elements to the vulnerability. The first, according to Mollema, is a set of undocumented impersonation tokens called “Actor tokens” that Microsoft uses for service-to-service communication. The second is a flaw in the legacy Azure Active Directory Graph API, which did not properly validate the originating tenant, allowing the tokens to be used for cross-tenant access.

“Effectively,” wrote Mollema, “this means that with a token I requested in my lab tenant I could authenticate as any user, including Global Admins, in any other tenant.”

The tokens allowed full access to the Azure AD Graph API in any tenant. Any hope that a log might save the day was also dashed – “requesting Actor tokens does not generate logs.”

“Even if it did, they would be generated in my tenant instead of in the victim tenant, which means there is no record of the existence of these tokens.”

The upshot of the flaw was a possible compromise for any service that uses Entra ID for authentication, such as SharePoint Online or Exchange Online. Mollema noted that access to resources hosted in Azure was also possible.
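
The missing check is conceptually small: a relying service must confirm the tenant a token was issued for (the `tid` claim in Entra tokens), not just its signature. A hedged sketch, with hypothetical claim values and signature/expiry validation assumed to have happened upstream:

```python
# Illustration of tenant validation on an already-verified token's claims.
# Claim names follow Entra conventions ("tid" = tenant ID); the values
# and function are hypothetical, not Microsoft's actual code.

EXPECTED_TENANT = "11111111-2222-3333-4444-555555555555"  # hypothetical

def token_authorized(claims: dict) -> bool:
    """Reject tokens minted in another tenant, even if otherwise valid."""
    return claims.get("tid") == EXPECTED_TENANT
```

Skipping this one comparison is what turned a lab-tenant token into a skeleton key for every other tenant.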

[…]

Source: Entra ID bug could have granted access to every tenant • The Register

China: 1-hour deadline on serious cyber incident reporting

Beijing will soon expect Chinese network operators to ‘fess up to serious cyber incidents within an hour of spotting them – or risk penalties for dragging their feet.

From November 1, the Cyberspace Administration of China (CAC) will enforce its new National Cybersecurity Incident Reporting Management Measures, a sweeping set of rules that tighten how quickly incidents must be disclosed.

The rules apply to a broad category of “network operators,” which in China effectively means anyone who owns, manages, or provides network services, and mandate that serious incidents be reported to the relevant authorities within 60 minutes – or in the case of “particularly major” events, 30 minutes.

“If it is a major or particularly important network security incident, the protection department shall report to the national cyber information department and the public security department of the State Council as soon as possible after receiving the report, no later than half an hour,” the CAC states.

The regulations set out a four-tier system for classifying cyber incidents, but reserve their most challenging demands for the highest, “particularly major” tier. Incidents in this category include the loss or theft of core or sensitive data that threatens national security or social stability, a leak of more than 100 million citizens’ personal records, or outages that take key government or news websites offline for more than 24 hours.

The CAC also considers direct economic losses of more than ¥100 million (about £10.3 million) enough to trigger the highest classification.

Operators must file their initial report with a laundry list of details: what systems were hit, the timeline of the attack, the type of incident, what damage was done, what steps were taken to contain it, the preliminary cause, vulnerabilities exploited, and even ransom amounts if a shakedown was involved. They also need to include a grim bit of crystal-ball gazing – an assessment of possible future harm, and what government support they need in order to recover.

After the dust settles, a final postmortem must be submitted within 30 days, detailing causes, lessons learned, and where the blame lies.

Anyone caught sitting on an incident or trying to brush it under the carpet can expect to face penalties, with both network operators and government suits in the firing line.

“If the network operator reports late, omitted, falsely reported or concealed network security incidents, causing major harmful consequences, the network operator and the relevant responsible persons shall be punished more severely according to law,” the CAC warns.

Beijing’s cyber cops have rolled out a bunch of reporting channels – hotline 12387, a website, WeChat, email, and more – making it harder for anyone to plead ignorance when their network catches fire.

Compared to Europe’s leisurely 72-hour breach deadline, Beijing’s stopwatch will force many organizations to invest in real-time monitoring and compliance teams that can make a go/no-go call in minutes rather than days.

The introduction of these stringent new reporting rules comes just days after Dior’s Shanghai arm was fined for transferring customer data to its French headquarters without the legally required security screening, proper customer disclosure, or even encryption. ®

Source: China: 1-hour deadline on serious cyber incident reporting • The Register

There must be a huge government department back there waiting to “help out”. I do wonder what shape this kind of “help” will take.

Samsung patches Android WhatsApp vuln exploited in the wild, echoing earlier attacks on Apple devices

Samsung has fixed a critical flaw that affects its Android devices – but not before attackers found and exploited the bug, which could allow remote code execution on affected devices.

The vulnerability, tracked as CVE-2025-21043, affects Android OS versions 13, 14, 15, and 16. It’s due to an out-of-bounds write vulnerability in libimagecodec.quram.so, a parsing library used to process image formats on Samsung devices, which remote attackers can abuse to execute malicious code.

“Samsung was notified that an exploit for this issue has existed in the wild,” the electronics giant noted in its September security update.

The Meta and WhatsApp security teams found the flaw and reported it to Samsung on August 13. Apps that process images on Samsung kit, potentially including WhatsApp, may trigger this library, but Samsung didn’t name specific apps.

The warning is interesting, because Meta shortly thereafter issued a security advisory warning that attackers may have chained a WhatsApp bug with an Apple OS-level flaw in highly targeted attacks.

The WhatsApp August security update included a fix for CVE-2025-55177 that, as Meta explained, “could have allowed an unrelated user to trigger processing of content from an arbitrary URL on a target’s device.”

That security advisory went on to say, “We assess that this vulnerability, in combination with an OS-level vulnerability on Apple platforms (CVE-2025-43300), may have been exploited in a sophisticated attack against specific targeted users.”

CVE-2025-43300 is an out-of-bounds write issue that Apple addressed on August 20 with a patch that improves bounds checking in the ImageIO framework. “Processing a malicious image file may result in memory corruption,” the iThings maker said at the time. “Apple is aware of a report that this issue may have been exploited in an extremely sophisticated attack against specific targeted individuals.”

While Meta didn’t mention the newer Android OS-level flaw in its August WhatsApp security update, it seems that CVE-2025-21043 could also be chained to CVE-2025-55177 for a similar attack targeting WhatsApp users on Samsung Android devices instead of Apple’s.

[…]

Source: Samsung patches Android 0-day exploited in the wild • The Register

Critical, make-me-super-user SAP S/4HANA bug being exploited

A critical code-injection bug in SAP S/4HANA that allows low-privileged attackers to take over your SAP system is being actively exploited, according to security researchers.

SAP issued a patch for the 9.9-rated flaw in August. It is tracked as CVE-2025-42957, and it affects both private cloud and on-premises versions.

According to SecurityBridge Threat Research Labs, which originally spotted and disclosed the vulnerability to SAP, the team “verified actual abuse of this vulnerability.” It doesn’t appear to be widespread (yet), but the consequences of this flaw are especially severe.

“For example, SecurityBridge’s team demonstrated in a lab environment how an attacker could create a new SAP superuser account (with SAP_ALL privileges) and directly manipulate critical business data,” the researchers said in a Thursday write-up alongside a video demo of the exploit.

It’s low-complexity to exploit. The bug enables a user to inject arbitrary ABAP code into the system, thus bypassing authorization checks and essentially creating a backdoor that allows full system compromise, data theft, and operational disruption. In other words: it’s effectively game over.

[…]

Source: Critical, make-me-super-user SAP S/4HANA bug being exploited • The Register

18 popular VPNs turn out to belong to 3 different owners – and contain insecurities as well

A new peer-reviewed study alleges that 18 of the 100 most-downloaded virtual private network (VPN) apps on the Google Play Store are secretly connected in three large families, despite claiming to be independent providers. The paper doesn’t indict any of our picks for the best VPN, but the services it investigates are popular, with 700 million collective downloads on Android alone.

The study, published in the journal of the Privacy Enhancing Technologies Symposium (PETS), doesn’t just find that the VPNs in question failed to disclose behind-the-scenes relationships, but also that their shared infrastructures contain serious security flaws. Well-known services like Turbo VPN, VPN Proxy Master and X-VPN were found to be vulnerable to attacks capable of exposing a user’s browsing activity and injecting corrupted data.

Titled “Hidden Links: Analyzing Secret Families of VPN apps,” the paper was inspired by an investigation by VPN Pro, which found that several VPN companies were each selling multiple apps without identifying the connections between them. This spurred the “Hidden Links” researchers to ask whether the relationships between secretly co-owned VPNs could be documented systematically.

[…]

Family A consists of Turbo VPN, Turbo VPN Lite, VPN Monster, VPN Proxy Master, VPN Proxy Master Lite, Snap VPN, Robot VPN and SuperNet VPN. These were found to be shared between three providers — Innovative Connecting, Lemon Clove and Autumn Breeze. All three have been linked to Qihoo 360, a firm based in mainland China and identified as a “Chinese military company” by the US Department of Defense.

Family B consists of Global VPN, XY VPN, Super Z VPN, Touch VPN, VPN ProMaster, 3X VPN, VPN Inf and Melon VPN. These eight services, which are shared between five providers, all use the same IP addresses from the same hosting company.

Family C consists of X-VPN and Fast Potato VPN. Although these two apps each come from a different provider, the researchers found that both used very similar code and included the same custom VPN protocol.

If you’re a VPN user, this study should concern you for two reasons. The first problem is that companies entrusted with your private activities and personal data are not being honest about where they’re based, who owns them or who they might be sharing your sensitive information with. Even if their apps were all perfect, this would be a severe breach of trust.

But their apps are far from perfect, which is the second problem. All 18 VPNs across all three families use the Shadowsocks protocol with a hard-coded password, which makes them susceptible to takeover from both the server side (which can be used for malware attacks) and the client side (which can be used to eavesdrop on web activity).
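
Why a hard-coded password is so damaging: legacy Shadowsocks derives its symmetric key deterministically from the password alone, via an EVP_BytesToKey-style MD5 chain (sketched below per the protocol's documented scheme; verify against the current spec before relying on it). Every copy of an app shipping the same password therefore shares one key:

```python
# Legacy Shadowsocks-style key derivation: the key is a pure function of
# the password, so a password hard-coded into an app yields the same key
# for every install, letting anyone with the app impersonate the server
# or decrypt other users' traffic.
import hashlib

def shadowsocks_key(password: bytes, key_len: int = 32) -> bytes:
    derived = b""
    prev = b""
    while len(derived) < key_len:
        prev = hashlib.md5(prev + password).digest()
        derived += prev
    return derived[:key_len]
```

There is no per-user or per-session secret anywhere in this derivation, which is why the researchers could flag all 18 apps as open to both server-side and client-side takeover.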

[…]

Source: Researchers find alarming overlaps among 18 popular VPNs

US spy chief Gabbard says UK agreed to drop ‘backdoor’ mandate for Apple

U.S. Director of National Intelligence Tulsi Gabbard said on Monday the UK had agreed to drop its mandate for iPhone maker Apple to provide a “backdoor” that would have enabled access to the protected encrypted data of American citizens.

Gabbard issued the statement on X, saying she had worked for months with Britain, along with President Donald Trump and Vice President JD Vance, to arrive at a deal.

[…]

U.S. lawmakers said in May that the UK’s order to Apple to create a backdoor to its encrypted user data could be exploited by cybercriminals and authoritarian governments.

Apple, which has said it would never build such access into its encrypted services or devices, had challenged the order at the UK’s Investigatory Powers Tribunal (IPT).

The iPhone maker withdrew its Advanced Data Protection feature for UK users in February following the UK order. Users of Apple’s iPhones, Macs and other devices can enable the feature to ensure that only they — and not even Apple — can unlock data stored on its cloud.

U.S. officials said earlier this year they were examining whether the UK broke a bilateral agreement by demanding that Apple build a backdoor allowing the British government to access backups of data in the company’s encrypted cloud storage systems.

In a letter dated February 25 to U.S. lawmakers, Gabbard said the U.S. was examining whether the UK government had violated the CLOUD Act, which bars it from issuing demands for the data of U.S. citizens and vice versa.

Cybersecurity experts told Reuters that if Apple chose to build a backdoor for a government, that backdoor would eventually be found and exploited by hackers.
[…]

Source: US spy chief Gabbard says UK agreed to drop ‘backdoor’ mandate for Apple | Reuters

Phishing training is pretty pointless, researchers find

In a scientific study involving thousands of test subjects, eight months, and four different kinds of phishing training, the training reduced the rate at which subjects fell for phishing scams by a whopping 1.7% on average.

“Is all of this focus on training worth the outcome?” asked researcher Ariana Mirian, a senior security researcher at Censys and recently a Ph.D. student at U.C. San Diego, where the study was conducted. “Training barely works.”

[…]

Dameff and Mirian wanted scientifically rigorous, real-world results. (You can read their academic paper here.) They enrolled more than 19,000 employees of the UCSD Health system and randomly split them into five groups, each member of which would see something different when they failed a phishing test randomly sent once a month to their workplace email accounts.

  • Control: Its members got a 404 error if they clicked on a phishing link in the body of the email.
  • Generic static: This group saw a static webpage containing general information about avoiding phishing scams.
  • Generic interactive: This group was walked through an interactive question-and-answer exercise.
  • Contextual static: A static webpage again, but this time showing the exact phishing lure the subject had received and pointing out the warning signs that were missed.
  • Contextual interactive: An interactive Q&A session that walked the subject through what they had missed in the specific lure they’d received.

Over the eight months of testing, however, there was little difference in improvement among the four groups that received different kinds of training. Those groups did improve a bit over the control group’s performance — by the aforementioned 1.7%.

Not what was expected

However, there were some lessons learned — not all expected. The first was that it helped a lot to change up the phishing lures. Most subjects saw right through a phishing email that urged the recipients to change their Outlook account passwords, resulting in failure rates between 1% and 4%.

But about 30% of users clicked on a link promising information about a change in the organization’s vacation policy. Almost as many fell for one about a change in workplace dress code.

“Whoever controls the lures controls the failure rates,” said Mirian. “It’s important to have different lures in your phishing training.”

Another lesson was that given enough time, almost everyone falls for a phishing email. Over the eight months of the experiment, just over 50% failed at least once.
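That cumulative figure is roughly what independent monthly tests would predict, since even a modest per-test failure rate compounds over time. A back-of-the-envelope illustration (the 8% per-test rate is an assumed round number, not a figure from the paper):

```python
# P(at least one failure in n independent tests) = 1 - (1 - p)^n
p = 0.08   # assumed per-test failure probability (illustrative)
n = 8      # one phishing test per month for eight months
p_at_least_one = 1 - (1 - p) ** n
print(round(p_at_least_one, 2))  # → 0.49, close to the ~50% observed
```

In other words, a per-email failure rate that looks small on any single test still catches about half the workforce over eight months.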

“Given enough time, most people get pwned,” said Mirian. “We need to stop punishing people who fail phishing tests. You’d end up punishing half the company.”

[…]

Source: Phishing training is pretty pointless, researchers find | SC Media

And for a more guerrilla approach, you may want to look at this: