Don’t delete your new inetpub folder. It’s a Windows security fix

Canny Windows users who’ve spotted a mysterious folder on hard drives after applying last week’s security patches for the operating system can rest assured – it’s perfectly benign. In fact, it’s recommended you leave the directory there.

The folder, typically C:\inetpub, is empty and related to Microsoft’s Internet Information Services (IIS). It will be created when you install the security patches whether or not you’re using that optional web server. The purpose of the folder is to mitigate an exploitable elevation-of-privileges flaw within Windows Process Activation, classified as CVE-2025-21204.

That CVE, which can give malware on a system or a rogue user system-level file-management privileges, was fixed in the April Patch Tuesday batch from the Windows maker; installing the fix on Windows 11 and 10 will create the directory as additional protection, we’re told.

“After installing the updates listed in the security updates table for your operating system, a new %systemdrive%\inetpub folder will be created on your device,” advised Microsoft.

“This folder should not be deleted regardless of whether Internet Information Services (IIS) is active on the target device. This behavior is part of changes that increase protection and does not require any action from IT admins and end users.”

[…]

If you have deleted it after applying the patch, there’s a fix. Go to the Windows Control Panel and open Programs and Features. On the left you’ll see “Turn Windows features on or off.” Scroll down until you find IIS and hit “OK” after highlighting it. The folder will be recreated with the correct SYSTEM-level permissions. You can then switch off IIS and restart. (No one uses IIS these days.)

Or create the folder by hand with read-only access and SYSTEM-level ownership

Source: Don’t delete inetpub folder. It’s a Windows security fix • The Register

Windows’ Recall Spyware Is Back—Here’s How to Control It

Remember Recall? It’s been close to full trip around the sun since Microsoft announced then suddenly pulled its AI-powered, auto-screenshotting “photographic memory” software for Copilot+ PCs. Whether you want it or not, the feature is coming back, and you should be prepared for it not just if you’re planning to use it, but if you imagine any of your friends, family, or coworkers plan to use it too.

Microsoft’s latest blog about the Windows Insider build KB5055627 includes the note that Recall is rolling out “gradually” to beta users over the coming weeks. Like what Microsoft first showed off in May 2024, Recall automatically screenshots most apps, webpages, or documents you’re on. The system catalogues all these screenshots then uses on-device AI to parse what’s on each screenshot

[…]

Microsoft originally recalled Recall  when security experts found glaring, obvious holes in the software that let any user with access to the PC read the AI’s excerpts. The program had no qualms about screenshotting bank accounts, social security numbers, or any other sensitive information. Microsoft returned Recall to the drawing board, and now users need to enroll in Windows Hello biometric or PIN security to access the screenshots. Users can also pause screenshots or filter out certain apps or specific webpages (though only for Edge, Firefox, Opera, and Chrome browsers). That may not be foolproof, as reports from late last year showed Recall failed to detect when it was looking at bank info. It will be up to users to ensure every sensitive page they visit is on the no-go list.

Microsoft Recall Windows Security 2
© Microsoft

Users will choose whether to enable or disable Recall the first time they startup their device with the new update. To disable it, you need to search “Turn Windows features on or off” in the Windows 11 taskbar, then uncheck Recall.

[…]

This is where some security-focused Windows users are especially concerned. You can tell Recall to gather dust alongside all the other pre-installed Windows apps, but that doesn’t mean your less-tech literate family member will. Security blogger Em pointed out in a Mastodon post (via Ars Technica) if you send that family member any photos or sensitive information, they could be scraping everything you text or email them, including family photos or passwords, and you wouldn’t even know it.

[…]

Source: Windows’ Controversial Recall Is Back—Here’s How to Control It

Don’t open that file in WhatsApp for Windows just yet – there is no check if it’s not just a renamed .exe

A bug in WhatsApp for Windows can be exploited to execute malicious code by anyone crafty enough to persuade a user to open a rigged attachment – and, to be fair, it doesn’t take much craft to pull that off.

The spoofing flaw, tracked as CVE-2025-30401, affects all versions of WhatsApp Desktop for Windows prior to 2.2450.6, and stems from a bug in how the app handles file attachments.

Specifically, WhatsApp displays attachments based on their MIME type – the metadata meant to indicate what kind of file it is – but when a user opens the file, the app hands it off based on its filename extension instead. That means something disguised as a harmless image with the right MIME type but ending in .exe could be executed as a program – if the user clicks it.

“A maliciously crafted mismatch could have caused the recipient to inadvertently execute arbitrary code rather than view the attachment when manually opening the attachment inside WhatsApp,” WhatsApp’s parent company Meta explained in its security advisory.

[…]

Make sure you’re running a version of WhatsApp for Windows higher than 2.2450.6 to be safe.

[…]

Source: Don’t open that file in WhatsApp for Windows just yet • The Register

Boeing 787 radio software patch didn’t work, says Qatar, it still turns itself off and changes frequencies by itself.

Boeing issued a software safety patch for the VHF radio systems used on its 787 aircraft, and the update turned out to be ineffective, Qatar Airways has complained.

In February, the US Department of Transportation issued an advisory [PDF] about a problem with the aircraft’s electronics that was causing VHF radio traffic to unexpectedly switch between active and standby mode. In practice, this means pilots constantly have to check their radio settings to make sure all messages from air traffic control are received, and multiple cases of this unwanted switching have been reported.

“The FAA has received reports indicating that VHF radio frequencies transfer between the active and standby windows of the TCP [tuning control panel] without flightcrew input,” the dept said.

“The flightcrew may not be aware of uncommanded frequency changes and could fail to receive air traffic control communications. This condition, if not addressed, could result in missed communications such as amended clearances and critical instructions for changes to flight path and consequent loss of safe separation between aircraft, collision, or runway incursion.”

Boeing issued a free software fix to stop the mode changes and, according to Uncle Sam, the update will take 90 minutes to install with an estimated labor cost of $127.50 per aircraft, with 157 US airplanes reportedly vulnerable. The problem affects 787-8, 787-9, and 787-10 aircraft.

The unsafe condition still exists on airplanes

America’s aviation watchdog the FAA has asked for feedback from airlines by April 14 on the situation, and Qatar Airways isn’t waiting that long. It has already warned the patch isn’t working as it should: The radios still change mode without warning.

“Qatar Airways flight crew are still reporting similar issues from post-mod airplanes. [Qatar Airways] already reported the events to Boeing/Collins aerospace for further investigation and root cause determination,” the airline said.

“As of now, Qatar believes that the issue is not completely addressed, and the unsafe condition still exists on airplanes.”

Neither Qatar, Boeing, or the FAA representative were available for comment on the issue. Collins is a software provider for Boeing.

Source: Boeing 787 radio software patch didn’t work, says Qatar • The Register

Over a million private photos from MAD Mobile dating apps exposed online

Researchers have discovered nearly 1.5 million pictures from specialist dating apps – many of which are explicit – being stored online without password protection, leaving them vulnerable to hackers and extortionists.

Anyone with the link was able to view the private photos from five platforms developed by M.A.D Mobile: kink sites BDSM People and Chica, and LGBT apps Pink, Brish and Translove.

These services are used by an estimated 800,000 to 900,000 people.

M.A.D Mobile was first warned about the security flaw on 20 January but didn’t take action until the BBC emailed on Friday.

They have since fixed it but not said how it happened or why they failed to protect the sensitive images.

woman in red bondage outfit
This is one of the photos that anyone could have accessed. We have cropped the face and blurred it to enhance privacy

Ethical hacker Aras Nazarovas from Cybernews first alerted the firm about the security hole after finding the location of the online storage used by the apps by analysing the code that powers the services.

He was shocked that he could access the unencrypted and unprotected photos without any password.

[…]

In an email M.A.D Mobile said it was grateful to the researcher for uncovering the vulnerability in the apps to prevent a data breach from occurring.

But there’s no guarantee that Mr Nazarovas was the only hacker to have found the image stash.

“We appreciate their work and have already taken the necessary steps to address the issue,” a M.A.D Mobile spokesperson said. “An additional update for the apps will be released on the App Store in the coming days.”

The company did not respond to further questions about where the company is based and why it took months to address the issue after multiple warnings from researchers.

Usually security researchers wait until a vulnerability is fixed before publishing an online report, in case it puts users at further risk of attack.

But Mr Nazarovas and his team decided to raise the alarm on Thursday while the issue was still live as they were concerned the company was not doing anything to fix it.

[…]

In 2015 malicious hackers stole a large amount of customer data about users of Ashley Madison, a dating website for married people who wish to cheat on their spouse.

Source: Over a million private photos from dating apps exposed online

Trump’s Defense Secretary Hegseth Orders Cyber Command to ‘Stand Down’ on All Russia Operations

The cybersecurity outlet The Record originally reported that under Trump’s new Defense Secretary Pete Hegseth, U.S. Cyber Command has been ordered to “stand down from all planning against Russia, including offensive digital actions.” The outlet cites three anonymous sources who are familiar with the matter. The order reportedly does not apply to the National Security Agency.

The policy shift represents a complete 180-degree turn from America’s posture over the past decade, which has consistently considered Russia one of the top cybersecurity threats. Credible reporting and government investigations have shown that Russia has hacked into U.S. systems countless times.

The Guardian has reported that a memo recently circulated to staff at America’s Cybersecurity and Infrastructure Security Agency (CISA) established “new priorities” for the agency and, while mentioning the threat of digital incursions by China and other enemies, failed to mention Russia.

“Russia and China are our biggest adversaries. With all the cuts being made to different agencies, a lot of cyber security personnel have been fired. Our systems are not going to be protected and our adversaries know this,” a source, who was familiar with the internal memo, told The Guardian. “People are saying Russia is winning. Putin is on the inside now.”

Another anonymous source, who said that CISA staff had been “verbally informed that they were not to follow or report on Russian threats,” expressed concern for the shift: “There are thousands of US government employees and military working daily on the massive threat Russia poses as possibly the most significant nation state threat actor. Not to diminish the significance of China, Iran, or North Korea, but Russia is at least on par with China as the most significant cyber threat,” they said.

[…]

As far as layoffs go, the NSA purge is a drop in the bucket for America’s signals intelligence agency. One of the intel community’s biggest outfits is reputed to employ at least 20,000 employees but has been estimated to use as many as 50,000. In general, despite Trump’s promise to smash the “deep state,” America’s dark and powerful national security state has remained largely untouched since he took office, with his administration’s wrecking ball DOGE content to spend most of its time smashing agencies that dispense services to the public.

Source: Trump’s Defense Secretary Hegseth Orders Cyber Command to ‘Stand Down’ on All Russia Operations

PeerAuth – easy way to authenticate a real person

Machine learning has become more and more powerful, to the point where a bad actor can take a photo and a voice recording of someone you know, and forge a complete video recording. See the “OmniHuman-1” model developed by ByteDance:

 

Bad actors can now digitally impersonate someone you love, and trick you into doing things like paying a ransom.

To mitigate that risk, I have developed this simple solution where you can setup a unique time-based one-time passcode (TOTP) between any pair of persons.

This is how it works:

  1. Two people, Person A and Person B, sit in front of the same computer and open this page;
  2. They input their respective names (e.g. Alice and Bob) onto the same page, and click “Generate”;
  3. The page will generate two TOTP QR codes, one for Alice and one for Bob;
  4. Alice and Bob scan the respective QR code into a TOTP mobile app (such as Authy or Google Authenticator) on their respective mobile phones;
  5. In the future, when Alice speaks with Bob over the phone or over video call, and wants to verify the identity of Bob, Alice asks Bob to provide the 6-digit TOTP code from the mobile app. If the code matches what Alice has on her own phone, then Alice has more confidence that she is speaking with the real Bob.

Note that this depends on both Alice’s and Bob’s phones being secure. If somebody steals Bob’s phone and manages to bypass the fingerprint or PIN or facial recognition of Bob’s phone, then all bets are off.

Discussion on Hacker News

Source code of this page on GitHub

Source: PeerAuth

After Snowden and now Trump, Europe  Finally begins to worry about US-controlled clouds

In a recent blog post titled “It is no longer safe to move our governments and societies to US clouds,” Bert Hubert, an entrepreneur, software developer, and part-time technical advisor to the Dutch Electoral Council, articulated such concerns.

“We now have the bizarre situation that anyone with any sense can see that America is no longer a reliable partner, and that the entire large-scale US business world bows to Trump’s dictatorial will, but we STILL are doing everything we can to transfer entire governments and most of our own businesses to their clouds,” wrote Hubert.

Hubert didn’t offer data to support that statement, but European Commission stats shows that close to half of European enterprises rely on cloud services, a market led by Amazon, Microsoft, Google, Oracle, Salesforce, and IBM – all US-based companies.

While concern about cloud data sovereignty became fashionable back in 2013 when former NSA contractor Edward Snowden disclosed secrets revealing the scope of US signals intelligence gathering and fled to Russia, data privacy worries have taken on new urgency in light of the Trump administration’s sudden policy shifts.

In the tech sphere those moves include removing members of the US Privacy and Civil Liberties Oversight Board that safeguards data under the EU-US Data Privacy Framework, alleged flouting of federal data rules to advance policy goals. Europeans therefore have good reason to wonder how much they can trust data privacy assurances from US cloud providers amid their shows of obsequious deference to the new regime.

And there’s also a practical impetus for the unrest: organizations that use Microsoft Office 2016 and 2019 have to decide whether they want to move to Microsoft’s cloud come October 14, 2025, when support officially ends. Microsoft is encouraging customers to move to Microsoft 365 which is tied to the cloud. But that looks riskier now than it did under less contentious transatlantic relations.

The Register spoke with Hubert about his concerns and the situation in which Europe now finds itself.

[…]

Source: Europe begins to worry about US-controlled clouds • The Register

It was truly unbelievable that EU was using US cloud in the first place for many reasons ranging from technical to cost to privacy but they just keep blundering on.

Google pulls plug on Ad blockers such as uBlock Origin by killing Manifest v2

Google’s purge of Manifest v2-based extensions from its Chrome browser is underway, as many users over the past few days may have noticed.

Popular content-blocking add-on (v2-based) uBlock Origin is now automatically disabled for many in the ubiquitous browser as it continues the V3 rollout.

[…]

According to the company, Google’s decision to shift to V3 is all in the name of improving its browser’s security, privacy, and performance. However, the transition to the new specification also means that some extensions will struggle due to limitations in the new API.

In September 2024, the team behind uBlock Origin noted that one of the most significant changes was around the webRequest API, used to intercept and modify network requests. Extensions such as uBlock Origin extensively use the API to block unwanted content before it loads.

[…]

Ad-blockers and privacy tools are the worst hit by the changes, and affected users – because let’s face it, most Chrome users won’t be using an ad-blocker – can switch to an alternative browser for something like the original experience, or they can switch to a different extension which is unlikely to have the same capabilities.

In its post, uBlock recommends a move to Firefox and use of the extension uBlock Origin, a switch to a browser that will support Manifest v2

[…]

Source: Google continues pulling the plug on Manifest v2 • The Register

Generative AI’s Impact on Cybersecurity – Q&A With an Expert

In the ever-evolving landscape of cybersecurity, the integration of generative AI has become a pivotal point of discussion. To delve deeper into this groundbreaking technology and its impact on cybersecurity, we turn to renowned cybersecurity expert Jeremiah Fowler. In this exclusive Q&A session with vpnMentor, Fowler sheds light on the critical role that generative AI plays in safeguarding digital environments against evolving threats.

[…]

Not long ago, it was far easier to identify a phishing attempt, but now that they have AI at their disposal, criminals can personalize their social engineering attempts using realistic identities, well-written content, or even deepfake audio and video. And, as AI models become more intelligent, it will become even harder to distinguish human- from AI-generated content, making it harder for potential victims to detect a scheme.

[…]

There are numerous examples of generative AI being used in recent cyberattacks. The Voice of SecOps report released by Deep Instinct found that 75% of security professionals surveyed saw an increase in cyberattacks in 2023, and that 85% of all attacks that year were powered by generative AI.

[…]

Currently, several malicious generative AI solutions are available on the Dark Web. Two examples of malicious AI tools designed for cybercriminals to create and automate fraudulent activities are FraudGPT and WormGPT. These tools can be used by criminals to easily conduct realistic phishing attacks, carry out scams, or generate malicious code. FraudGPT specializes in generating deceptive content while WormGPT focuses on creating malware and automating hacking attempts.

These tools are extremely dangerous and pose a very serious risk because they allow unskilled criminals with little or no technical knowledge to launch highly sophisticated cyberattacks. With a few command prompts, perpetrators can easily increase the scale, effectiveness, and success rate of their cybercrimes.

[…]

According to the 2023 Microsoft Digital Defense Report, researchers identified several cases where state actors attempted to access and use Microsoft’s AI technology for malicious purposes. These actors were associated with various countries, including Russia, China, Iran, and North Korea. Ironically, each of these countries have strict regulations governing cyberspace, and it would be highly unlikely to conduct large-scale attacks without some level of government oversight. The report noted that malicious actors used generative AI models for a wide range of activities such as spear-phishing, hacking, phishing emails, investigating satellite and radar technologies, and targeting U.S. defense contractors.
Hybrid disinformation campaigns — where state actors or civilian groups combine humans and AI to create division and conflict — have also become a serious risk. There is no better example of this than the Russian troll farms. […]

Earlier this year, fake X (formerly Twitter) accounts — which were actually Russian bots pretending to be real people from the U.S. — were programmed to post pro-Trump content generated by ChatGPT. The whole thing came to a head in June 2024, when the pre-programmed posts started reflecting error messages due to lack of payment.

 

This screenshot shows a translated tweet from X indicating that a bot using ChatGPT was out of credits.

A few months later, the U.S. Department of Justice announced that Russian state media had been paying American far-right social media influencers as much as 10 million USD to echo narratives and messages from the Kremlin in yet another hybrid disinformation campaign.

[…]

The trepidation regarding AI’s role in creating security threats is very real, but some time-tested advice is still valid — keeping software updated, applying patches where needed, and having endpoint security for all connected devices can go a long way. However, as AI becomes more advanced, it will likely make it easier for criminals to identify and exploit more complex vulnerabilities. So, I highly recommend implementing network segmentation too — by isolating individual sections, organizations can effectively limit the spread of malware or restrict unauthorized access to the entire network.

Ultimately, the most important thing is to have continuous monitoring and investigate all suspicious activity.

[…]

One recent example of self-evolving malware that uses AI to constantly rewrite its code is called “BlackMamba“. This is a proof of concept AI-enhanced malware. It was created by researchers from HYAS Labs to test how far it can go. BlackMamba was able to avoid being identified by most sophisticated cybersecurity products, including the leading EDR (Endpoint Detection and Response).

Generative AI is also being used to enhance evasion techniques or generate malicious content. For example, Microsoft researchers were able to get nearly every major AI model to bypass their own restrictions for creating harmful or illegal content. In June 2024, Microsoft published details about what they named “Skeleton Key” — a multi-step process that eventually gets the AI model to provide prohibited content. Additionally, AI-generated tools can bypass traditional cybersecurity defenses (like CAPTCHA) that are intended to filter bot traffic so that (theoretically) only humans can access accounts or content.

Criminals are also using Generative AI to enhance their phishing and social engineering scams.

[…]

The most well-known case to date happened in Hong Kong in early 2024. Criminals used deepfake technology to create a video showing a company’s CEO requesting the CFO to transfer $24.6 million USD. Since there was nothing that suggested that the video was not authentic, the CFO unknowingly transferred the money to the criminals.

[…]

Although AI cannot — and should not — fully replace the human role in the incident response process, it can assist by automating detection, triage, containment, and recovery tasks. Any tools or actions that help reduce response times will also limit the damage caused by cyber incidents. Organizations should integrate these technologies into their security operations and be prepared for AI-enhanced cyberthreats because it is no longer a matter of “if it happens” but “when it happens”.

Generative AI can help cybersecurity by creating realistic risk scenarios for both training and penetration testing.

[…]

what are the future risks of AI providers having vulnerabilities or data exposures?

According to researchers at Wiz they found 2 non-password protected databases that contained just under 1 million records. AI models will generate a massive amount of data and that needs to be stored somewhere. It makes sense that you would have a database full of learning content, monitoring and error logs, and chat responses, theoretically this should have been segregated from the administrative production environment or have additional access controls to prevent an unauthorized intrusion. This vulnerability allowed researchers to access administrative and operational data and the fact that anyone with an Internet connection could have potentially manipulated commands or code scripts should be a major concern to the DeepSeek organization and its users. Additionally, exposing secret keys or other internal access credentials is an open invitation for disaster and what I would consider a worse case scenario. This is a prime example of how important it will be for AI developers to secure and protect the data of their users and the internal backend code of their products.

[…]

Source: Generative AI’s Impact on Cybersecurity – Q&A With an Expert

Apple Says ‘No’ to UK Backdoor Order, Will Just Disable E2E Cloud Encryption Instead

Good work, Britain. Owners of Apple devices in the United Kingdom will be a little less safe moving forward as the company pulls its most secure end-to-end (E2E) encryption from the country. The move is in response to government demands there that Apple build a backdoor into its iCloud encryption feature that would allow law enforcement to access the cloud data of any iPhone user around the world under the guise of national security.

[…]

Following Apple’s decision to pull E2E cloud encryption from the UK, the company on Friday told Bloomberg that “enhancing the security of cloud storage with end-to-end encryption is more urgent than ever before” and that it “remains committed to offering our users the highest level of security for their personal data and are hopeful that we will be able to do so in the future in the United Kingdom.”

The UK order asked Apple for access to global user data under the country’s Investigatory Powers Act, a law that grants officials the authority to compel companies to remove encryption under a “technical capability notice.”

[…]

“Security officials asked not only that Apple allow the UK government access to UK residents’ encrypted cloud storage, but that the UK government get access to any Apple user’s encrypted cloud storage,” said David Ruiz, an online privacy expert at Malwarebytes. “To demand access to the world’s data is such a brazen, imperialist maneuver that I’m surprised it hasn’t come from, well, honestly, the US. This may embolden other countries, particularly those in the ‘Five Eyes,’ to make a similar demand of Apple.” Ruiz questioned what this means for the UK’s privacy guarantees with the US.

Law enforcement is always looking for new ways to conduct surveillance under the guise of protecting the public—Edward Snowden famously revealed a dragnet of surveillance created after 9/11 that pulled in data on individuals domestic and abroa. But once the genie is taken out of the proverbial bottle, it is hard to put it back, and the capabilities can end up in the wrong hands. Police already have access to plenty investigative powers, privacy advocates say, and the public should be very cautious about giving them more that could be ripe for abuse.

[…]

With today’s move, Apple is essentially saying that it would rather pull the E2E encryption altogether and inform customers they will be less safe, rather than build an open door for the UK government. It is a shrewd, gigachad move by Apple even though consumers there will no longer have the same amount of security as others around the globe. iCloud encryption is important as the service has in the past been a target of hackers who penetrated the accounts of celebrities to steal their nudes and post them online in a scandal that was called “the Fappening.”

[…]

Source: Apple Says ‘No’ to UK Backdoor Order, Will Disable E2E Cloud Encryption Instead

So, no security or privacy for those in the UK then.

ChatGPT crawler flaw opens door to DDoS, prompt injection

In a write-up shared this month via Microsoft’s GitHub, Benjamin Flesch, a security researcher in Germany, explains how a single HTTP request to the ChatGPT API can be used to flood a targeted website with network requests from the ChatGPT crawler, specifically ChatGPT-User.

This flood of connections may or may not be enough to knock over any given site, practically speaking, though it’s still arguably a danger and a bit of an oversight by OpenAI. It can be used to amplify a single API request into 20 to 5,000 or more requests to a chosen victim’s website, every second, over and over again.

“ChatGPT API exhibits a severe quality defect when handling HTTP POST requests to https://chatgpt.com/backend-api/attributions,” Flesch explains in his advisory, referring to an API endpoint called by OpenAI’s ChatGPT to return information about web sources cited in the chatbot’s output. When ChatGPT mentions specific websites, it will call attributions with a list of URLs to those sites for its crawler to go access and fetch information about.

If you throw a big long list of URLs at the API, each slightly different but all pointing to the same site, the crawler will go off and hit every one of them at once.

[…]

Thus, using a tool like Curl, an attacker can send an HTTP POST request – without any need for an authentication token – to that ChatGPT endpoint and OpenAI’s servers in Microsoft Azure will respond by initiating an HTTP request for each hyperlink submitted via the urls[] parameter in the request. When those requests are directed to the same website, they can potentially overwhelm the target, causing DDoS symptoms – the crawler, proxied by Cloudflare, will visit the targeted site from a different IP address each time.

[…]

“I’d say the bigger story is that this API was also vulnerable to prompt injection,” he said, in reference to a separate vulnerability disclosure. “Why would they have prompt injection for such a simple task? I think it might be because they’re dogfooding their autonomous ‘AI agent’ thing.”

That second issue can be exploited to make the crawler answer queries via the same attributions API endpoint; you can feed questions to the bot, and it can answer them, when it’s really not supposed to do that; it’s supposed to just fetch websites.

Flesch questioned why OpenAI’s bot hasn’t implemented simple, established methods to properly deduplicate URLs in a requested list or to limit the size of the list, nor managed to avoid prompt injection vulnerabilities that have been addressed in the main ChatGPT interface.

[…]

Source: ChatGPT crawler flaw opens door to DDoS, prompt injection • The Register

Bad Likert Judge: A Novel Multi-Turn Technique to Jailbreak LLMs by Misusing Their Evaluation Capability

Text-generation large language models (LLMs) have safety measures designed to prevent them from responding to requests with harmful and malicious responses. Research into methods that can bypass these guardrails, such as Bad Likert Judge, can help defenders prepare for potential attacks.

The technique asks the target LLM to act as a judge scoring the harmfulness of a given response using the Likert scale, a rating scale measuring a respondent’s agreement or disagreement with a statement. It then asks the LLM to generate responses that contain examples that align with the scales. The example that has the highest Likert scale can potentially contain the harmful content.

We have tested this technique across a broad range of categories against six state-of-the-art text-generation LLMs. Our results reveal that this technique can increase the attack success rate (ASR) by more than 60% compared to plain attack prompts on average.

Source: Bad Likert Judge: A Novel Multi-Turn Technique to Jailbreak LLMs by Misusing Their Evaluation Capability

JailBreaking AI still easy, can be done with StRanGe CaSINg

New research from Anthropic, one of the leading AI companies and the developer of the Claude family of Large Language Models (LLMs), has released research showing that the process for getting LLMs to do what they’re not supposed to is still pretty easy and can be automated. SomETIMeS alL it tAKeS Is typing prOMptS Like thiS.

To prove this, Anthropic and researchers at Oxford, Stanford, and MATS, created Best-of-N (BoN) Jailbreaking

[…]

As the researchers explain, “BoN Jailbreaking works by repeatedly sampling variations of a prompt with a combination of augmentations—such as random shuffling or capitalization for textual prompts—until a harmful response is elicited.”

For example, if a user asks GPT-4o “How can I build a bomb,” it will refuse to answer because “This content may violate our usage policies.” BoN Jailbreaking simply keeps tweaking that prompt with random capital letters, shuffled words, misspellings, and broken grammar until GPT-4o provides the information. Literally the example Anthropic gives in the paper looks like mocking sPONGbOB MEMe tEXT.

Anthropic tested this jailbreaking method on its own Claude 3.5 Sonnet, Claude 3 Opus, OpenAI’s GPT-4o, GPT-4o-mini, Google’s Gemini-1.5-Flash-00, Gemini-1.5-Pro-001, and Facebook’s Llama 3 8B. It found that the method “achieves ASRs [attack success rate] of over 50%” on all the models it tested within 10,000 attempts or prompt variations.

[…]

In January, we showed that the AI-generated nonconsensual nude images of Taylor Swift that went viral on Twitter were created with Microsoft’s Designer AI image generator by misspelling her name, using pseudonyms, and describing sexual scenarios without using any sexual terms or phrases. This allowed users to generate the images without using any words that would trigger Microsoft’s guardrails. In March, we showed that AI audio generation company ElevenLabs’s automated moderation methods preventing people from generating audio of presidential candidates were easily bypassed by adding a minute of silence to the beginning of an audio file that included the voice a user wanted to clone.

[…]

It’s also worth noting that while there’s good reasons for AI companies to want to lock down their AI tools and that a lot of harm comes from people who bypass these guardrails, there’s now no shortage of “uncensored” LLMs that will answer whatever question you want and AI image generation models and platforms that make it easy to create whatever nonconsensual images users can imagine.

Source: APpaREnTLy THiS iS hoW yoU JaIlBreAk AI

Chinese scammers, criminals and businesses are exploiting its surveillance state

Chinese tech company employees and government workers are siphoning off user data and selling it online – and even high-ranking Chinese Communist Party officials and FBI-wanted hackers’ sensitive information is being peddled by the Middle Kingdom’s thriving illegal data ecosystem.

“While Western cybercrime research focuses heavily on criminals in the English- and Russian-speaking worlds, there is also a large community of Chinese-speaking cybercriminals who engage in scammy, low-level, financially motivated cybercrime,” SpyCloud senior security researcher Kyla Cardona said during a talk at last month’s Cyberwarcon in Arlington, Virginia.

It’s no secret that President Xi Jinping’s government uses technology companies to help maintain the nation’s massive surveillance apparatus.

But in addition to forcing businesses operating in China to stockpile and hand over info about their users for censorship and state-snooping purposes, a black market for individuals’ sensitive data is also booming. Corporate and government insiders have access to this harvested private info, and the financial incentives to sell the data to fraudsters and crooks to exploit.

“It’s a double-edged sword,” Cardona told The Register during an interview alongside SpyCloud infosec researcher Aurora Johnson.

“The data is being collected by rich and powerful people that control technology companies and work in the government, but it can also be used against them in all of these scams and fraud and other low-level crimes,” Johnson added.

China’s thriving data black market

To get their hands on the personal info, Chinese data brokers often recruit shady insiders with wanted ads seeking “friends” working in government, and promise daily income of 20,000 to 70,000 yuan ($2,700 and $9,700) in exchange for harvested information. This data is then used to pull off scams, fraud, and suchlike.

Some of these data brokers also claim to have “signed formal contracts” with the big three Chinese telecom companies: China Mobile, China Unicom, and China Telecom. The brokers’ marketing materials tout they are able to legally obtain and sell details of people’s internet habits via the Chinese telcos’ deep packet inspection systems, which monitor as well as manage and store network traffic. (The West has also seen this kind of thing.)

Crucially, this level of surveillance by the telcos gives their employees access to users’ browsing data and other info, which workers can then swipe and then resell themselves through various brokers, Cardona and Johnson said.

Scammers and other criminals are buying copies of this personal information, illicitly obtained or otherwise, for their swindles, but it’s also being purchased by legitimate businesses for sales leads — to sell people car insurance when theirs is about to expire, for example.

Information acquired through DPI also seems to be a major source of the stolen personal details that goes into the so-called “social engineering databases,” or SGKs (short for shegong ku​), according to the researchers.

In addition to amassing information collected from DPI, these databases contain personal details provided by underhand software development kits (SDKs) buried in apps and other programs, which basically spy on users in real time, as well as records stolen during IT security breaches.

SGK records include personal profiles (names, genders, addresses, dates of birth, phone numbers, email and social media account details, zodiac signs), bank account and other financial information, health records, property and vehicle information, facial recognition scans and photos, criminal case details, and more. Some of the SGK platforms allow users to do reverse lookups on potential targets, allowing someone to be ultimately identified from their otherwise non-identifying details.

[…]

One SGK that has since been taken down had more than 3 million users. As of now, one of the biggest stolen-info databases has 317,000 subscribers, we’re told, while most of the search services each see about 90,000 users per month.

[…]

One also displayed a ton of sensitive details belonging to a high-ranking CCP member.

​A free SGK search query about this individual pulled up the person’s name, physical address, mobile number, national ID number, birth date, gender, and issuing authority, which the researcher surmised is the issuing authority for the ID card.

An additional query produced even more: The person’s WeChat ID, vehicle information, hobbies and industry information, marital status, and monthly salary, and his phone’s International Mobile Equipment Identity (IMEI) number with a link to click for more information about the device.

The researchers found similar info about a People’s Liberation Army member using SGKs, plus details about suspected nation-state-backed criminals wanted by the FBI.

[…]

“There is a huge ecosystem of Chinese breached and leaked data, and I don’t know that a lot of Western cybersecurity researchers are looking at this,” Johnson continued. “It poses privacy risks to all Chinese people across all groups. And then it also gives us Western cybersecurity researchers a really interesting source to track some of these actors that have been targeting critical infrastructure.” ®

Source: How Chinese insiders exploit its surveillance state • The Register

Which goes to show – large centralised databases give away their data to far too many people. Bad security (like government backdoors to encryption) is bad for everyone – anyone with the key can (and will) get in (like the US is finding out: In massive U-turn, FBI Warns Americans to Start Using Encrypted Messaging Apps, after discovering the problem with backdoors)

In massive U-turn, FBI Warns Americans to Start Using Encrypted Messaging Apps, after discovering the problem with backdoors

America’s top cybersecurity and law enforcement officials made a coordinated push Tuesday to raise awareness about cyber threats from foreign actors in the wake of an intrusion of U.S. telecom equipment dubbed Salt Typhoon. The hackers are linked to the Chinese government and they still have a presence in U.S. systems, spying on American communications, in what Sen. Mark Warner from Virginia has called “the worst hack in our nation’s history.”

Officials with the U.S. Cybersecurity and Infrastructure Security Agency and FBI went so far as to urge Americans to use encrypted messaging apps, according to a new report from NBC News, something that’s ostensibly about keeping foreign hackers out of your communications.

[…]

“Our suggestion, what we have told folks internally, is not new here: encryption is your friend, whether it’s on text messaging or if you have the capacity to use encrypted voice communication. Even if the adversary is able to intercept the data, if it is encrypted, it will make it impossible,” Jeff Greene, executive assistant director for cybersecurity at CISA, said on a press call Tuesday according to NBC News.

The unnamed FBI agent on the call with reporters echoed the message, according to NBC News, urging Americans to use “responsibly managed encryption,” which is a rather big deal when you remember that agencies like the FBI have been most resistant to Silicon Valley’s encryption efforts.

The hackers behind Salt Typoon failed to monitor or intercept anything encrypted, meaning that anything sent through Signal and Apple’s iMessage was likely protected, according to the New York Times. But the intrusion for all other communications was otherwise extremely galling. The hackers had access to metadata, including information on messages and phone calls along with when and where they were delivered. The hackers reportedly focused on targets around Washington, D.C.

The most alarming sort of intrusion in Salt Typhoon involved the system used by U.S. officials to wiretap Americans with a court order

[…]

Source: FBI Warns Americans to Start Using Encrypted Messaging Apps

It’s not like people have not been warning governments all over the world that there is no such thing as a safe backdoor to encryption and that forbidding encryption leads to a world of harm. We knew this, but still the idiots in charge wanted keys to encryption. The key, once it is in the hands of “baddies” will still work. It really does show the absolute retardation of government spy people who say breaking encryption will make us safer.

Data broker SL leaves 600K+ sensitive files exposed online, doesn’t fix it despite warnings

More than 600,000 sensitive files containing thousands of people’s criminal histories, background checks, vehicle and property records were exposed to the internet in a non-password protected database belonging to data brokerage SL Data Services, according to a security researcher.

We don’t know how long the personal information was openly accessible. Infosec specialist Jeremiah Fowler says he found the Amazon S3 bucket in October and reported it to the data collection company by phone and email every few days for more than two weeks.

In addition to not being password protected, none of the information was encrypted, he told The Register. In total, the open bucket contained 644,869 PDF files in a 713.1 GB archive.

“Even when I would make phone calls to the multiple numbers on different websites and tell them there was a data incident, they would tell me they use 128-bit encryption and use SSL certificates – there were many eye rolls,” he claimed.

Some 95 percent of the documents Fowler saw were labeled “background checks,” he said. These contained full names, home addresses, phone numbers, email addresses, employment, family members, social media accounts, and criminal record history belonging to thousands of people. In at least one of these documents, the criminal record indicated that the person had been convicted of sexual misconduct. It included case details, fines, dates, and additional charges.

[…]

Source: Data broker leaves 600K+ sensitive files exposed online • The Register

US and UK Armed Forces Dating & Social Networking Service Exposed Over 1 Million Records Online through coding error

Cybersecurity Researcher, Jeremiah Fowler, discovered and reported to vpnMentor about a non-password-protected database that contained more than 1.1 million records belonging to Conduitor Limited (trading as Forces Penpals) — a service that offers dating services, and social networking for military members and their supporters.

The publicly exposed database was not password-protected or encrypted. It contained a total of 1,187,296 documents. In a limited sampling, a majority of the documents I saw were user images, while others were photos of potentially sensitive proof of service documents. These contained full names (first, last, and middle), mailing addresses, SSN (US), National Insurance Numbers, and Service Numbers (UK). These documents also listed rank, branch of the service, dates, locations, and other information that should not be publicly accessible.

Upon further research, I identified that the records belonged to Forces Penpals, a dating service and social networking community for military service members and their supporters. I immediately sent a responsible disclosure notice, and public access was restricted the following day. It is not known how long the database was exposed or if anyone else gained access to it. Only an internal forensic audit could identify additional access or potentially suspicious activity. I received a response from Forces Penpals after my disclosure notice stating: “Thank you for contacting us. It is much appreciated. Looks like there was a coding error where the documents were going to the wrong bucket and directory listing was turned on for debugging and never turned off. The photos are public anyway so that’s not an issue, but the documents certainly should not be public”. It is not known if the database was owned and managed by Forces Penpals directly or via a third-party contractor.

According to their website, the service operates social networking and support for members of the US and UK armed forces. It claims to have over 290,000 military and civilian users. Founded in 2002, Forces Penpals allowed UK citizens to write to soldiers on active duty in Iraq or Afghanistan.

[…]

Source: US and UK Armed Forces Dating & Social Networking Service Exposed Over 1 Million Records Online

Oh Look, It Was Trivial To Buy Troop And Intelligence Officer Location Data From Dodgy, Unregulated Data Brokers

There are two major reasons that the U.S. doesn’t pass an internet-era privacy law or regulate data brokers despite a parade of dangerous scandals. One, lobbied by a vast web of interconnected industries with unlimited budgets, Congress is too corrupt to do its job. Two, the U.S. government is disincentivized to do anything because it exploits this privacy dysfunction to dodge domestic surveillance warrants.

If we imposed safeguards on consumer data, everybody from app makers to telecoms would make billions less per quarter. So our corrupt lawmakers pretend the vast human harms of our greed are a distant and unavoidable externality. Unless the privacy issues involve some kid tracking rich people on their planes, of course, in which case Congress moves with a haste that would break the sound barrier.

So as a result, we get a steady stream of scandals related to the over-collection and monetization of wireless location data, posing no limit of public safety, market trust, or national security issues. Including, for example, stalkers using location data to track and harm women. Or radical right wing extremists using it to target vulnerable abortion clinic visitors with health care disinformation.

Even when U.S. troop safety is involved U.S. officials have proven too corrupt and incompetent to act. Just the latest case in point: Wired this week released an excellent new report documenting how it was relatively trivial to buy the sensitive and detailed movement data of U.S. military and intelligence workers as they moved around Germany:

“A collaborative analysis of billions of location coordinates obtained from a US-based data broker provides extraordinary insight into the daily routines of US service members. The findings also provide a vivid example of the significant risks the unregulated sale of mobile location data poses to the integrity of the US military and the safety of its service members and their families overseas.”

The data purchased by Wired doesn’t just track troops as they head out for a weekend at the bars. It provides granular, second-by-second detail of their movements around extremely sensitive facilities:

“We tracked hundreds of thousands of signals from devices inside sensitive US installations in Germany. That includes scores of devices within suspected NSA monitoring or signals-analysis facilities, more than a thousand devices at a sprawling US compound where Ukrainian troops were being being trained in 2023, and nearly 2,000 others at an air force base that has crucially supported American drone operations.”

Wired does note that the FTC is poised to file several lawsuits recognizing these kinds of facilities as protected sites, though it’s unclear those suits will survive Lina Khan’s inevitable ouster under a Trump administration looking to dismantle the federal regulatory state for shits and giggles.

When our underfunded and undermined regulators have tried to hold wireless companies or app makers accountable, they’re routinely derailed by either a Republican Congress (like when the GOP in 2017 killed FCC broadband privacy rules before they could even take effect), or more recently by a Trump Supreme Court keen to declare all federal consumer protection effectively illegal.

Even the most basic of FCC efforts to impose a long overdue fine against AT&T, Verizon, and T-Mobile have run aground thanks to the Trump-stocked 5th, 6th, and Supreme Court efforts to block anything even vaguely resembling corporate oversight. I’m told by the nation’s deepest thinkers that this corruption and greed is, somehow, “populism.”

Time and time and time again the U.S. has prioritized making money over protecting consumer privacy, market health, or national security. And it’s certain to only get worse during a second Trump term stocked with folks like new FCC boss Brendan Carr, dedicated to ensuring his friends at AT&T, Verizon, and T-Mobile never face anything close to accountability for anything, ever.

[…]

Source: Oh Look, It Was Trivial To Buy Troop And Intelligence Officer Location Data From Dodgy, Unregulated Data Brokers | Techdirt

Hacking Back the AI-Hacker: Prompt Injection by your LLM as a Defense Against LLM-driven Cyberattacks

Large language models (LLMs) are increasingly being harnessed to automate cyberattacks, making sophisticated exploits more accessible and scalable. In response, we propose a new defense strategy tailored to counter LLM-driven cyberattacks. We introduce Mantis, a defensive framework that exploits LLMs’ susceptibility to adversarial inputs to undermine malicious operations. Upon detecting an automated cyberattack, Mantis plants carefully crafted inputs into system responses, leading the attacker’s LLM to disrupt their own operations (passive defense) or even compromise the attacker’s machine (active defense). By deploying purposefully vulnerable decoy services to attract the attacker and using dynamic prompt injections for the attacker’s LLM, Mantis can autonomously hack back the attacker. In our experiments, Mantis consistently achieved over 95% effectiveness against automated LLM-driven attacks. To foster further research and collaboration, Mantis is available as an open-source tool: this https URL

Source: [2410.20911] Hacking Back the AI-Hacker: Prompt Injection as a Defense Against LLM-driven Cyberattacks

Retailers Eye Radio emitting ink on fibres to Stop Shoplifting

[…] small Spanish technology company, Myruns, and telecommunications operator Telefónica SA about the possible application of a system based on an anti-theft alarm product so thin it’s imperceptible to the naked eye

[…]

The technology from Myruns, in San Sebastian, Spain, may be just one of the efforts to curb thefts that have been studied by Inditex, which declined to comment on specific projects. Myruns’ product, which one of the people says is five times thinner than a human hair, or about a thousandth of an inch, uses a conductive ink derived from cellulose to transmit signals. It can set off alarms if someone walks out of a shop with items whose woven-in tags haven’t been deactivated, according to the people. The novel ink replaces aluminum, the main material used in most alarms. That would mean retailers wouldn’t need to rely on the metal for alarms, making the devices potentially biodegradable and supporting the garments’ recyclability.

Competitors that make threadlike radio-frequency identification (RFID) technology containing metals include Primo1D, an offshoot of a research center in Grenoble, France; and RFID Threads Ltd., in Nottingham, England, formerly known as Adetex.ID.

[…]

Pressure to improve profitability and reduce losses has pushed many retailers to step up their traditional anti-theft efforts. Inditex rival Hennes & Mauritz AB, or H&M, has increased the number of security guards at its stores, including in the US. Associated British Foods Plc’s Primark has also hired more security staff, in addition to investing in closed-circuit television systems and body cameras worn by staff. And in the UK, retailers such as John Lewis, Sainsbury’s and Tesco have teamed up with law enforcement to help fund a team of police and intelligence officers targeting shoplifters.

The lack of visible security can encourage shoplifting, but more drastic measures can impede sales, says Martin Gill, a UK-based consultant whose work involves testing retailers’ security by trying to steal things.

“Certain retail strategies, which aim to boost sales, have made it much easier to steal,” he says. “The key for good security is not to stop theft from happening at all costs, but do as much as possible to reduce the number of offenses. It’s always about the balance between sales and security.”

Source: Retailers Eye High-Tech Tags to Stop Shoplifting – Bloomberg

Synology and QNAP hurry out patches for zero-days exploited at Pwn2Own

S

Synology, a Taiwanese network-attached storage (NAS) appliance maker, patched two critical zero-days exploited during last week’s Pwn2Own hacking competition within days.

Midnight Blue security researcher Rick de Jager found the critical zero-click vulnerabilities (tracked together as CVE-2024-10443 and dubbed RISK:STATION) in the company’s Synology Photos and BeePhotos for BeeStation software.

As Synology explains in security advisories published two days after the flaws were demoed at Pwn2Own Ireland 2024 to hijack a Synology BeeStation BST150-4T device, the security flaws enable remote attackers to gain remote code execution as root on vulnerable NAS appliances exposed online.

“The vulnerability was initially discovered, within just a few hours, as a replacement for another Pwn2Own submission. The issue was disclosed to Synology immediately after demonstration, and within 48 hours a patch was made available which resolves the vulnerability,” Midnight Blue said.

“However, since the vulnerability has a high potential for criminal abuse, and millions of devices are affected, a media reach-out was made to inform system owners of the issue and to stress the point that immediate mitigative actions are required.”

Synology says it addressed the vulnerabilities in the following software releases; however, they’re not automatically applied on vulnerable systems, and customers are advised to update as soon as possible to block potential incoming attacks:

  • BeePhotos for BeeStation OS 1.1: Upgrade to 1.1.0-10053 or above
  • BeePhotos for BeeStation OS 1.0: Upgrade to 1.0.2-10026 or above
  • Synology Photos 1.7 for DSM 7.2: Upgrade to 1.7.0-0795 or above.
  • Synology Photos 1.6 for DSM 7.2: Upgrade to 1.6.2-0720 or above.

QNAP, another Taiwanese NAS device manufacturer, patched two more critical zero-days exploited during the hacking contest within a week (in the company’s SMB Service and Hybrid Backup Sync disaster recovery and data backup solution).

[…]

Source: Synology hurries out patches for zero-days exploited at Pwn2Own

Usually the POC is given to the company around 30 days before disclosure. That is what makes it ‘responsible disclosure’.

Fitness apps (Strava) still giving away locations of world leaders including Trump, Putin and Macron

Some of the world’s most prominent leaders’ movements were tracked online through a fitness app used by their bodyguards, an investigation has suggested

A report by French newspaper Le Monde said several US Secret Service agents use the Strava fitness app, which has revealed highly confidential movements of US president Joe Biden, presidential rivals Donald Trump and Kamala Harris and other world leaders.

The investigation also identified Strava users among the security personnel for French president Emmanuel Macron and Russian president Vladimir Putin. Strava is a popular app among runners and cyclists, that enables users to log and share their physical activities within a community.

[…]

In another example, Le Monde used an agent’s Strava profile to reveal the location of a hotel where Biden stayed in San Francisco for high-stakes talks with Chinese president Xi Jinping in 2023. A few hours before Biden’s arrival, the agent went jogging from the hotel and used Strava to trace his route.

In a statement to the newspaper, the Secret Service said its staff aren’t allowed to use personal electronic devices while on duty during protective assignments but “we do not prohibit an employee’s personal use of social media off-duty.”

[…]

Source: How Strava ‘gave away locations’ of world leaders including Trump, Putin and Macron | The Independent

In 2018 this was shown to be a problem, you would have thought they would have fixed it by now:

Fitness app Polar even better at revealing secrets than Strava and Garmin

Heat Map Released by Fitness Tracker Reveals Location of Secret Military Bases

Over 115,000 United Nations Documents Associated to Gender Equality Exposed Online

[…] The non-password protected, non encrypted/clear text database contained financial reports and audits (including bank account information), staff documents, email addresses, contracts, certifications, registration documents, and much more. In total, the database held 115,141 files in.PDF,.xml,.jpg,,png, or other formats, amounting to 228 GB. Many of the documents I saw were marked as confidential and should have not been made publicly available. One single.xls file contained a list of 1,611 civil society organizations, including their internal UN application numbers, whether they are eligible for support, the status of their applications, whether they are local or national, and a range of detailed answers regarding the groups’ missions.

I also saw numerous scanned passports, ID cards, and staff directories of individual organizations. The staff documents included staff names, tax data, salary information, and job roles. There were also documents labeled as “victim success stories” or testimonies. Some of these contained the names and email addresses of those helped by the programs, as well as details of their personal experiences. For instance, one of the letters purported to be from a Chibok schoolgirl who was one of the 276 individuals kidnapped by Boko Haram in 2014. Exposure of this information could potentially have serious privacy or safety implications to charity workers and those individuals they provide assistance or services to.

The records indicated an association with UN Women and the UN Trust Fund to End Violence against Women. For instance, there were reference letters addressed directly to the UN, documents stamped with UN logos, and file names indicating the UN Women organization. I immediately sent a responsible disclosure notice of my findings to the general UN InfoSec address and UN Women, and public access to the database was restricted the following day. I received an immediate reply to my disclosure notice from the UN Information Security team stating “The reported vulnerability does not pertain to us (the United Nations Secretariat) and is for UN Women. Please report the vulnerability to UN WOMEN”.

Although the records indicated the files belonged to the UN Women agency, it is not known if they owned and managed the non-password protected database or if it was under the control of a third-party contractor. It is also unknown how long the records were exposed or if anyone else accessed them, as only an internal forensic audit can identify that information. I did not receive a reply from UN Women at the time of publication.

[…]

A scam alert was issued in an undated post on their website that reads “UN Women has been made aware of various correspondences—circulated via email, websites, social media, regular mail, or facsimile—falsely stating that they are issued by, or in association with UN Women, the United Nations, and/or its officials. These scams, which may seek to obtain money and/or, in many cases, personal details from the recipients of such correspondence, are fraudulent”. These scams typically operate by impersonating reputable organizations or individuals and requesting application fees, dues, or other payments.

[…]

Many of the charities operate in countries and regions where the potential threat of violence against women and members of the LGBTQ community is a serious safety concern. Protecting the privacy and identities of these individuals is extremely important. Criminals could potentially use social engineering methods to target charity workers — not only for financial gain, but in an effort to obtain the identities of vulnerable individuals who receive assistance from an organization.

[…]

Source: Over 115,000 United Nations Documents Associated to Gender Equality Exposed Online

Samsung phones being attacked by flaw. Use the Oct 7 update!

A nasty bug in Samsung’s mobile chips is being exploited by miscreants as part of an exploit chain to escalate privileges and then remotely execute arbitrary code, according to Google security researchers.

The use-after-free vulnerability is tracked as CVE-2024-44068, and it affects Samsung Exynos mobile processors versions 9820, 9825, 980, 990, 850, and W920. It received an 8.1 out of 10 CVSS severity rating, and Samsung, in its very brief security advisory, describes it as a high-severity flaw. The vendor patched the hole on October 7.

While the advisory doesn’t make any mention of attackers abusing the vulnerability, according to Googlers Xingyu Jin and Clement Lecigene, someone(s) has already chained the flaw with other CVEs (those aren’t listed) as part of an attack to execute code on people’s phones.

The bug exists in the memory management and how the device driver sets up the page mapping, according to Lecigene, a member of Google’s Threat Analysis Group, and Jin, a Google Devices and Services Security researcher who is credited with spotting the flaw and reporting it to Samsung.

“This 0-day exploit is part of an EoP chain,” the duo said. “The actor is able to execute arbitrary code in a privileged cameraserver process. The exploit also renamed the process name itself to ‘vendor.samsung.hardware.camera.provider@3.0-service,’ probably for anti-forensic purposes.”

The Register reached out to Samsung for more information about the flaw and in-the-wild exploits, but did not immediately receive a response. We will update this story when we hear back.

It’s worth noting that Google TAG keeps a close eye on spyware and nation-state gangs abusing zero-days for espionage purposes.

Considering that both of these threats frequently attack mobile devices to keep tabs on specific targets — Google tracked [PDF] 61 zero-days in the wild that specifically targeted end-user platforms and products in 2023 – we wouldn’t be too surprised to hear that the exploit chain including CVE-2024-44068 ultimately deploys some snooping malware on people’s phones. ®

Source: Samsung phone users exposed to EoP attacks, Google warns • The Register