Critical Cisco bug allows anyone to change any user’s password, including admins’

Cisco just dropped a patch for a maximum-severity vulnerability that allows attackers to change the password of any user, including admins.

Tracked as CVE-2024-20419, the bug carries a maximum 10/10 CVSS 3.1 rating and affects the authentication system of Cisco Smart Software Manager (SSM) On-Prem.

Cisco hasn’t disclosed many details about this, which is understandable given the nature of the vulnerability. However, we know that an unauthenticated remote attacker can exploit this to change passwords. It’s hardly ideal, and should be patched as soon as possible.

Digging into the severity assessment, the attack complexity was deemed “low”: no privileges or user interaction would be required to pull it off, and the impact on the product’s integrity, availability, and confidentiality is all designated “high.”

“This vulnerability is due to improper implementation of the password-change process,” Cisco’s advisory reads, providing the last few details about the vulnerability.

“An attacker could exploit this vulnerability by sending crafted HTTP requests to an affected device. A successful exploit could allow an attacker to access the web UI or API with the privileges of the compromised user.”

There are no workarounds for this vulnerability, so get those patches applied if you’re in the business of keeping your passwords safe and secure. Fortunately, there are no signs of this being exploited in the wild yet, but now the cat’s out of the bag it likely won’t be long before that changes.

CVE-2024-20419 affects both SSM On-Prem and SSM Satellite. They’re different names for the same product; the latter refers to releases before version 7.0.

[…]

Source: Critical Cisco bug allows crims to change admin passwords • The Register

Linksys Velop Routers Caught Sending WiFi Creds In The Clear – alerted in November 2023, still not fixed

A troubling report from the Belgian consumer protection group Testaankoop: several models of Velop Pro routers from Linksys were found to be sending WiFi configuration data out to a remote server during the setup process. That would be bad enough, but not only are these routers reporting private information to the mothership, they are doing it in clear text for anyone to listen in on.

Testaankoop says that while testing out the Pro WiFi 6E and Pro 7 versions of Velop routers, they discovered that unencrypted packets were being sent to a server hosted by Amazon Web Services (AWS). In these packets, they discovered not only the SSID of the user’s wireless network, but the encryption key necessary to join it. There were also various tokens included that could be used to identify the network and user.

While the report doesn’t go into too much detail, it seems this information is being sent as part of the configuration process when using the official Linksys mobile application. If you want to avoid having your information bounced around the Internet, you can still use the router’s built-in web configuration menus from a browser on the local network — just like in the good old days.

The real kicker here is the response from Linksys, or more accurately, the lack thereof. Testaankoop says it notified Linksys of the discovery back in November of 2023 and got no response. There have even been firmware updates for the affected routers since then, but the issue remains unresolved.

Testaankoop ends the review by strongly recommending users avoid these particular models of Linksys Velop routers, which given the facts, sounds like solid advice to us. They also express their disappointment in how the brand, a fixture in the consumer router space for decades, has handled the situation. If you ask us, things started going downhill once they stopped running Linux on their hardware.

Source: Linksys Velop Routers Caught Sending WiFi Creds In The Clear | Hackaday

384,000 sites still pulling code from sketchy polyfill.io code library recently bought by Chinese firm

More than 384,000 websites are linking to a site that was caught last week performing a supply-chain attack that redirected visitors to malicious sites, researchers said.

For years, the JavaScript code, hosted at polyfill[.]io, was a legitimate open source project that allowed older browsers to handle advanced functions that weren’t natively supported. By linking to cdn.polyfill[.]io, websites could ensure that devices using legacy browsers could render content in newer formats. The free service was popular among websites because all they had to do was embed the link in their sites. The code hosted on the polyfill site did the rest.
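Site owners can check for the dependency without any special tooling. A minimal, hypothetical Python sketch (the domain list and regex are our own, not from any official scanner) that flags script tags loading from the affected domain:

```python
import re

# Domains flagged in the polyfill.io supply-chain incident.
FLAGGED_DOMAINS = ("cdn.polyfill.io", "polyfill.io")

def find_flagged_scripts(html: str) -> list[str]:
    """Return src attributes of <script> tags that load from flagged domains."""
    srcs = re.findall(r'<script[^>]+src=["\']([^"\']+)["\']', html, re.IGNORECASE)
    return [s for s in srcs if any(d in s for d in FLAGGED_DOMAINS)]

sample = """
<html><head>
<script src="https://cdn.polyfill.io/v3/polyfill.min.js"></script>
<script src="/js/app.js"></script>
</head></html>
"""
print(find_flagged_scripts(sample))  # ['https://cdn.polyfill.io/v3/polyfill.min.js']
```

Running something like this against your own pages is a quick sanity check; it won’t catch scripts injected dynamically at runtime, so a proper dependency audit is still warranted.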

The power of supply-chain attacks

In February, China-based company Funnull acquired the domain and the GitHub account that hosted the JavaScript code. On June 25, researchers from security firm Sansec reported that code hosted on the polyfill domain had been changed to redirect users to adult- and gambling-themed websites. The code was deliberately designed to mask the redirections by performing them only at certain times of the day and only against visitors who met specific criteria.

The revelation prompted industry-wide calls to take action. Two days after the Sansec report was published, domain registrar Namecheap suspended the domain, a move that effectively prevented the malicious code from running on visitor devices. Even then, content delivery networks such as Cloudflare began automatically replacing polyfill links with domains leading to safe mirror sites. Google blocked ads for sites embedding the Polyfill[.]io domain. The website blocker uBlock Origin added the domain to its filter list. And Andrew Betts, the original creator of Polyfill.io, urged website owners to remove links to the library immediately.

As of Tuesday, exactly one week after malicious behavior came to light, 384,773 sites continued to link to the site, according to researchers from security firm Censys. Some of the sites were associated with mainstream companies including Hulu, Mercedes-Benz, and Warner Bros., as well as the federal government. The findings underscore the power of supply-chain attacks, which can spread malware to thousands or millions of people simply by infecting a common source they all rely on.

[…]

Source: 384,000 sites pull code from sketchy code library recently bought by Chinese firm | Ars Technica

CocoaPods Vulnerabilities from 2014 Affect almost all Apple devices, Facebook, TikTok apps and more

CocoaPods vulnerabilities reported today could allow malicious actors to take over thousands of unclaimed pods and insert malicious code into many of the most popular iOS and macOS applications, potentially affecting “almost every Apple device.”

E.V.A Information Security researchers found that the three vulnerabilities in the open source CocoaPods dependency manager were present in applications provided by Meta (Facebook, WhatsApp), Apple (Safari, AppleTV, Xcode), and Microsoft (Teams); as well as in TikTok, Snapchat, Amazon, LinkedIn, Netflix, Okta, Yahoo, Zynga, and many more.

The vulnerabilities have been patched, yet the researchers still found 685 Pods “that had an explicit dependency using an orphaned Pod; doubtless there are hundreds or thousands more in proprietary codebases.”

The widespread issue is further evidence of the vulnerability of the software supply chain. The researchers wrote that they often find that 70-80% of client code they review “is composed of open-source libraries, packages, or frameworks.”

The CocoaPods Vulnerabilities

The newly discovered vulnerabilities – one of which (CVE-2024-38366) received a 10 out of 10 criticality score – actually date from a May 2014 CocoaPods migration to a new ‘Trunk’ server, which left 1,866 orphaned pods that owners never reclaimed.

The other two CocoaPods vulnerabilities (CVE-2024-38368 and CVE-2024-38367) also date from the migration.

For CVE-2024-38368, the researchers said that in analyzing the source code of the ‘Trunk’ server, they noticed that all orphan pods were associated with a default CocoaPods owner, and the email created for this default owner was unclaimed-pods@cocoapods.org. They also noticed that the public API endpoint to claim a pod was still available, and the API “allowed anyone to claim orphaned pods without any ownership verification process.”

“By making a straightforward curl request to the publicly available API, and supplying the unclaimed targeted pod name, the door was wide open for a potential attacker to claim any or all of these orphaned Pods as their own,” wrote Reef Spektor and Eran Vaknin.

Once they took over a Pod, an attacker would be able to manipulate the source code or insert malicious content into the Pod, which “would then go on to infect many downstream dependencies, and potentially find its way into a large percentage of Apple devices currently in use.”

[…]

“The vulnerabilities we discovered could be used to control the dependency manager itself, and any published package.”

Downstream dependencies could mean that thousands of applications and millions of devices were exposed over the last few years, and close attention should be paid to software that relies on orphaned CocoaPod packages that do not have an owner assigned to them.

Developers and organizations should review dependency lists and package managers used in their applications, validate checksums of third-party libraries, perform periodic scans to detect malicious code or suspicious changes, keep software updated, and limit use of orphaned or unmaintained packages.
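The checksum-validation step in particular is easy to automate. A minimal sketch, assuming you have a pinned digest from a lockfile or a vendor’s release notes (the file names here are made up for illustration):

```python
import hashlib
import os
import tempfile

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, pinned: str) -> bool:
    """Compare a dependency on disk against a digest pinned in a lockfile."""
    return sha256_of(path) == pinned.lower()

# Demo: pretend this temp file is a vendored third-party library.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"fake library contents v1.0.0")
    lib = f.name
pinned = sha256_of(lib)          # in practice this comes from the lockfile
print(verify(lib, pinned))       # True - contents match the pinned digest
print(verify(lib, "0" * 64))     # False - tampered or wrong version
os.unlink(lib)
```

CocoaPods itself records checksums in `Podfile.lock`; the point of an independent check like this is to catch a dependency that changed out from under the lockfile.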

“Dependency managers are an often-overlooked aspect of software supply chain security,” the researchers wrote. “Security leaders should explore ways to increase governance and oversight over the use of these tools.”

Source: CocoaPods Vulnerabilities Could Affect Apple, Facebook, TikTok

Microsoft finally tells more customers their emails have been stolen

It took a while, but Microsoft has told customers that the Russian criminals who compromised its systems earlier this year made off with even more emails than it first admitted.

We’ve been aware for some time that the digital Russian break-in at the Windows maker saw Kremlin spies make off with source code, executive emails, and sensitive US government data. Reports last week revealed that the issue was even larger than initially believed and additional customers’ data has been stolen.

“We are continuing notifications to customers who corresponded with Microsoft corporate email accounts that were exfiltrated by the Midnight Blizzard threat actor, and we are providing the customers the email correspondence that was accessed by this actor,” a Microsoft spokesperson told Bloomberg. “This is increased detail for customers who have already been notified and also includes new notifications.”

Along with Russia, Microsoft was also compromised by state actors from China not long ago, and that issue similarly led to the theft of emails and other data belonging to senior US government officials.

Both incidents have led experts to call Microsoft a threat to US national security, and President Brad Smith to issue a less-than-reassuring mea culpa to Congress. All the while, the US government has actually invested more in its Microsoft kit.

Bloomberg reported that the emails being sent to affected Microsoft customers include a link to a secure environment where they can review the messages Microsoft identified as having been compromised. But even that might not have been the most security-conscious way to notify folks: several thought they were being phished.

Source: Microsoft tells more customers their emails have been stolen • The Register

ID verification service that works with TikTok and X left its admin credentials wide open for a year

An ID verification company that works on behalf of TikTok, X and Uber, among others, has left a set of administrative credentials exposed for more than a year, as reported by 404 Media. The Israel-based AU10TIX verifies the identity of users by using pictures of their faces and drivers’ licenses, potentially opening up both to hackers.

“My personal reading of this situation is that an ID Verification service provider was entrusted with people’s identities and it failed to implement simple measures to protect people’s identities and sensitive ID documents,” Mossab Hussein, the chief security officer at cybersecurity firm spiderSilk who originally noticed the exposed credentials, said.

The set of admin credentials that were left exposed led right to a logging platform, which in turn included links to identity documents. There’s even some reason to suspect that bad actors got ahold of these credentials and actually used them.

They appear to have been scooped up by malware in December 2022 and placed on a Telegram channel in March 2023, according to timestamps and messages acquired by 404 Media. The news organization downloaded the credentials and found a wealth of passwords and authentication tokens linked to someone who lists their role on LinkedIn as a Network Operations Center Manager at AU10TIX.

If hackers got ahold of customer data, it would include a user’s name, date of birth, nationality, ID number and images of uploaded documents. It’s pretty much all an internet gollum would need to steal an identity. All they would have to do is snatch up the credentials, log in and start wreaking havoc. Yikes.

[…]

Source: An ID verification service that works with TikTok and X left its credentials wide open for a year

Patch now: ‘Easy-to-exploit’ RCE in open source Ollama

A now-patched vulnerability in Ollama – a popular open source project for running LLMs – can lead to remote code execution, according to flaw finders who warned that upwards of 1,000 vulnerable instances remain exposed to the internet.

Wiz Research disclosed the flaw, tracked as CVE-2024-37032 and dubbed Probllama, on May 5, and its maintainers fixed the issue in version 0.1.34, released via GitHub a day later.

Ollama is useful for performing inference with compatible neural networks – such as Meta’s Llama family, hence the name; Microsoft’s Phi clan; and models from Mistral – and it can be used on the command line or via a REST API. It has hundreds of thousands of monthly pulls on Docker Hub.

In a report published today, the Wiz bug hunting team’s Sagi Tzadik said the vulnerability is due to insufficient validation on the server side of that REST API provided by Ollama. An attacker could exploit the flaw by sending a specially crafted HTTP request to the Ollama API server — and in Docker installations, at least, the API server is publicly exposed.

The Ollama server provides multiple API endpoints that perform core functions. This includes the API endpoint /api/pull that lets users download models from the Ollama registry as well as private registries. As the researchers found, the process to trigger the download of a model was exploitable, allowing miscreants to potentially compromise the environment hosting a vulnerable Ollama server.

“What we found is that when pulling a model from a private registry (by querying the http://[victim]:11434/api/pull API endpoint), it is possible to supply a malicious manifest file that contains a path traversal payload in the digest field,” Tzadik explained.

An attacker could then use that payload to corrupt files on the system, achieve arbitrary file read, and ultimately remote code execution (RCE) to hijack that system.
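The fix for this class of bug boils down to validating the digest before it is ever used to build a filesystem path. A hypothetical sketch of that kind of strict check (this is our illustration of the technique, not Ollama’s actual patch; the store path is made up):

```python
import re
from pathlib import Path

# Strict OCI-style digest: "sha256:" followed by exactly 64 hex characters.
DIGEST_RE = re.compile(r"^sha256:[0-9a-f]{64}$")

def blob_path(store: Path, digest: str) -> Path:
    """Map a manifest digest to a storage path, rejecting traversal payloads."""
    if not DIGEST_RE.fullmatch(digest):
        raise ValueError(f"malformed digest: {digest!r}")
    # The regex already forbids '/' and '..', so this is belt-and-braces:
    path = (store / digest.replace(":", "-")).resolve()
    if store.resolve() not in path.parents:
        raise ValueError("digest escapes the blob store")
    return path

# A well-formed digest maps cleanly into the store...
blob_path(Path("/var/lib/models"), "sha256:" + "a" * 64)
# ...while a payload like "../../../etc/passwd" raises ValueError instead.
```

Allow-listing the exact expected format, rather than trying to strip out dangerous characters, is generally the safer way to handle attacker-supplied fields like this.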

“This issue is extremely severe in Docker installations, as the server runs with root privileges and listens on 0.0.0.0 by default – which enables remote exploitation of this vulnerability,” Tzadik emphasized.

And despite a patched version of the project being available for over a month, the Wiz kids found that, as of June 10, there were more than 1,000 vulnerable Ollama server instances still exposed to the internet. In light of this, there are a couple of things anyone using Ollama should do to protect their AI applications.

First, and it should go without saying, update instances to version 0.1.34 or newer. Also, as Ollama doesn’t inherently support authentication, do not expose installations to the internet unless they sit behind some form of authentication, such as a reverse proxy. Better still, don’t allow the internet to reach the server at all: put it behind firewalls and only allow authorized internal applications and their users to access it.

“The critical issue is not just the vulnerabilities themselves but the inherent lack of authentication support in these new tools,” Tzadik noted, referring to previous RCEs in other tools used to deploy LLMs including TorchServe and Ray Anyscale.

Plus, he added, even though these tools are new and often written in modern safety-first programming languages, “classic vulnerabilities such as path traversal remain an issue.”

Source: Patch now: ‘Easy-to-exploit’ RCE in open source Ollama

Microsoft fixes hack-me-via-Wi-Fi Windows security hole

[…] CVE-2024-30078, a Wi-Fi driver remote code execution hole rated 8.8 in severity. It’s not publicly disclosed, not yet under attack, and exploitation is “less likely,” according to Redmond.

“An unauthenticated attacker could send a malicious networking packet to an adjacent system that is employing a Wi-Fi networking adapter, which could enable remote code execution,” and thus remotely, silently, and wirelessly run malware or spyware on that nearby victim’s computer, Microsoft admitted.

Childs said: “Considering it hits every supported version of Windows, it will likely draw a lot of attention from attackers and red teams alike.” Patch as soon as you can: This flaw can be abused to run malicious software on and hijack a nearby Windows PC via its Wi-Fi with no authentication needed. Pretty bad. […]

Source: Microsoft fixes hack-me-via-Wi-Fi Windows security hole • The Register

ASUS Releases Firmware Update for Critical Remote Authentication Bypass Affecting Seven Routers

A report from BleepingComputer notes that ASUS “has released a new firmware update that addresses a vulnerability impacting seven router models that allows remote attackers to log in to devices.” But there’s more bad news: Taiwan’s CERT has also informed the public about CVE-2024-3912 in a post yesterday, which is a critical (9.8) arbitrary firmware upload vulnerability allowing unauthenticated, remote attackers to execute system commands on the device. The flaw impacts multiple ASUS router models, but not all will be getting security updates because they have reached end-of-life (EoL).

Finally, ASUS announced an update to Download Master, a utility used on ASUS routers that enables users to manage and download files directly to a connected USB storage device via torrent, HTTP, or FTP. The newly released Download Master version 3.1.0.114 addresses five medium to high-severity issues concerning arbitrary file upload, OS command injection, buffer overflow, reflected XSS, and stored XSS problems.

Source: https://mobile.slashdot.org/story/24/06/17/0237229/asus-releases-firmware-update-for-critical-remote-authentication-bypass-affecting-seven-routers

Arm Memory Tag Extensions broken by speculative execution

In 2018, chip designer Arm introduced a hardware security feature called Memory Tagging Extensions (MTE) as a defense against memory safety bugs. But it may not be as effective as first hoped.

Implemented and supported last year in Google’s Pixel 8 and Pixel 8 Pro phones and previously in Linux, MTE aims to help detect memory safety violations, as well as hardening devices against attacks that attempt to exploit memory safety flaws.

[…]

MTE works by tagging blocks of physical memory with metadata. This metadata serves as a key that permits access. When a pointer references data within a tagged block of memory, the hardware checks that the pointer carries a key matching the memory block’s before granting access to the data. A mismatch raises an error.

Tag, you’re IT

Diving deeper, when MTE is active, programs can use special instructions to tag 16-byte blocks of physical memory with a 4-bit key. For example, when allocating a chunk of memory from the heap, that chunk (aligned and rounded to 16 bytes) can be tagged with the same 4-bit key, and a pointer to that chunk is generated containing the key in its upper unused bits.

When the program uses that pointer in future, referencing some part of the block, everything works fine. The pointer still contains the correct key. But if the block is freed and its key is changed, subsequent use of that stale pointer will trigger a fault by the processor, due to a mismatching key, which indicates a programming bug or a vulnerability exploit attempt, both of which you want to catch.

And if the program is hijacked via some other vulnerability, and the code is made to reference a tagged block without the right key in the pointer, that will also be caught.
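The mechanism described above can be modeled in a few lines. This is a toy Python simulation of the tag check, not how the silicon works (the granule bookkeeping and fault type are our own simplifications); it just shows why a stale or forged pointer faults:

```python
# Toy model of MTE's tag check: 16-byte granules carry a 4-bit tag,
# and pointers carry the expected tag in otherwise-unused upper bits.
GRANULE = 16
TAG_SHIFT = 56  # AArch64 keeps the tag in bits 56-59 of the address

memory_tags = {}  # granule index -> 4-bit tag

def tag_region(addr: int, size: int, tag: int) -> int:
    """Tag every granule in [addr, addr+size) and return a tagged pointer."""
    for g in range(addr // GRANULE, -(-(addr + size) // GRANULE)):
        memory_tags[g] = tag
    return addr | (tag << TAG_SHIFT)

def load(ptr: int) -> None:
    """Fault (raise) if the pointer's tag does not match the granule's tag."""
    addr = ptr & ((1 << TAG_SHIFT) - 1)
    tag = (ptr >> TAG_SHIFT) & 0xF
    if memory_tags.get(addr // GRANULE) != tag:
        raise MemoryError("tag check fault")

p = tag_region(0x1000, 32, tag=0x7)  # allocation: granules tagged 7
load(p)                              # ok: pointer tag matches
tag_region(0x1000, 32, tag=0x3)      # free/retag: granules now tagged 3
# load(p) would now raise MemoryError - the stale pointer is caught
```

The TikTag attack doesn’t defeat this check directly; it leaks which 4-bit tag a granule carries, so an exploit can construct a pointer that passes it.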

[…]

Unfortunately, MTE appears to be insufficiently secure to fulfill its security promises. Researchers affiliated with Seoul National University in South Korea, Samsung Research, and Georgia Institute of Technology in the US have found that they can break MTE through speculative execution.

The authors – Juhee Kim, Jinbum Park, Sihyeon Roh, Jaeyoung Chung, Youngjoo Lee, Taesoo Kim, and Byoungyoung Lee – say as much in their research paper, “TikTag: Breaking Arm’s Memory Tagging Extension with Speculative Execution.”

Having looked at MTE to assess whether it provides the claimed security benefit, the boffins say it does not. Instead, they found they could extract MTE tags in under four seconds around 95 per cent of the time.

“[W]e found that speculative execution attacks are indeed possible against MTE, which severely harms the security assurance of MTE,” the authors report. “We discovered two new gadgets, named TIKTAG-v1 and TIKTAG-v2, which can leak the MTE tag of an arbitrary memory address.”

[…]

The authors say that their research expands on prior work from May 2024 that found MTE vulnerable to speculative probing. What’s more, they contend their findings challenge work by Google’s Project Zero that found no side-channel attack capable of breaking MTE.

Using proof-of-concept code, MTE tags were ferreted out of Google Chrome on Android and the Linux kernel using this technique, with a success rate that exceeded 95 percent in less than four seconds, it’s claimed.

The authors have made their code available on GitHub. “When TikTag gadgets are speculatively executed, cache state differs depending on whether the gadgets trigger a tag check fault or not,” the code repo explains. “Therefore, by observing the cache states, it is possible to leak the tag check results without raising any exceptions.”

Access to leaked tags doesn’t ensure exploitation. It simply means that an attacker capable of exploiting a particular memory bug on an affected device wouldn’t be thwarted by MTE.

The researchers disclosed their findings to Arm, which acknowledged them in a developer note published in December 2023. The chip design firm said that timing differences in successful and failed tag checking can be enough to create an MTE speculative oracle – a mechanism to reveal MTE tags – in Cortex-X2, Cortex-X3, Cortex-A510, Cortex-A520, Cortex-A710, Cortex-A715, and Cortex-A720 processors.

[…]

Source: Arm Memory Tag Extensions broken by speculative execution • The Register

Wi-Fi Routers are like trackers available to everyone

Apple and the satellite-based broadband service Starlink each recently took steps to address new research into the potential security and privacy implications of how their services geo-locate devices. Researchers from the University of Maryland say they relied on publicly available data from Apple to track the location of billions of devices globally — including non-Apple devices like Starlink systems — and found they could use this data to monitor the destruction of Gaza, as well as the movements and in many cases identities of Russian and Ukrainian troops.

At issue is the way that Apple collects and publicly shares information about the precise location of all Wi-Fi access points seen by its devices. Apple collects this location data to give Apple devices a crowdsourced, low-power alternative to constantly requesting global positioning system (GPS) coordinates.

Both Apple and Google operate their own Wi-Fi-based Positioning Systems (WPS) that obtain certain hardware identifiers from all wireless access points that come within range of their mobile devices. Both record the Media Access Control (MAC) address that a Wi-Fi access point uses, known as a Basic Service Set Identifier or BSSID.

Periodically, Apple and Google mobile devices will forward their locations — by querying GPS and/or by using cellular towers as landmarks — along with any nearby BSSIDs. This combination of data allows Apple and Google devices to figure out where they are within a few feet or meters, and it’s what allows your mobile phone to continue displaying your planned route even when the device can’t get a fix on GPS.
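The device-side estimation is conceptually simple. A toy sketch of the idea (the coordinates, BSSIDs, and signal-strength weighting here are entirely made up; real WPS implementations are far more sophisticated):

```python
# Toy client-side position estimate in the style a WPS enables: the service
# returns known (lat, lon) for nearby BSSIDs, and the device averages them,
# weighting stronger signals (presumed closer access points) more heavily.
def estimate_position(observations, known_aps):
    """observations: {bssid: rssi_dbm}; known_aps: {bssid: (lat, lon)}."""
    total_w, lat, lon = 0.0, 0.0, 0.0
    for bssid, rssi in observations.items():
        if bssid not in known_aps:
            continue  # AP not present in the WPS database
        w = 10 ** (rssi / 10)  # convert dBm to a linear weight
        a_lat, a_lon = known_aps[bssid]
        lat += w * a_lat
        lon += w * a_lon
        total_w += w
    if total_w == 0:
        raise LookupError("no known access points in range")
    return lat / total_w, lon / total_w

known = {
    "aa:bb:cc:00:00:01": (38.9897, -76.9378),
    "aa:bb:cc:00:00:02": (38.9899, -76.9380),
}
seen = {"aa:bb:cc:00:00:01": -40, "aa:bb:cc:00:00:02": -70}
print(estimate_position(seen, known))
```

The privacy problem the researchers identified isn’t the estimation itself, but that the database of BSSID locations feeding it was queryable by anyone.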

[…]

In essence, Google’s WPS computes the user’s location and shares it with the device. Apple’s WPS gives its devices a large enough amount of data about the location of known access points in the area that the devices can do that estimation on their own.

That’s according to two researchers at the University of Maryland, who theorized they could use the verbosity of Apple’s API to map the movement of individual devices into and out of virtually any defined area of the world. The UMD pair said they spent a month early in their research continuously querying the API, asking it for the location of more than a billion BSSIDs generated at random.

They learned that while only about three million of those randomly generated BSSIDs were known to Apple’s Wi-Fi geolocation API, Apple also returned an additional 488 million BSSID locations already stored in its WPS from other lookups.

[…]

Plotting the locations returned by Apple’s WPS between November 2022 and November 2023, Levin and Rye saw they had a near global view of the locations tied to more than two billion Wi-Fi access points. The map showed geolocated access points in nearly every corner of the globe, apart from almost the entirety of China, vast stretches of desert wilderness in central Australia and Africa, and deep in the rainforests of South America.

A “heatmap” of BSSIDs the UMD team said they discovered by guessing randomly at BSSIDs.

The researchers said that by zeroing in on or “geofencing” other smaller regions indexed by Apple’s location API, they could monitor how Wi-Fi access points moved over time. Why might that be a big deal? They found that by geofencing active conflict zones in Ukraine, they were able to determine the location and movement of Starlink devices used by both Ukrainian and Russian forces.

The reason they were able to do that is that each Starlink terminal — the dish and associated hardware that allows a Starlink customer to receive Internet service from a constellation of orbiting Starlink satellites — includes its own Wi-Fi access point, whose location is going to be automatically indexed by any nearby Apple devices that have location services enabled.

A heatmap of Starlink routers in Ukraine. Image: UMD.

The University of Maryland team geo-fenced various conflict zones in Ukraine, and identified at least 3,722 Starlink terminals geolocated in Ukraine.

“We find what appear to be personal devices being brought by military personnel into war zones, exposing pre-deployment sites and military positions,” the researchers wrote. “Our results also show individuals who have left Ukraine to a wide range of countries, validating public reports of where Ukrainian refugees have resettled.”

[…]

The researchers also focused their geofencing on the Israel-Hamas war in Gaza, and were able to track the migration and disappearance of devices throughout the Gaza Strip as Israeli forces cut power to the country and bombing campaigns knocked out key infrastructure.

“As time progressed, the number of Gazan BSSIDs that are geolocatable continued to decline,” they wrote. “By the end of the month, only 28% of the original BSSIDs were still found in the Apple WPS.”

In late March 2024, Apple quietly updated its website to note that anyone can opt out of having the location of their wireless access points collected and shared by Apple — by appending “_nomap” to the end of the Wi-Fi access point’s name (SSID). Adding “_nomap” to your Wi-Fi network name also blocks Google from indexing its location.

[…]

Rye said Apple’s response addressed the most depressing aspect of their research: That there was previously no way for anyone to opt out of this data collection.

“You may not have Apple products, but if you have an access point and someone near you owns an Apple device, your BSSID will be in [Apple’s] database,” he said. “What’s important to note here is that every access point is being tracked, without opting in, whether they run an Apple device or not. Only after we disclosed this to Apple have they added the ability for people to opt out.”

The researchers said they hope Apple will consider additional safeguards, such as proactive ways to limit abuses of its location API.

[…]

“We observe routers move between cities and countries, potentially representing their owner’s relocation or a business transaction between an old and new owner,” they wrote. “While there is not necessarily a 1-to-1 relationship between Wi-Fi routers and users, home routers typically only have several users. If these users are vulnerable populations, such as those fleeing intimate partner violence or a stalker, their router simply being online can disclose their new location.”

The researchers said Wi-Fi access points that can be created using a mobile device’s built-in cellular modem do not create a location privacy risk for their users because mobile phone hotspots will choose a random BSSID when activated.
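That randomization is cheap to do. A sketch of generating a random BSSID with the locally administered bit set (the convention randomized MACs typically follow) and the multicast bit clear; this is an illustration of the scheme, not any vendor’s implementation:

```python
import secrets

def random_bssid() -> str:
    """Generate a random MAC address suitable for a randomized hotspot BSSID.

    Bit 1 of the first octet (locally administered) is set, and bit 0
    (multicast) is cleared, so the address is a valid unicast local MAC.
    """
    octets = bytearray(secrets.token_bytes(6))
    octets[0] = (octets[0] | 0x02) & 0xFE
    return ":".join(f"{b:02x}" for b in octets)

print(random_bssid())  # e.g. "3a:1f:9c:04:d2:77" - different every call
```

Because the BSSID changes on each activation, a WPS entry for a phone hotspot goes stale almost immediately, which is exactly why the researchers flagged fixed-BSSID routers, not hotspots, as the tracking risk.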

[…]

For example, they discovered that certain commonly used travel routers compound the potential privacy risks.

“Because travel routers are frequently used on campers or boats, we see a significant number of them move between campgrounds, RV parks, and marinas,” the UMD duo wrote. “They are used by vacationers who move between residential dwellings and hotels. We have evidence of their use by military members as they deploy from their homes and bases to war zones.”

A copy of the UMD research is available here (PDF).

Source: Why Your Wi-Fi Router Doubles as an Apple AirTag – Krebs on Security

Over 165 Snowflake customers didn’t use MFA, says Mandiant

An unknown financially motivated crime crew has swiped a “significant volume of records” from Snowflake customers’ databases using stolen credentials, according to Mandiant.

“To date, Mandiant and Snowflake have notified approximately 165 potentially exposed organizations,” the Google-owned threat hunters wrote on Monday, and noted they track the perps as “UNC5537.”

The crew behind the Snowflake intrusions may have ties to Scattered Spider, aka UNC3944 – the notorious gang behind the mid-2023 Las Vegas casino breaches.

“Mandiant is investigating the possibility that a member of UNC5537 collaborated with UNC3944 on at least one intrusion in the past six months, but we don’t have enough data to confidently link UNC5537 to a broader group at this time,” senior threat analyst Austin Larsen told The Register.

Mandiant – one of the incident response firms hired by Snowflake to help investigate its recent security incident – also noted that there’s no evidence a breach of Snowflake’s own enterprise environment was to blame for its customers’ breaches.

“Instead, every incident Mandiant responded to associated with this campaign was traced back to compromised customer credentials,” the Google-owned threat hunters confirmed.

The earliest detected attack against a Snowflake customer instance happened on April 14. Upon investigating that breach, Mandiant says it determined that UNC5537 used legitimate credentials – previously stolen using infostealer malware – to break into the victim’s Snowflake environment and exfiltrate data. The victim did not have multi-factor authentication turned on.

About a month later, after uncovering “multiple” Snowflake customer compromises, Mandiant contacted the cloud biz and the two began notifying affected organizations. By May 24 the criminals had begun selling the stolen data online, and on May 30 Snowflake issued its statement about the incidents.

After gaining initial access – which we’re told occurred through the Snowflake native web-based user interface or a command-line interface running on Windows Server 2022 – the criminals used a horribly named utility, “rapeflake,” which Mandiant has instead chosen to track as “FROSTBITE.”

UNC5537 has used both .NET and Java versions of this tool to perform reconnaissance against targeted Snowflake customers, allowing the gang to identify users, their roles, and IP addresses.

The crew also sometimes uses DBeaver Ultimate – a publicly available database management utility – to query Snowflake instances.

Several of the initial compromises occurred on contractor systems that were being used for both work and personal activities.

“These devices, often used to access the systems of multiple organizations, present a significant risk,” Mandiant researchers wrote. “If compromised by infostealer malware, a single contractor’s laptop can facilitate threat actor access across multiple organizations, often with IT and administrator-level privileges.”

All of the successful intrusions had three things in common, according to Mandiant. First, the victims didn’t use MFA.

Second, the attackers used valid credentials, “hundreds” of which were stolen thanks to infostealer infections – some as far back as 2020. Common variants used included VIDAR, RISEPRO, REDLINE, RACCOON STEALER, LUMMA and METASTEALER. But even in these years-old thefts, the credentials had not been updated or rotated.

Almost 80 percent of the customer accounts accessed by UNC5537 had prior credential exposure, we’re told.

Finally, the compromised accounts did not have network allow-lists in place. So if you are a Snowflake customer, it’s time to get a little smarter.
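A network allow-list boils down to rejecting any login attempt from an address outside pre-approved ranges. A hedged sketch in Python (the ranges and function are illustrative, not Snowflake's actual network-policy API):

```python
import ipaddress

# Hypothetical corporate egress ranges an admin might approve.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def login_permitted(client_ip: str) -> bool:
    """Allow a login only when the client address falls inside
    one of the pre-approved networks."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in ALLOWED_NETWORKS)
```

With a policy like this in place, credentials lifted by an infostealer are useless unless the attacker can also originate traffic from an approved range.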

Source: Over 165 Snowflake customers didn’t use MFA, says Mandiant • The Register

Oddly enough, they don’t mention the Ticketmaster hack of more than 560 million accounts, confirmed as part of what appears to be a spree hitting Snowflake customers, which is strange considering the size of that breach. Also odd: when you Google Snowflake, you get the corporate page and some Wikipedia entries, but not very much about the hack. Considering the size and breadth of the problem, this is surprising. Or perhaps not, considering Mandiant is part of Google.

China state hackers infected 20,000 govt and defence Fortinet VPNs via critical vulnerability exploited for two months before disclosure

Hackers working for the Chinese government gained access to more than 20,000 VPN appliances sold by Fortinet using a critical vulnerability that the company failed to disclose for two weeks after fixing it, Netherlands government officials said.

The vulnerability, tracked as CVE-2022-42475, is a heap-based buffer overflow that allows hackers to remotely execute malicious code. It carries a severity rating of 9.8 out of 10. A maker of network security software, Fortinet silently fixed the vulnerability on November 28, 2022, but failed to mention the threat until December 12 of that year, when the company said it became aware of an “instance where this vulnerability was exploited in the wild.” On January 11, 2023—more than six weeks after the vulnerability was fixed—Fortinet warned a threat actor was exploiting it to infect government and government-related organizations with advanced custom-made malware.

Enter CoatHanger

The Netherlands officials first reported in February that Chinese state hackers had exploited CVE-2022-42475 to install an advanced and stealthy backdoor tracked as CoatHanger on Fortigate appliances inside the Dutch Ministry of Defense. Once installed, the never-before-seen malware, specifically designed for the underlying FortiOS operating system, was able to permanently reside on devices even when rebooted or receiving a firmware update. CoatHanger could also escape traditional detection measures, the officials warned. The damage resulting from the breach was limited, however, because infections were contained inside a segment reserved for non-classified uses.

On Monday, officials with the Military Intelligence and Security Service (MIVD) and the General Intelligence and Security Service in the Netherlands said that to date, Chinese state hackers have used the critical vulnerability to infect more than 20,000 FortiGate VPN appliances sold by Fortinet. Targets include dozens of Western government agencies, international organizations, and companies within the defense industry.

“Since then, the MIVD has conducted further investigation and has shown that the Chinese cyber espionage campaign appears to be much more extensive than previously known,” Netherlands officials with the National Cyber Security Center wrote. “The NCSC therefore calls for extra attention to this campaign and the abuse of vulnerabilities in edge devices.”

Monday’s report said that exploitation of the vulnerability started two months before Fortinet first disclosed it and that 14,000 servers were backdoored during this zero-day period. The officials warned that the Chinese threat group likely still has access to many victims because CoatHanger is so hard to detect and remove.

[…]

Fortinet’s failure to timely disclose is particularly acute given the severity of the vulnerability. Disclosures are crucial because they help users prioritize the installation of patches. When a new version fixes minor bugs, many organizations often wait to install it. When it fixes a vulnerability with a 9.8 severity rating, they’re much more likely to expedite the update process. Given the vulnerability was being exploited even before Fortinet fixed it, the disclosure likely wouldn’t have prevented all of the infections, but it stands to reason it could have stopped some.

Fortinet officials have never explained why they didn’t disclose the critical vulnerability when it was fixed. They have also declined to disclose what the company policy is for the disclosure of security vulnerabilities. Company representatives didn’t immediately respond to an email seeking comment for this post.

Source: China state hackers infected 20,000 Fortinet VPNs, Dutch spy service says | Ars Technica

Largest ever operation by Europol against botnets hits dropper malware ecosystem

Between 27 and 29 May 2024 Operation Endgame, coordinated from Europol’s headquarters, targeted droppers including IcedID, SystemBC, Pikabot, Smokeloader, Bumblebee and Trickbot. The actions focused on disrupting criminal services through arresting High Value Targets, taking down the criminal infrastructures and freezing illegal proceeds. This approach had a global impact on the dropper ecosystem. The malware, whose infrastructure was taken down during the action days, facilitated attacks with ransomware and other malicious software. Following the action days, eight fugitives linked to these criminal activities, wanted by Germany, will be added to Europe’s Most Wanted list on 30 May 2024. The individuals are wanted for their involvement in serious cybercrime activities.

This is the largest ever operation against botnets, which play a major role in the deployment of ransomware. The operation, initiated and led by France, Germany and the Netherlands, was also supported by Eurojust and involved Denmark, the United Kingdom and the United States. In addition, Armenia, Bulgaria, Lithuania, Portugal, Romania, Switzerland and Ukraine also supported the operation with different actions, such as arrests, interviewing suspects, searches, and seizures or takedowns of servers and domains. The operation was also supported by a number of private partners at national and international level including Bitdefender, Cryptolaemus, Sekoia, Shadowserver, Team Cymru, Prodaft, Proofpoint, NFIR, Computest, Northwave, Fox-IT, HaveIBeenPwned, Spamhaus and DIVD.

The coordinated actions led to:

  • 4 arrests (1 in Armenia and 3 in Ukraine)
  • 16 location searches (1 in Armenia, 1 in the Netherlands, 3 in Portugal and 11 in Ukraine)
  • Over 100 servers taken down or disrupted in Bulgaria, Canada, Germany, Lithuania, the Netherlands, Romania, Switzerland, the United Kingdom, the United States and Ukraine
  • Over 2,000 domains under the control of law enforcement

Furthermore, it has been discovered through the investigations so far that one of the main suspects has earned at least EUR 69 million in cryptocurrency by renting out criminal infrastructure sites to deploy ransomware.

[…]

Operation Endgame does not end today. New actions will be announced on the website Operation Endgame. In addition, suspects involved in these and other botnets, who have not yet been arrested, will be directly called to account for their actions. Suspects and witnesses will find information on how to reach out via this website.

Command post at Europol to coordinate the operational actions

Europol facilitated the information exchange and provided analytical, crypto-tracing and forensic support to the investigation. To support the coordination of the operation, Europol organised more than 50 coordination calls with all the countries as well as an operational sprint at its headquarters.

Over 20 law enforcement officers from Denmark, France, Germany and the United States supported the coordination of the operational actions from the command post at Europol, alongside hundreds of other officers from the different countries involved in the actions. In addition, a virtual command post allowed real-time coordination between the Armenian, French, Portuguese and Ukrainian officers deployed on the spot during the field activities.

The command post at Europol facilitated the exchange of intelligence on seized servers, suspects and the transfer of seized data. Local command posts were also set up in Germany, the Netherlands, Portugal, the United States and Ukraine. Eurojust supported the action by setting up a coordination centre at its headquarters to facilitate the judicial cooperation between all authorities involved. Eurojust also assisted with the execution of European Arrest Warrants and European Investigation Orders.

[…]

Source: Largest ever operation against botnets hits dropper malware ecosystem | Europol

2.8M US folks’ personal info swiped in Sav-Rx IT heist – 8 months ago

Sav-Rx has started notifying about 2.8 million people that their personal information was likely stolen during an IT intrusion that happened more than seven months ago.

The biz provides prescription drug management services to more than 10 million US workers and their families, via their employers or unions. It first spotted the network “interruption” on October 8 last year and notes the break-in likely occurred five days earlier, according to a FAQ page about the incident posted on the Sav-Rx website.

Sav-Rx says it restored the IT systems to normal the following business day, and says all prescriptions were shipped on time and without delay. It also notified the police and called in some experts for a deeper dive into the logs.

An “extensive review” completed by a third-party security team on April 30 confirmed “some of the data accessed or acquired by the unauthorized third party may have contained personal information.”

The security breach affected 2,812,336 people, according to an incident notification filed with the Maine attorney general by A&A Services, doing business as Sav-Rx. Potentially stolen details include patients’ names, dates of birth, social security numbers, email addresses, mailing addresses, phone numbers, eligibility data, and insurance identification numbers.

“Please note that other than these data elements, the threat actor did not have access to clinical or financial information,” the notice reads.

While there’s no indication that the crooks have “made any use of your data as a result of this security incident,” Sav-Rx is providing everyone with two years of free credit and identity monitoring, as seems to be standard practice.

There’s also an oddly worded line about what happened that notes, “in conjunction with third-party experts, we have confirmed that any data acquired from our IT system was destroyed and not further disseminated.”

The Register contacted Sav-Rx with several questions about the network breach — including how it confirmed the data was destroyed and if the crooks demanded a payment — and did not receive a response. We will update this story when we hear back. It seems like some form of ransomware or extortion.

Either anticipating, or already receiving, inquiries about the lag between discovering the intrusion and notifying affected parties, the FAQ also includes a “Why wasn’t I contacted sooner?” question.

“Our initial priority was restoring systems to minimize any interruption to patient care,” it answers.

And then, after securing the IT systems and hiring the incident response team, Sav-Rx launched an investigation to determine who had been affected, and what specific personal information had been stolen for each of them.

Then, it sounds like there was some back-and-forth between healthcare bodies and Sav-Rx as to who would notify people that their data had been stolen. Here’s what the company says to that point:

We prioritized this technological investigation to be able to provide affected individuals with as much accurate information as possible. We received the results of that investigation on April 30, 2024, and promptly sent notifications to our health plan customers whose participant data was affected within 48 hours.

We offered to provide affected individuals notification, and once we confirmed that their respective health plans wanted us to provide notice to their participants, we worked expediently to mail notices to the affected individuals.

It’s unclear if this will be enough to satisfy affected customers. But in a statement to reporters, Roger Grimes, of infosec house KnowBe4, said the short answer is probably not.

“I don’t think the eight months it took Sav-Rx to notify impacted customers of the breach is going to fly with anyone, least of all their customers,” Grimes said.

“Today, you’ve got most companies notifying impacted customers in days to a few weeks,” he added. “Eight months? Whoever decided on that decision is likely to come under some heat and have explaining to do.”

Sav-Rx claims to have implemented a “number of detailed and immediate mitigation measures” to improve its security after the digital break-in. This includes “enhancing” its always-on security operations center, and adding new firewalls, antivirus software, and multi-factor authentication.

The organization also says it has since implemented a patching cycle and network segmentation and taken other measures to harden its systems. Hopefully it can also speed up its response times if it happens again.

Source: 2.8M US folks’ personal info swiped in Sav-Rx IT heist • The Register

US Patent and Trademark Office confirms another leak of filers’ address data

The federal government agency responsible for granting patents and trademarks is alerting thousands of filers whose private addresses were exposed following a second data spill in as many years.

The U.S. Patent and Trademark Office (USPTO) said in an email to affected trademark applicants this week that their private domicile address — which can include their home address — appeared in public records between August 23, 2023 and April 19, 2024.

U.S. trademark law requires that applicants include a private address when filing their paperwork with the agency to prevent fraudulent trademark filings.

USPTO said that while no addresses appeared in regular searches on the agency’s website, about 14,000 applicants’ private addresses were included in bulk datasets that USPTO publishes online to aid academic and economic research.

The agency took the blame for the incident, saying the addresses were “inadvertently exposed as we transitioned to a new IT system,” according to the email to affected applicants, which TechCrunch obtained. “Importantly, this incident was not the result of malicious activity,” the email said.

Upon discovery of the security lapse, the agency said it “blocked access to the impacted bulk data set, removed files, implemented a patch to fix the exposure, tested our solution, and re-enabled access.”

If this sounds remarkably familiar, USPTO had a similar exposure of applicants’ address data last June. At the time, USPTO said it inadvertently exposed about 61,000 applicants’ private addresses in a years-long data spill in part through the release of its bulk datasets, and told affected individuals that the issue was fixed.

[…]

Source: US Patent and Trademark Office confirms another leak of filers’ address data | TechCrunch

Attack against virtually all VPN apps neuters their entire purpose

Researchers have devised an attack against nearly all virtual private network applications that forces them to send and receive some or all traffic outside of the encrypted tunnel designed to protect it from snooping or tampering.

TunnelVision, as the researchers have named their attack, largely negates the entire purpose and selling point of VPNs, which is to encapsulate incoming and outgoing Internet traffic in an encrypted tunnel and to cloak the user’s IP address. The researchers believe it affects all VPN applications when they’re connected to a hostile network and that there are no ways to prevent such attacks except when the user’s VPN runs on Linux or Android. They also said their attack technique may have been possible since 2002 and may already have been discovered and used in the wild since then.

Reading, dropping, or modifying VPN traffic

The effect of TunnelVision is “the victim’s traffic is now decloaked and being routed through the attacker directly,” a video demonstration explained. “The attacker can read, drop or modify the leaked traffic and the victim maintains their connection to both the VPN and the Internet.”

TunnelVision – CVE-2024-3661 – Decloaking Full and Split Tunnel VPNs – Leviathan Security Group.

The attack works by manipulating the DHCP server that allocates IP addresses to devices trying to connect to the local network. A setting known as option 121 allows the DHCP server to override default routing rules that send VPN traffic through a local IP address that initiates the encrypted tunnel. By using option 121 to route VPN traffic through the DHCP server, the attack diverts the data to the DHCP server itself. Researchers from Leviathan Security explained:

Our technique is to run a DHCP server on the same network as a targeted VPN user and to also set our DHCP configuration to use itself as a gateway. When the traffic hits our gateway, we use traffic forwarding rules on the DHCP server to pass traffic through to a legitimate gateway while we snoop on it.

We use DHCP option 121 to set a route on the VPN user’s routing table. The route we set is arbitrary and we can also set multiple routes if needed. By pushing routes that are more specific than a /0 CIDR range that most VPNs use, we can make routing rules that have a higher priority than the routes for the virtual interface the VPN creates. We can set multiple /1 routes to recreate the 0.0.0.0/0 all traffic rule set by most VPNs.

Pushing a route also means that the network traffic will be sent over the same interface as the DHCP server instead of the virtual network interface. This is intended functionality that isn’t clearly stated in the RFC. Therefore, for the routes we push, it is never encrypted by the VPN’s virtual interface but instead transmitted by the network interface that is talking to the DHCP server. As an attacker, we can select which IP addresses go over the tunnel and which addresses go over the network interface talking to our DHCP server.

A malicious DHCP option 121 route that causes traffic to never be encrypted by the VPN process. (Image: Leviathan Security)

We now have traffic being transmitted outside the VPN’s encrypted tunnel. This technique can also be used against an already established VPN connection once the VPN user’s host needs to renew a lease from our DHCP server. We can artificially create that scenario by setting a short lease time in the DHCP lease, so the user updates their routing table more frequently. In addition, the VPN control channel is still intact because it already uses the physical interface for its communication. In our testing, the VPN always continued to report as connected, and the kill switch was never engaged to drop our VPN connection.
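The pair of /1 routes the researchers push travels inside DHCP option 121, whose wire format is defined by RFC 3442: each route is a prefix length, the significant octets of the destination, and a four-byte gateway. A sketch of that encoding in Python (the gateway address here is made up for illustration):

```python
import ipaddress

def encode_option_121(routes):
    """Encode (destination_cidr, gateway) pairs as the body of an
    RFC 3442 classless static route option (DHCP option 121)."""
    out = bytearray()
    for cidr, gateway in routes:
        net = ipaddress.ip_network(cidr)
        out.append(net.prefixlen)
        # Only the significant octets of the destination are sent.
        significant = (net.prefixlen + 7) // 8
        out += net.network_address.packed[:significant]
        out += ipaddress.ip_address(gateway).packed
    return bytes(out)

# Two /1 routes cover the same space as 0.0.0.0/0 but, being more
# specific, win over the VPN's default route.
payload = encode_option_121([
    ("0.0.0.0/1", "192.168.1.100"),
    ("128.0.0.0/1", "192.168.1.100"),
])
print(payload.hex())  # 0100c0a801640180c0a80164
```

Twelve bytes in a DHCP lease are enough to steer a victim's entire traffic to an attacker-chosen gateway.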

The attack can most effectively be carried out by a person who has administrative control over the network the target is connecting to. In that scenario, the attacker configures the DHCP server to use option 121. It’s also possible for people who can connect to the network as an unprivileged user to perform the attack by setting up their own rogue DHCP server.

The attack allows some or all traffic to be routed outside the encrypted tunnel. In either case, the VPN application will report that all data is being sent through the protected connection. Any traffic diverted away from the tunnel will not be encrypted by the VPN, and the Internet IP address visible to remote parties will belong to the network the VPN user is connected to, rather than one designated by the VPN app.

Interestingly, Android is the only operating system that fully immunizes VPN apps from the attack because it doesn’t implement option 121. For all other OSes, there are no complete fixes. When apps run on Linux there’s a setting that minimizes the effects, but even then TunnelVision can exploit a side channel to de-anonymize destination traffic and perform targeted denial-of-service attacks. Network firewalls can also be configured to deny inbound and outbound traffic to and from the physical interface. This remedy is problematic for two reasons: (1) a VPN user connecting to an untrusted network has no ability to control the firewall, and (2) it opens the same side channel present with the Linux mitigation.

The most effective fixes are to run the VPN inside of a virtual machine whose network adapter isn’t in bridged mode or to connect the VPN to the Internet through the Wi-Fi network of a cellular device. The research, from Leviathan Security researchers Lizzie Moratti and Dani Cronce, is available here.

Source: Novel attack against virtually all VPN apps neuters their entire purpose | Ars Technica

Microsoft’s latest Windows security updates might break your VPN

Microsoft says the April security updates for Windows may break your VPN. (Oops!) “Windows devices might face VPN connection failures after installing the April 2024 security update (KB5036893) or the April 2024 non-security preview update,” the company wrote in a status update. It’s working on a fix.

Bleeping Computer first reported the issue, which affects Windows 11, Windows 10 and Windows Server 2008 and later. User reports on Reddit are mixed, with some commenters saying their VPNs still work after installing the update and others claiming their encrypted connections were indeed borked.

“We are working on a resolution and will provide an update in an upcoming release,” Microsoft wrote.

There’s no proper fix until Microsoft pushes a patched update. However, you can work around the issue by uninstalling all the security updates. In an unfortunate bit of timing for CEO Satya Nadella, he said last week that he wants Microsoft to put “security above all else.” I can’t imagine making customers (temporarily) choose between going without a VPN and losing the latest protections is what he had in mind.

At least one Redditor claims that uninstalling and reinstalling their VPN app fixed the problem for them, so it may be worth trying that before moving on to more drastic measures.

If you decide to uninstall the security updates, Microsoft tells you how. “To remove the LCU after installing the combined SSU and LCU package, use the DISM/Remove-Package command line option with the LCU package name as the argument,” the company wrote in its patch notes. “You can find the package name by using this command: DISM /online /get-packages.”

Source: Microsoft’s latest Windows security updates might break your VPN

UK becomes first country to ban default bad passwords on IoT devices

[…] On Monday, the United Kingdom became the first country in the world to ban default guessable usernames and passwords from these IoT devices. Unique passwords installed by default are still permitted.

The Product Security and Telecommunications Infrastructure Act 2022 (PSTI) introduces new minimum-security standards for manufacturers, and demands that these companies be open with consumers about how long their products will receive security updates.

Manufacturing and design practices mean many IoT products introduce additional risks to the home and business networks they’re connected to. In one often-cited case described by cybersecurity company Darktrace, hackers were allegedly able to steal data from a casino’s otherwise well-protected computer network after breaking in through an internet-connected temperature sensor in a fish tank.

Under the PSTI, weak or easily guessable default passwords such as “admin” or “12345” are explicitly banned, and manufacturers are also required to publish contact details so users can report bugs.
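A rough illustration of the kind of screening the ban requires, as a Python sketch (the denylist and thresholds are invented for illustration, not the OPSS compliance criteria):

```python
# Commonly guessed defaults of the sort the PSTI Act bans outright.
BANNED_DEFAULTS = {"admin", "password", "passw0rd", "12345", "123456", "root", "default"}

def acceptable_default(password: str) -> bool:
    """Reject empty, short, repeated-character, or commonly guessed
    default passwords for a shipping device."""
    pw = password.strip().lower()
    if len(pw) < 8:
        return False
    if pw in BANNED_DEFAULTS:
        return False
    if len(set(pw)) == 1:  # e.g. "00000000"
        return False
    return True
```

Unique per-device defaults, which remain permitted under the Act, would pass a check like this while "admin" and "12345" are rejected.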

Products that fail to comply with the rules could face being recalled, and the companies responsible could face a maximum fine of £10 million ($12.53 million) or 4% of their global revenue, whichever is higher.

The law will be regulated by the Office for Product Safety and Standards (OPSS), which is part of the Department for Business and Trade rather than an independent body.

[…]

Similar laws are being advanced elsewhere, although none have come into effect. The European Union’s Cyber Resilience Act has yet to be finalized, and its similar provisions aren’t expected to apply within the bloc until 2027.

There is no federal law about securing consumer IoT devices in the United States, although the IoT Cybersecurity Improvement Act of 2020 requires the National Institute of Standards and Technology “to develop and publish standards and guidelines for the federal government” on how they use IoT devices.

Source: UK becomes first country to ban default bad passwords on IoT devices

Apple’s ‘incredibly private’ Safari not so private in Europe, allows tracking by third-party app stores

Apple’s grudging accommodation of European antitrust rules by allowing third-party app stores on iPhones has left users of its Safari browser exposed to potential web activity tracking.

Developers Talal Haj Bakry and Tommy Mysk looked into the way Apple implemented the installation process for third-party software marketplaces on iOS with Safari, and concluded Cupertino’s approach is particularly shoddy.

“Our testing shows that Apple delivered this feature with catastrophic security and privacy flaws,” wrote Bakry and Mysk in an advisory published over the weekend.

Apple – which advertises Safari as “incredibly private” – evidently has undermined privacy among European Union Safari users through a marketplace-kit: URI scheme that potentially allows approved third-party app stores to follow those users around the web.

[…]

The trouble is, any site can trigger a marketplace-kit: request. On EU iOS 17.4 devices, that will cause a unique per-user identifier to be fired off by Safari to an approved marketplace’s servers, leaking the fact that the user was just visiting that site. This happens even if Safari is in private browsing mode. The marketplace’s servers can reject the request, which can also include a custom payload, passing more info about the user to the alternative store.

[…]

Apple doesn’t allow third-party app stores in most parts of the world, citing purported privacy and security concerns – and presumably interest in sustaining its ability to collect commissions for software sales.

But Apple has been designated as a “gatekeeper” under Europe’s Digital Markets Act (DMA) for iOS, the App Store, Safari, and just recently iPadOS.

That designation means the iBiz has been ordered to open its gated community so that European customers can choose third-party app stores and web-based app distribution – also known as side-loading.

But wait, there’s more

According to Bakry and Mysk, Apple’s URI scheme has three significant failings. First, they say, it fails to check the origin of the website, meaning the aforementioned cross-site tracking is possible.

Second, Apple’s MarketplaceKit – its API for third-party stores – doesn’t validate the JSON Web Tokens (JWT) passed as input parameters via incoming requests. “Worse, it blindly relayed the invalid JWT token when calling the /oauth/token endpoint,” observed Bakry and Mysk. “This opens the door to various injection attacks to target either the MarketplaceKit process or the marketplace back-end.”
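Validating a JWT before acting on it takes only a few lines; a standard-library-only HS256 check in Python (a generic sketch, not Apple's MarketplaceKit code) might look like:

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def verify_hs256(token: str, secret: bytes) -> bool:
    """Check an HS256 JWT's structure and signature instead of
    relaying it blindly."""
    try:
        header_b64, payload_b64, sig_b64 = token.split(".")
    except ValueError:
        return False  # not even three dot-separated segments
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return hmac.compare_digest(expected, _b64url_decode(sig_b64))

def make_token(payload: dict, secret: bytes) -> str:
    """Build a signed HS256 JWT (used here only to exercise the check)."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{_b64url(sig)}"
```

A relay that ran a check like this before forwarding tokens onward would at least refuse structurally invalid or wrongly signed input rather than passing it straight to the back end.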

And third, Apple isn’t using certificate pinning, which leaves the door open for meddling by an intermediary (MITM) during the MarketplaceKit communication exchange. Bakry and Mysk claim they were able to overwrite the servers involved in this process with their own endpoints.

The limiting factor of this attack is that a marketplace must first be approved by Apple before it can undertake this sort of tracking. At present, not many marketplaces have won approval. We’re aware of the B2B Mobivention App marketplace, AltStore, and Setapp. Epic Games has also planned an iOS store. A few other marketplaces will work after an iThing jailbreak, but they’re unlikely to attract many consumers.

Nope, the costs to set up your own store are prohibitive and you still have to funnel proceeds to Apple – see also Shameless Insult, Malicious Compliance, Junk Fees, Extortion Regime: Industry Reacts To Apple’s Proposed Changes Over Digital Markets Act

“The flaw of exposing users in the EU to tracking is the result of Apple insisting on inserting itself between marketplaces and their users,” asserted Bakry and Mysk. “This is why Apple needs to pass an identifier to the marketplaces so they can identify installs and perhaps better calculate the due Core Technology Fee (CTF).”

They urge iOS users in Europe to use Brave rather than Safari because Brave’s implementation checks the origin of the website against the URL to prevent cross-site tracking.
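The origin check credited to Brave is conceptually a same-origin comparison of scheme, host, and port. A Python sketch of the rule (hypothetical helper, not Brave's actual code):

```python
from urllib.parse import urlparse

def same_origin(page_url: str, request_url: str) -> bool:
    """True only when scheme, host and port all match (the web's
    same-origin rule), blocking cross-site marketplace requests."""
    a, b = urlparse(page_url), urlparse(request_url)
    return (a.scheme, a.hostname, a.port) == (b.scheme, b.hostname, b.port)
```

Under this rule, a request triggered from one site on behalf of a marketplace hosted elsewhere would simply be refused, closing off the cross-site tracking channel.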

Back when Apple planned not to support Home Screen web apps in Europe – a gambit later abandoned after developer complaints and regulatory pressure – the iGiant justified its position by arguing the amount of work required “was not practical to undertake given the other demands of the DMA.” By not making the extra effort to implement third-party app stores securely, Apple has arguably turned its security and privacy concerns into a self-fulfilling prophecy.

In its remarks [PDF] on complying with the DMA, Apple declared, “In the EU, every user’s security, privacy, and safety will depend in part on two questions. First, are alternative marketplaces and payment processors capable of protecting users? And, second, are they interested in doing so?”

There’s also the question of whether Apple is capable of protecting users – and whether it’s interested in doing so.

[…]

Source: Apple’s ‘incredibly private’ Safari not so private in Europe • The Register

CSS allows HTML emails to change their content after they have been forwarded

[…] The email your manager received and forwarded to you was something completely innocent, such as a potential customer asking a few questions. All that email was supposed to achieve was being forwarded to you. However, the moment the email appeared in your inbox, it changed. The innocent pretext disappeared and the real phishing email became visible. A phishing email you had to trust because you knew the sender and they even confirmed that they had forwarded it to you.

This attack is possible because most email clients allow CSS to be used to style HTML emails. When an email is forwarded, the position of the original email in the DOM usually changes, allowing for CSS rules to be selectively applied only when an email has been forwarded.

An attacker can use this to include elements in the email that appear or disappear depending on the context in which the email is viewed. Because they are usually invisible, only appear in certain circumstances, and can be used for all sorts of mischief, I’ll refer to these elements as kobold letters, after the elusive sprites of mythology.

This affects all types of email clients and webmailers that support HTML email. So pretty much all of them. For the moment, however, I’ll focus on selected clients to demonstrate the problem, and leave it to others (or future me) to extend the principle to other clients.

[…]

Exploiting this in Thunderbird is fairly straightforward. Thunderbird wraps emails in <div class="moz-text-html" lang="x-unicode"></div> and leaves them otherwise unchanged, making it a good example to demonstrate the principle. When forwarding an email, the quoted email will be enclosed in another <div></div>, moving it down one level in the DOM.

Taking this into account leads to the following proof of concept:

<!DOCTYPE html>
<html>

<head>
    <style>
        .kobold-letter {
            display: none;
        }

        .moz-text-html>div>.kobold-letter {
            display: block !important;
        }
    </style>
</head>

<body>
    <p>This text is always visible.</p>
    <p class="kobold-letter">This text will only appear after forwarding.</p>
</body>

</html>

The email contains two paragraphs, one that has no styling and should always be visible, and one that is hidden with display: none;. This is how it looks when the email is displayed in Thunderbird:

A simple email containing the sentence "This text is always visible."

This email may look harmless…

As expected, only the paragraph “This text is always visible.” is shown. However, when we forward the email, the second paragraph suddenly becomes visible – albeit only to the new recipient; the original recipient who forwarded the email remains unaware.

The sentence "This text will only appear after forwarding." is now visible.

…until it has been forwarded.

Because we know exactly where each element will be in the DOM relative to .moz-text-html, and because we control the CSS, we can easily hide and show any part of the email, changing the content completely. If we style the kobold letter as an overlay, we can not only affect the forwarded email, but also (for example) replace any comments your manager might have had on the original mail, opening up even more opportunities for phishing.
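One defensive approach is to strip CSS from HTML mail before rendering or forwarding it, so context-sensitive rules can no longer fire. The sketch below, using Python's stdlib parser, is an illustration of that idea, not something any mail client actually ships:

```python
from html.parser import HTMLParser

class KoboldStripper(HTMLParser):
    """Rebuild HTML while dropping <style> blocks and class/style
    attributes, so selectors like .moz-text-html>div>.kobold-letter
    have nothing left to match."""
    def __init__(self):
        super().__init__()
        self.out = []
        self.in_style = False

    def handle_starttag(self, tag, attrs):
        if tag == "style":
            self.in_style = True
            return
        # Drop the attributes CSS rules hook into.
        kept = [(k, v) for k, v in attrs if k not in ("style", "class")]
        attr_str = "".join(f' {k}="{v}"' for k, v in kept)
        self.out.append(f"<{tag}{attr_str}>")

    def handle_endtag(self, tag):
        if tag == "style":
            self.in_style = False
            return
        self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self.in_style:
            self.out.append(data)

def strip_kobolds(email_html: str) -> str:
    parser = KoboldStripper()
    parser.feed(email_html)
    return "".join(parser.out)

cleaned = strip_kobolds(
    '<style>.kobold-letter{display:none}</style>'
    '<p class="kobold-letter">hidden payload</p><p>visible text</p>'
)
# The hidden paragraph survives as plain text, now visible to everyone,
# which is exactly the point: nothing can appear only after forwarding.
```

The trade-off is obvious: styling is lost entirely, which is why real clients prefer more surgical CSS rewriting. But it shows that the attack depends wholly on the client honoring the sender's stylesheet.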

[…]

Source: Kobold letters – Lutra Security

Intel CPUs still vulnerable to Spectre attack

[…] We’re told mitigations put in place at the software and silicon level by the x86 giant to thwart Spectre-style exploitation of its processors’ speculative execution can be bypassed, allowing malware or rogue users on a vulnerable machine to steal sensitive information – such as passwords and keys – out of kernel memory and other areas of RAM that should be off limits.

The boffins say they have developed a tool called InSpectre Gadget that can find snippets of code, known as gadgets, within an operating system kernel that on vulnerable hardware can be abused to obtain secret data, even on chips that have Spectre protections baked in.

[…]

“We show that our tool can not only uncover new (unconventionally) exploitable gadgets in the Linux kernel, but that those gadgets are sufficient to bypass all deployed Intel mitigations,” the VU Amsterdam team said this week. “As a demonstration, we present the first native Spectre-v2 exploit against the Linux kernel on last-generation Intel CPUs, based on the recent BHI variant and able to leak arbitrary kernel memory at 3.5 kB/sec.”

A quick video demonstrating that Native BHI-based attack to grab the /etc/shadow file of usernames and hashed passwords out of RAM on a 13th-gen Intel Core processor is below. We’re told the technique, tagged CVE-2024-2201, will work on any Intel CPU core.

The VU Amsterdam team — Sander Wiebing, Alvise de Faveri Tron, Herbert Bos and Cristiano Giuffrida — have now open-sourced InSpectre Gadget, an angr-based analyzer, plus a database of gadgets found for Linux Kernel 6.6-rc4 on GitHub.

“Our efforts led to the discovery of 1,511 Spectre gadgets and 2,105 so-called ‘dispatch gadgets,'” the academics added. “The latter are very useful for an attacker, as they can be used to chain gadgets and direct speculation towards a Spectre gadget.”

[…]

AMD and Arm cores are not vulnerable to Native BHI, according to the VU Amsterdam team. AMD has since confirmed this in an advisory.

[…]

After the aforementioned steps were taken to shut down BHI-style attacks, “this mitigation left us with a dangling question: ‘Is finding ‘native’ Spectre gadgets for BHI, ie, not implanted through eBPF, feasible?'” the academics asked.

The short answer is yes. A technical paper [PDF] describing Native BHI is due to be presented at the USENIX Security Symposium.

Source: Tool finds new ways to exploit Spectre holes in Intel CPUs • The Register

Critical bugs in LG TVs could allow complete device takeover

A handful of bugs in LG smart TVs running WebOS could allow an attacker to bypass authorization and gain root access on the device.

Once an attacker has gained root, your TV essentially belongs to them, and they can use that access to do all sorts of nefarious things, including moving laterally through your home network, dropping malware, using the device as part of a botnet, spying on you — or at the very least severely screwing up your streaming service algorithms.

Bitdefender Labs researcher Alexandru Lazăr spotted the four vulnerabilities that affect WebOS versions 4 through 7. In an analysis published today, the security firm noted that while the vulnerable service is only intended for LAN access, more than 91,000 devices are exposed to the internet, according to a Shodan scan.

Here’s a look at the four flaws:

  • CVE-2023-6317: a PIN/prompt bypass that allows an attacker to set a variable and add a new user account to the TV without requiring a security PIN. It has a CVSS rating of 7.2.
  • CVE-2023-6318: a critical command injection flaw with a 9.1 CVSS rating that allows an attacker to elevate initial access to root-level privileges and take over the TV.
  • CVE-2023-6319: another 9.1-rated command injection vulnerability that can be triggered by manipulating the music-lyrics library.
  • CVE-2023-6320: a critical command injection vulnerability that can be triggered by manipulating an API endpoint to allow execution of commands on the device as dbus, which has similar permissions as root. It also received a 9.1 CVSS score.

In order to abuse any of the command injection flaws, however, the attacker must first exploit CVE-2023-6317. This issue is down to WebOS running a service on ports 3000/3001 that allows users to control their TV from their smartphone using a PIN. But there’s a bug in the account handler function that sometimes allows skipping the PIN verification:

The function that handles account registration requests uses a variable called skipPrompt which is set to true when either the client-key or the companion-client-key parameters correspond to an existing profile. It also takes into consideration what permissions are requested when deciding whether to prompt the user for a PIN, as confirmation is not required in some cases.

After creating an account with no permissions, an attacker can then request a new account with elevated privileges “but we specify the companion-client-key variable to match the key we got when we created the first account,” the team reports.

The server confirms that the key exists but doesn’t verify which account it belongs to, we’re told. “Thus, the skipPrompt variable will be true and the account will be created without requesting a PIN confirmation on the TV,” the team reports.

And then, after creating this account with elevated privileges, an attacker can use that access to exploit the other three flaws that lead to root access or command execution as the dbus user.
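The flawed check can be sketched as follows. This is a hypothetical reconstruction based on Bitdefender's description; the names follow the write-up (skipPrompt, companion-client-key), not LG's actual source:

```python
# Keys issued to existing profiles, mapped to the account that owns them.
known_keys = {"key-abc": "no-permission-account"}

def register_account(requested_perms, companion_client_key=None):
    """Return True if the TV shows a PIN prompt before creating the account.

    Bug (per the report): the server only checks that the supplied key
    EXISTS, never which account it belongs to or whether that account
    should be allowed to request these permissions.
    """
    skip_prompt = companion_client_key in known_keys
    if skip_prompt:
        return False  # account created silently, no PIN shown on the TV
    return True       # user must confirm with the PIN

# Step 1: attacker registers a no-permission account and receives a key.
# Step 2: attacker requests an elevated account, replaying that same key,
# and the PIN prompt is skipped entirely:
assert register_account("elevated", companion_client_key="key-abc") is False
```

A correct implementation would also verify that the key's owning account is entitled to the requested permissions before setting skip_prompt.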

Lazăr responsibly reported the flaws to LG on November 1, 2023, and LG asked for a time extension to fix them. The electronics giant issued patches on March 22. It’s a good idea to check your TV for software updates and apply the WebOS patch now.

Source: Critical bugs in LG TVs could allow complete device takeover

In-app browsers still a privacy, security, and choice issue

[…] Open Web Advocacy (OWA), a group that supports open web standards and fair competition, said in a post on Tuesday that representatives “recently met with both the [EU’s] Digital Markets Act team and the UK’s Market Investigation Reference into Cloud Gaming and Browsers team to discuss how tech giants are subverting users’ choice of default browser via in-app browsers and the harm this causes.”

OWA argues that in-app browsers, without notice or consent, “ignore your choice of default browser and instead automatically and silently replace your default browser with their own in-app browser.”

The group’s goal isn’t to ban the technology, which has legitimate uses. Rather it’s to prevent in-app browsers from being used to thwart competition and flout user choice.

In-app browsers are like standalone web browsers without the interface – they rely on the native app for the interface. They can be embedded in native platform apps to load and render web content within the app, instead of outside the app in the designated default browser.

[…]

The problem with in-app browsers is that they play by a different set of rules from standalone browsers. As noted by OWA in its 62-page submission [PDF] to regulators:

  • They override the user’s choice of default browser
  • They raise tangible security and privacy harms
  • They stop the user from using their ad-blockers and tracker blockers
  • Their default browser’s privacy and security settings are not shared
  • They are typically missing web features
  • They typically have many unique bugs and issues
  • The user’s session state is not shared so they are booted out of websites they have logged into in their default browser
  • They provide little benefit to users
  • They create significant work and often break third-party websites
  • They don’t compete as browsers
  • They confuse users and today function as dark patterns

Since around 2016, software engineers involved in web application development have voiced concerns about in-app browsers at some of the companies using them. But it wasn’t until around 2019, when Google engineer Thomas Steiner published a blog post about Facebook’s use of in-app browsers in its iOS and Android apps, that the privacy and choice impact of in-app browsers began to register with a wider audience.

Steiner observed: “WebViews can also be used for effectively conducting intended man-in-the-middle attacks, since the IAB [in-app browser] developer can arbitrarily inject JavaScript code and also intercept network traffic.” He added: “Most of the time, this feature is used for good.”
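The injection Steiner describes is simple to picture: because the host app sits between the network and the renderer, it can rewrite any page before display. The sketch below is purely illustrative (the script and function names are invented), showing the mechanism rather than any real app's code:

```python
# What the host app controls: every byte of HTML before it is rendered.
SPY_SCRIPT = "<script>/* e.g. forward keystrokes to the host app */</script>"

def render_in_app(fetched_html: str) -> str:
    """An in-app browser fetches the page, then hands it to the renderer.
    Nothing stops it from appending its own script along the way."""
    if "</body>" in fetched_html:
        return fetched_html.replace("</body>", SPY_SCRIPT + "</body>", 1)
    return fetched_html + SPY_SCRIPT

page = "<html><body><h1>Third-party login form</h1></body></html>"
rendered = render_in_app(page)
# The injected script now runs with full access to the page's DOM,
# which is why this amounts to an intended man-in-the-middle position.
```

On real platforms the same effect is achieved through WebView APIs that evaluate JavaScript in the loaded page, rather than string rewriting, but the trust problem is identical.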

[…]

In August 2022, developer Felix Krause published a blog post titled “Instagram and Facebook can track anything you do on any website in their in-app browser.” A week later, he expanded his analysis of in-app browsers to note how TikTok’s iOS app injects JavaScript to subscribe to “every keystroke (text inputs) happening on third party websites rendered inside the TikTok app”.

[…]

Even assuming one accepts Meta’s and TikTok’s claims that they’ve not misused the extraordinary access granted by in-app browsers – a difficult ask in light of allegations raised in ongoing Meta litigation – the issue remains that companies implementing in-app browsers may be overriding the choices of users regarding their browser and whatever extensions they have installed.

However, Meta does provide a way to opt out of having its in-app browser open links clicked in its Facebook and Instagram apps.

[…]

As for the Competition and Markets Authority (CMA), the UK watchdog appears to be willing to consider allowing developer choice to supersede user choice, or at least that was the case two years ago. In its 2022 response to the CMA’s Interim Report, Google observed [PDF] that the competition agency itself had conceded that in an Android native app, the choice of browser belongs to the app developer rather than to Google.

“The Interim Report raises concerns about in-app browsers overriding users’ chosen default browsers,” Google said in its response. “However, as the CMA rightly notes, the decision on whether a native app launches an in-app browser, and if so, which browser, lies with the respective app developer, not Google. Having control over whether or not an in-app browser is launched allows app developers to customize their user interfaces, which can in turn improve the experience for users. There is therefore, to some extent, a trade-off between offering developers choice and offering end users choice.”

Source: In-app browsers still a privacy, security, and choice issue • The Register

However, in-app browsers are a horrible security breach and the choice should belong to the user – not Google, not an app developer.

GitHub’s new AI-powered tool auto-fixes vulnerabilities in your code

GitHub introduced a new AI-powered feature capable of speeding up vulnerability fixes while coding. This feature is in public beta and automatically enabled on all private repositories for GitHub Advanced Security (GHAS) customers.

Known as Code Scanning Autofix and powered by GitHub Copilot and CodeQL, it helps deal with over 90% of alert types in JavaScript, TypeScript, Java, and Python.

After being toggled on, it provides potential fixes that GitHub claims will likely address more than two-thirds of found vulnerabilities while coding with little or no editing.

“When a vulnerability is discovered in a supported language, fix suggestions will include a natural language explanation of the suggested fix, together with a preview of the code suggestion that the developer can accept, edit, or dismiss,” GitHub’s Pierre Tempel and Eric Tooley said.

The code suggestions and explanations it provides can include changes to the current file, multiple files, and the current project’s dependencies.

Implementing this approach can significantly reduce the frequency of vulnerabilities that security teams must handle daily.

This, in turn, enables them to concentrate on ensuring the organization’s security rather than being forced to allocate unnecessary resources to keep up with new security flaws introduced during the development process.

However, it’s also important to note that developers should always verify if the security issues are resolved, as GitHub’s AI-powered feature may suggest fixes that only partially address the security vulnerability or fail to preserve the intended code functionality.

“Code scanning autofix helps organizations slow the growth of this ‘application security debt’ by making it easier for developers to fix vulnerabilities as they code,” added Tempel and Tooley.

“Just as GitHub Copilot relieves developers of tedious and repetitive tasks, code scanning autofix will help development teams reclaim time formerly spent on remediation.”

The company plans to add support for additional languages in the coming months, with C# and Go support coming next.

More details about the GitHub Copilot-powered code scanning autofix tool are available on GitHub’s documentation website.

Last month, the company also enabled push protection by default for all public repositories to stop the accidental exposure of secrets like access tokens and API keys when pushing new code.

This was a significant issue in 2023, as GitHub users accidentally exposed 12.8 million authentication and sensitive secrets via more than 3 million public repositories throughout the year.
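Push protection boils down to pattern-matching candidate secrets in outgoing commits and blocking the push on a hit. A minimal sketch of that idea is below; the patterns are illustrative stand-ins, not GitHub's actual detection rules:

```python
import re

# A few well-known credential shapes (illustrative, far from exhaustive).
SECRET_PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{36}"),                    # GitHub classic PAT
    re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key
]

def find_secrets(diff_text: str) -> list:
    """Return every substring of the outgoing diff matching a known
    secret pattern; a non-empty result would block the push."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(diff_text))
    return hits

diff = "+GITHUB_TOKEN = 'ghp_" + "a" * 36 + "'"
assert len(find_secrets(diff)) == 1  # push would be rejected
```

Real scanners add entropy checks and provider-verified patterns to cut false positives, but the core mechanism is this kind of pre-push scan.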

As BleepingComputer reported, exposed secrets and credentials have been exploited for multiple high-impact breaches [1, 2, 3] in recent years.

Source: GitHub’s new AI-powered tool auto-fixes vulnerabilities in your code