Italy’s Piracy Shield Blocks Innocent Web Sites, Makes It Hard For Them To Appeal – so ISPs are ignoring the law because it’s stupid

Italy’s newly installed Piracy Shield system, put in place by the country’s national telecoms regulator, Autorità per le Garanzie nelle Comunicazioni (Authority for Communications Guarantees, AGCOM), is already failing in significant ways. One issue became evident in February, when the VPN provider AirVPN announced that it would no longer accept users resident in Italy because of the “burdensome” requirements of the new system. Shortly afterwards, TorrentFreak published a story about the system crashing under the weight of requests to block just a few hundred IP addresses. Since there are now around two billion copyright claims being made every year against YouTube material, it’s unlikely that Piracy Shield will be able to cope once takedown requests start ramping up, as they surely will.

That’s a future problem, but something that has already been encountered concerns one of the world’s largest and most important content delivery networks (CDNs), Cloudflare. CDNs have a key function in the Internet’s ecology. They host and deliver digital material to users around the globe, using their large-scale infrastructure to provide this quickly and efficiently on behalf of Web site owners. Blocking CDN addresses is reckless: it risks affecting thousands or even millions of sites, and compromises some of the basic plumbing of the Internet. And yet according to a post on TorrentFreak, that is precisely what Piracy Shield has now done:

Around 16:13 on Saturday [24 February], an IP address within Cloudflare’s AS13335, which currently accounts for 42,243,794 domains according to IPInfo, was targeted for blocking [by Piracy Shield]. Ownership of IP address 188.114.97.7 can be linked to Cloudflare in a few seconds, and double-checked in a few seconds more.

The service that rightsholders wanted to block was not the IP address’s sole user. There’s a significant chance of that being the case whenever Cloudflare IPs enter the equation; blocking this IP always risked taking out the target plus all other sites using it.
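Cloudflare publishes its address ranges, which is why TorrentFreak could link the IP to Cloudflare “in a few seconds.” Below is a minimal sketch of that sanity check, the kind of thing a blocking system could run before listing an IP; it uses only the Python standard library and Cloudflare’s public list at https://www.cloudflare.com/ips-v4:

```python
# Sketch: check an IP against Cloudflare's published IPv4 ranges before blocking it.
# Assumes the public list at https://www.cloudflare.com/ips-v4 (one CIDR per line).
import ipaddress
import urllib.request

def cloudflare_networks():
    with urllib.request.urlopen("https://www.cloudflare.com/ips-v4") as resp:
        return [ipaddress.ip_network(line.strip())
                for line in resp.read().decode().splitlines() if line.strip()]

def is_cloudflare(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in cloudflare_networks())

if __name__ == "__main__":
    # 188.114.97.7 is the address TorrentFreak says Piracy Shield listed.
    print(is_cloudflare("188.114.97.7"))  # True if it falls in a published range
```

A check this cheap would have flagged the order before a single innocent site went dark.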

The TorrentFreak article lists a few of the evidently innocent sites that were indeed blocked by Piracy Shield, and notes:

Around five hours after the blockade was put in place, reports suggest that the order compelling ISPs to block Cloudflare simply vanished from the Piracy Shield system. Details are thin, but there is strong opinion that the deletion may represent a violation of the rules, if not the law.

That lack of transparency about what appears to be a major overblocking is part of a larger problem, which affects those who are wrongfully cut off. As TorrentFreak writes, AGCOM’s “rigorous complaint procedure” for Piracy Shield “effectively doesn’t exist”:

information about blocks that should be published to facilitate correction of blunders, is not being published, also in violation of the regulations.

That matters, because appeals against Piracy Shield’s blocks can only be made within five working days of their publication. As a result, the lack of information about erroneous blocks makes it almost impossible for those affected to appeal in time:

That raises the prospect of a blocked innocent third party having to a) proactively discover that their connectivity has been limited b) isolate the problem to Italy c) discover the existence of AGCOM d) learn Italian and e) find the blocking order relating to them.

No wonder, then, that:

some ISPs, having seen the mess, have decided to unblock some IP addresses without permission from those who initiated the mess, thus contravening the rules themselves.

In other words, not only is the Piracy Shield system wrongly blocking innocent sites, and making it hard for them to appeal against such blocks, but its inability to follow the law correctly is causing ISPs to ignore its rulings, rendering the system pointless.

This combination of incompetence and ineffectiveness brings to mind an earlier failed attempt to stop people sharing unauthorized copies. It’s still early days, but there are already indications that Italy’s Piracy Shield could well turn out to be a copyright fiasco on the same level as France’s Hadopi system, discussed in detail in Walled Culture the book (digital versions available free).

Source: Italy’s Piracy Shield Blocks Innocent Web Sites And Makes It Hard For Them To Appeal | Techdirt

Commercial Bank of Ethiopia glitch lets customers withdraw millions

Ethiopia’s biggest commercial bank is scrambling to recoup large sums of money withdrawn by customers after a “systems glitch”.

The customers discovered early on Saturday that they could take out more cash than they had in their accounts at the Commercial Bank of Ethiopia (CBE).

More than $40m (£31m) was withdrawn or transferred to other banks, local media reported.

It took several hours for the institution to freeze transactions.

Much of the money was withdrawn from state-owned CBE by students, bank president Abe Sano told journalists on Monday.

News of the glitch spread across universities largely via messaging apps and phone calls.

Long lines formed at campus ATMs, with a student in western Ethiopia telling BBC Amharic people were withdrawing money until police officers arrived on campus to stop them.

[…]

Ethiopia’s central bank, which serves as the financial sector’s governing body, released a statement on Sunday saying “a glitch” had occurred during “maintenance and inspection activities”.

The statement, however, focused on the interrupted service that occurred after CBE froze all transactions. It did not mention the money withdrawn by customers.

Mr Sano did not say exactly how much money was withdrawn during Saturday’s incident, but said the loss incurred was small when compared to the bank’s total assets.

He stated that CBE was not hit by a cyber-attack and that customers should not be worried as their personal accounts were intact.

At least three universities have released statements advising students to return any money not belonging to them that they may have taken from CBE.

Anyone returning money will not be charged with a criminal offence, Mr Sano said.

But it’s not clear how successful the bank’s attempts to recoup its money have been so far.

The student from Jimma University said on Monday he had not heard of anyone giving the money back, but said he had seen police vehicles on campus.

[…]

Source: Commercial Bank of Ethiopia glitch lets customers withdraw millions

VPN Demand Surges 234.8% After Adult Site Restriction on Texas-Based Users

VPN demand in Texas skyrocketed by 234.8% on March 15, 2024, after state authorities enacted a law requiring adult sites to verify users’ ages before granting them access to the websites’ content.

Texas’ age verification law was passed in June 2023 and was set to take effect in September of the same year. However, a day before its implementation, a US district judge temporarily blocked enforcement after the Free Speech Coalition (FSC) filed a lawsuit arguing the policy was unconstitutional under the First Amendment.

On March 14, 2024, the US Court of Appeals for the 5th Circuit decreed that Texas could proceed with the law’s enactment.

As a sign of protest, Pornhub, the most visited adult site in the US, blocked IP addresses from Texas — the eighth state to suffer such a ban after its government enforced similar restrictions on adult sites.

[…]

Following the law’s enactment, users in Texas seem to be scrambling for means to access the affected adult sites. vpnMentor’s research team analyzed user demand data and found a 234.8% increase in VPN demand in the state.

[Graph: VPN demand in Texas from March 1 to March 16.]

Past VPN Demand Surges from Adult Site Restrictions

Pornhub has previously blocked IP addresses from Louisiana, Mississippi, Arkansas, Utah, Virginia, North Carolina, and Montana — all of which have enforced age-verification laws that the adult site deemed unjust.

In May 2023, Pornhub’s banning of Utah-based users caused a 967% spike in VPN demand in the state. That same year, the passing of adult-site-related age restriction laws in Louisiana and Mississippi led to a 200% and 72% surge in VPN interest, respectively.

Source: VPN Demand Surges Post Adult Site Restriction on Texas-Based Users

Under New Management detects when your extensions have changed owners

Intermittently checks your installed extensions to see if the developer information listed on the Chrome Web Store or Firefox Addons store has changed. If anything is different, the extension icon will display a red badge, alerting you to the change.

[Screenshot: the extension icon badge shown when an extension has changed owners.]

Why is this needed?

Extension developers are constantly getting offers to buy their extensions. In nearly every case, the people buying these extensions want to rip off the existing users.

The users of these extensions have no idea an installed extension has changed hands, and may now be compromised.

Under New Management gives users notice of the change of ownership, giving them a chance to make an informed decision about the software they’re using.
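The mechanics are simple to sketch. The snippet below shows the general idea against Mozilla’s public add-ons API (a real endpoint; the actual extension also watches the Chrome Web Store, and its internals may well differ from this):

```python
# Sketch of the core idea: snapshot an add-on's listed authors and alert on change.
# Uses Mozilla's public addons API; the real extension also checks the Chrome Web Store.
import json
import urllib.request

AMO = "https://addons.mozilla.org/api/v5/addons/addon/{slug}/"

def authors(slug: str) -> list[str]:
    with urllib.request.urlopen(AMO.format(slug=slug)) as resp:
        data = json.load(resp)
    return sorted(a["name"] for a in data["authors"])

def check(slug: str, snapshot_file: str = "owners.json") -> None:
    try:
        with open(snapshot_file) as f:
            known = json.load(f)
    except FileNotFoundError:
        known = {}
    current = authors(slug)
    if slug in known and known[slug] != current:
        print(f"WARNING: {slug} changed owners: {known[slug]} -> {current}")
    known[slug] = current
    with open(snapshot_file, "w") as f:
        json.dump(known, f, indent=2)

check("under-new-management-v2")  # the add-on can watch itself, too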

Source: Under New Management (Github)

Install for Chrome: https://chromewebstore.google.com/detail/under-new-management/jppepdecgemgbgnjnnfjcmanlleioikj

Install for Firefox: https://addons.mozilla.org/en-US/firefox/addon/under-new-management-v2/

OR

Download a prebuilt release, unpack the .zip file, and load the dist directory into your browser.

How to Prevent X’s Audio and Video Calls Feature From Revealing Your IP Address – wait, it reveals your IP address :O – wait… of course, it’s a Musk thing

[…] X began rolling out the audio and video calling feature, which was previously restricted to paid users, to everyone last week. However, hawk-eyed sleuths quickly noticed that the feature was automatically turned on, meaning that users had to manually go to their settings to turn it off. Only your mutuals or someone you’ve exchanged DMs with can call you by default, but that’s still potentially a lot of people.

Privacy researchers also sounded the alarm on the feature after learning that it revealed users’ IP address during calls. Notably, the option to protect users’ IP addresses is toggled off by default, which frankly makes no sense.

Zach Edwards, an independent privacy researcher, told Gizmodo that an IP address can allow third parties to track down your location and get their hands on other details of your online life.

“In major cities, an IP address can sometimes identify someone’s exact location, but usually it’s just close enough to be creepy. Like a 1 block radius around your house,” Edwards said via X direct messages. However, “sometimes if in a remote/rural location, the IP address 1000% identifies you.”

Law enforcement can use IP addresses to track down illegal behavior, such as child sexual abuse material or pirating online content. Meanwhile, hackers can launch DDoS attacks to take down your internet connection or even steal your data.
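If you want to see what a bare IP gives away, public lookup services make the point quickly. A minimal sketch against ipinfo.io’s free JSON endpoint (unauthenticated and rate-limited; Google’s 8.8.8.8 is used here as a stand-in for a caller’s address):

```python
# Sketch: what a bare IP address gives away, via ipinfo.io's free JSON lookup.
import json
import urllib.request

def ip_details(ip: str) -> dict:
    with urllib.request.urlopen(f"https://ipinfo.io/{ip}/json") as resp:
        return json.load(resp)

info = ip_details("8.8.8.8")  # any caller's IP would do
for key in ("ip", "city", "region", "country", "loc", "org"):
    print(f"{key:8} {info.get(key)}")
```

City, coordinates, and the owning network from one request: that is everything Edwards’ “creepy one-block radius” scenario needs.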

How to turn off audio and video calls on X

Luckily, you can avoid potential IP security nightmares by turning off audio and video calls on X. As you’ll see in the screenshots below, it’s pretty straightforward:

– First, go to Settings and Support. Then click on Settings and Privacy. (If you’re on desktop, click on the More button and then go to Settings and Privacy).

– Next, click on Privacy and Safety. Select Direct Messages from the menu that pops up.

– Toggle off the option that says Enable audio and video calling.

[Screenshot: disabling audio and video calling on X. Credit: Oscar Gonzalez]

And that’s it. Some may not see the Enable audio and video calling option in their settings yet, which means the feature hasn’t been rolled out to them. That doesn’t mean they won’t eventually get it in a future update.

Source: How to Prevent X’s Audio and Video Calls Feature From Revealing Your IP Address

Hackers exploited Windows 0-day for 6 months after Microsoft knew of it

[…]

Even after Microsoft patched the vulnerability last month, the company made no mention that the North Korean threat group Lazarus had been using the vulnerability since at least August to install a stealthy rootkit on vulnerable computers. The vulnerability provided an easy and stealthy means for malware that had already gained administrative system rights to interact with the Windows kernel. Lazarus used the vulnerability for just that. Even so, Microsoft has long said that such admin-to-kernel elevations don’t represent the crossing of a security boundary, a possible explanation for the time Microsoft took to fix the vulnerability.

A rootkit “holy grail”

“When it comes to Windows security, there is a thin line between admin and kernel,” Jan Vojtěšek, a researcher with security firm Avast, explained last week. “Microsoft’s security servicing criteria have long asserted that ‘[a]dministrator-to-kernel is not a security boundary,’ meaning that Microsoft reserves the right to patch admin-to-kernel vulnerabilities at its own discretion. As a result, the Windows security model does not guarantee that it will prevent an admin-level attacker from directly accessing the kernel.”

The Microsoft policy proved to be a boon to Lazarus in installing “FudModule,” a custom rootkit that Avast said was exceptionally stealthy and advanced.

[…]

In years past, Lazarus and other threat groups have reached this last threshold mainly by exploiting third-party system drivers, which by definition already have kernel access. To work with supported versions of Windows, third-party drivers must first be digitally signed by Microsoft to certify that they are trustworthy and meet security requirements. In the event Lazarus or another threat actor has already cleared the admin hurdle and has identified a vulnerability in an approved driver, they can install it and exploit the vulnerability to gain access to the Windows kernel. This technique—known as BYOVD (bring your own vulnerable driver)—comes at a cost, however, because it provides ample opportunity for defenders to detect an attack in progress.
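Part of why BYOVD is detectable is that the vulnerable driver has to load, and loaded drivers are enumerable. Here is a hedged, Windows-only sketch that lists loaded kernel drivers via ctypes so they can be compared against a known-vulnerable-driver catalog (Microsoft publishes one as its recommended driver block rules); it is an inventory aid, not a detector in itself:

```python
# Sketch: enumerate loaded kernel drivers on Windows (via psapi) so the list can
# be compared against known-vulnerable-driver catalogs abused in BYOVD attacks.
# Windows-only; run from an elevated prompt for best results.
import ctypes
from ctypes import wintypes

psapi = ctypes.WinDLL("psapi", use_last_error=True)
psapi.EnumDeviceDrivers.argtypes = [ctypes.POINTER(ctypes.c_void_p),
                                    wintypes.DWORD,
                                    ctypes.POINTER(wintypes.DWORD)]
psapi.GetDeviceDriverBaseNameW.argtypes = [ctypes.c_void_p,
                                           wintypes.LPWSTR,
                                           wintypes.DWORD]

def loaded_drivers() -> list[str]:
    needed = wintypes.DWORD()
    psapi.EnumDeviceDrivers(None, 0, ctypes.byref(needed))   # size query
    count = needed.value // ctypes.sizeof(ctypes.c_void_p)
    bases = (ctypes.c_void_p * count)()
    if not psapi.EnumDeviceDrivers(bases, needed.value, ctypes.byref(needed)):
        raise ctypes.WinError(ctypes.get_last_error())
    names, buf = [], ctypes.create_unicode_buffer(260)
    for base in bases:
        if base and psapi.GetDeviceDriverBaseNameW(base, buf, len(buf)):
            names.append(buf.value)
    return sorted(set(names))

for name in loaded_drivers():
    print(name)
```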

The vulnerability Lazarus exploited, tracked as CVE-2024-21338, offered considerably more stealth than BYOVD because it exploited appid.sys, a driver enabling the Windows AppLocker service, which comes preinstalled in the Microsoft OS. Avast said such vulnerabilities represent the “holy grail,” as compared to BYOVD.

In August, Avast researchers sent Microsoft a description of the zero-day, along with proof-of-concept code that demonstrated what it did when exploited. Microsoft didn’t patch the vulnerability until last month. Even then, the disclosure of the active exploitation of CVE-2024-21338 and the details of the Lazarus rootkit came not from Microsoft when it patched in February, but from Avast 15 days later. A day after that, Microsoft updated its patch bulletin to note the exploitation.

[…]

Source: Hackers exploited Windows 0-day for 6 months after Microsoft knew of it | Ars Technica

VMware sandbox escape bugs are so critical, patches are released for end-of-life products – also, remove all your USB products now

VMware is urging customers to patch critical vulnerabilities that make it possible for hackers to break out of sandbox and hypervisor protections in all versions, including out-of-support ones, of VMware ESXi, Workstation, Fusion, and Cloud Foundation products.

A constellation of four vulnerabilities—two carrying severity ratings of 9.3 out of a possible 10—are serious because they undermine the fundamental purpose of the VMware products, which is to run sensitive operations inside a virtual machine that’s segmented from the host machine. VMware officials said that the prospect of a hypervisor escape warranted an immediate response.

[…]

A VMware advisory included the following matrix showing how the vulnerabilities—tracked as CVE-2024-22252, CVE-2024-22253, CVE-2024-22254, CVE-2024-22255—affect each of the vulnerable products:

Product / Running on / CVE identifiers / CVSSv3 scores / Severity / Fixed version [1] / Workarounds / Additional documentation

– ESXi 8.0 / Any / CVE-2024-22252, CVE-2024-22253, CVE-2024-22254, CVE-2024-22255 / 8.4, 8.4, 7.9, 7.1 / critical / ESXi80U2sb-23305545 / KB96682 / FAQ
– ESXi 8.0 [2] / Any / CVE-2024-22252, CVE-2024-22253, CVE-2024-22254, CVE-2024-22255 / 8.4, 8.4, 7.9, 7.1 / critical / ESXi80U1d-23299997 / KB96682 / FAQ
– ESXi 7.0 / Any / CVE-2024-22252, CVE-2024-22253, CVE-2024-22254, CVE-2024-22255 / 8.4, 8.4, 7.9, 7.1 / critical / ESXi70U3p-23307199 / KB96682 / FAQ
– Workstation 17.x / Any / CVE-2024-22252, CVE-2024-22253, CVE-2024-22255 / 9.3, 9.3, 7.1 / critical / 17.5.1 / KB96682 / None
– Fusion 13.x / macOS / CVE-2024-22252, CVE-2024-22253, CVE-2024-22255 / 9.3, 9.3, 7.1 / critical / 13.5.1 / KB96682 / None

Three of the vulnerabilities affect the USB controller the products use to support peripheral devices such as keyboards and mice. The advisory describes the vulnerabilities as:

CVE-2024-22252: a use-after-free vulnerability in the XHCI USB controller with a maximum severity rating of 9.3 for Workstation/Fusion and a base score of 8.4 for ESXi. Someone with local administrative privileges on a virtual machine can execute code as the virtual machine’s VMX process running on the host. On ESXi, the exploitation is contained within the VMX sandbox, whereas, on Workstation and Fusion, this could lead to code execution on the machine where Workstation or Fusion is installed.

CVE-2024-22253: a use-after-free vulnerability in the UHCI USB controller with a maximum severity rating of 9.3 for Workstation/Fusion and a base score of 8.4 for ESXi. Exploitation requirements and outcomes are the same as for CVE-2024-22252.

CVE-2024-22254: an out-of-bounds write vulnerability with a maximum severity base score of 7.9. This vulnerability makes it possible for someone with privileges within the VMX process to trigger an out-of-bounds write, leading to a sandbox escape.

CVE-2024-22255: an information disclosure vulnerability in the UHCI USB controller with a maximum CVSSv3 base score of 7.1. Someone with administrative access to a virtual machine can exploit it to leak memory from the vmx process.

Broadcom, the VMware parent company, is urging customers to patch vulnerable products. As a workaround, users can remove USB controllers from vulnerable virtual machines, but Broadcom stressed that this measure could degrade virtual console functionality and should be viewed as only a temporary solution.
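For fleets, the interim workaround can be scripted. A hedged sketch using pyVmomi, VMware’s Python SDK, follows; the hostname and credentials are placeholders, some VMs may need to be powered off before reconfiguration, and Broadcom’s warning stands: removing USB controllers can degrade console functionality, so test first.

```python
# Sketch of the published workaround: strip USB controllers from VMs via pyVmomi.
# Host and credentials are placeholders; test on non-production VMs first.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="CHANGE-ME",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    usb_types = (vim.vm.device.VirtualUSBController,
                 vim.vm.device.VirtualUSBXHCIController)
    for vm in view.view:
        controllers = [d for d in vm.config.hardware.device
                       if isinstance(d, usb_types)]
        if not controllers:
            continue
        spec = vim.vm.ConfigSpec(deviceChange=[
            vim.vm.device.VirtualDeviceSpec(
                operation=vim.vm.device.VirtualDeviceSpec.Operation.remove,
                device=d)
            for d in controllers])
        print(f"Removing {len(controllers)} USB controller(s) from {vm.name}")
        vm.ReconfigVM_Task(spec=spec)
finally:
    Disconnect(si)
```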

[…]

Source: VMware sandbox escape bugs are so critical, patches are released for end-of-life products | Ars Technica

Vietnam to collect biometrics – even DNA – for new ID cards. Centralised databases never leak.

The Vietnamese government will begin collecting biometric information from its citizens for identification purposes beginning in July this year.

Prime minister Pham Minh Chinh instructed the nation’s Ministry of Public Security to collect the data in the form of iris scans, voice samples and actual DNA, in accordance with amendments to Vietnam’s Law on Citizen Identification.

The ID cards are issued to anyone over the age of 14 in Vietnam, and are optional for citizens between the ages of 6 and 14, according to a government news report.

Amendments to the Law on Citizen Identification that allow collection of biometrics passed on November 27 of last year.

The law allows recording of blood type among the DNA-related information that will be contained in a national database to be shared across agencies “to perform their functions and tasks.”

The ministry will work with other parts of the government to integrate the identification system into the national database.

As for how the information will be collected, the amendments state:

Biometric information on DNA and voice is collected when voluntarily provided by the people or the agency conducting criminal proceedings or the agency managing the person to whom administrative measures are applied in the process of settling the case according to their functions and duties whether to solicit assessment or collect biometric information on DNA, people’s voices are shared with identity management agencies for updating and adjusting to the identity database.

Vietnam’s future identity cards will incorporate the functions of health insurance cards, social insurance books, driver’s licenses, birth certificates, and marriage certificates, as defined by the amendment.

There are approximately 70 million adults in Vietnam as of 2022, making the collection and safeguarding of such data no small feat.

The Reg is sure the personal information on all those citizens will be just fine – personal data held by governments for ID cards certainly never leaks.

[…]

Source: Vietnam to collect biometrics – even DNA – for new ID cards • The Register

Absolutely idiotic.

Wyze says camera breach let 13,000 customers briefly see into other people’s homes

Last week, co-founder David Crosby said that “so far” the company had identified 14 people who were able to briefly see into a stranger’s property because they were shown an image from someone else’s Wyze camera. Now we’re being told the number of affected customers has ballooned to 13,000.

The revelation came from an email sent to customers entitled “An Important Security Message from Wyze,” in which the company copped to the breach and apologized, while also attempting to lay some of the blame on its web hosting provider AWS.

“The outage originated from our partner AWS and took down Wyze devices for several hours early Friday morning. If you tried to view live cameras or Events during that time, you likely weren’t able to. We’re very sorry for the frustration and confusion this caused.”

The breach, however, occurred as Wyze was attempting to bring its cameras back online. Customers were reporting seeing mysterious images and video footage in their own Events tab. Wyze disabled access to the tab and launched its own investigation.

As it did before, Wyze is chalking up the incident to “a third-party caching client library” that was recently integrated into its system.

This client library received unprecedented load conditions caused by devices coming back online all at once. As a result of increased demand, it mixed up device ID and user ID mapping and connected some data to incorrect accounts.
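Wyze hasn’t published the faulty code, but the bug class it describes, content cached and served by device ID without re-checking which user is asking, is easy to illustrate. A purely hypothetical sketch of that failure mode and its fix:

```python
# Hypothetical illustration of the bug class Wyze describes: a cache keyed only
# by device ID. If the device->user mapping gets scrambled under load, nothing
# binds the served content to the account requesting it.
thumbnail_cache = {}          # device_id -> thumbnail bytes
device_owner = {"cam-1": "alice", "cam-2": "bob"}

def get_thumbnail_buggy(user: str, device_id: str) -> bytes:
    # BUG: no ownership check; whatever the mapping says, the cache serves it.
    return thumbnail_cache[device_id]

def get_thumbnail_fixed(user: str, device_id: str) -> bytes:
    if device_owner.get(device_id) != user:      # bind content to requester
        raise PermissionError(f"{user} does not own {device_id}")
    return thumbnail_cache[device_id]

thumbnail_cache["cam-2"] = b"<bob's front door>"
# After a scrambled reconnect, alice's app asks for "cam-2":
print(get_thumbnail_buggy("alice", "cam-2"))     # leaks bob's thumbnail
# get_thumbnail_fixed("alice", "cam-2")          # raises PermissionError
```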

But it was too late to prevent an estimated 13,000 people from getting an unauthorized peek at thumbnails from strangers’ homes. Wyze says that 1,504 people tapped to enlarge the thumbnail, and that a few of them caught a video that they were able to view. It also claims that all impacted users have been notified of the security breach, and that over 99 percent of all of its customers weren’t affected.

[…]

Source: Wyze says camera breach let 13,000 customers briefly see into other people’s homes – The Verge

Which is why it’s better to store stuff on your own NAS hardware instead of in some vendor’s cloud.

Whoops: ‘Smart’ Livall Helmet Allowed Real Time Surveillance And Location Tracking Of A Million Customers


[…] a company named Livall makes “smart” bike helmets for skiers and cyclists that include features like auto-fall detection, GPS location monitoring, and integrated braking lights. The problem: the company apparently didn’t spend enough time securing its app, allowing pretty much anybody to listen in on and track the precise location data of a million customers in real time.

Livall’s smartphone apps feature group audio chats and location data. The problem: Ken Munro, founder of U.K. cybersecurity testing firm Pen Test Partners, found that the chat groups were secured by a six-digit pin code that was very simple to brute force (via Techcrunch):

“That 6 digit group code simply isn’t random enough. We could brute force all group IDs in a matter of minutes.”
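The arithmetic backs Munro up: six digits is a keyspace of only one million, so even modest request rates exhaust it in minutes. A back-of-the-envelope sketch (the guess rates below are assumptions, not Pen Test Partners’ measured numbers):

```python
# Back-of-the-envelope: how long does a 6-digit group code survive brute force?
keyspace = 10 ** 6                 # 000000 .. 999999
for rate in (100, 1_000, 10_000):  # guesses/second -- assumed, not measured
    minutes = keyspace / rate / 60
    print(f"{rate:>6} guesses/s -> {minutes:7.1f} minutes to cover all codes")

# Enumerating the candidate codes themselves is equally trivial:
codes = (f"{i:06d}" for i in range(keyspace))
print(next(codes), "...", "999999")
```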

Munro also noted that there was nothing to alert a group of cyclists or skiers that someone new had entered the chat, allowing a third party to monitor them in complete silence:

“As soon as one entered a valid group code, one joined the group automatically. There was no further authorisation nor alerts to the other group user. It was therefore trivial to silently join any group, giving us access to any users location and the ability to listen in to any group audio communications.”

Whoops a daisy. As with so many modern “smart” tech companies, Munro also notes that Livall only took their findings seriously once they got a prominent security journalist (Zack Whittaker at Techcrunch) involved to bring attention to the problem. Livall finally fixed the problem, but it’s not entirely clear that would have happened without Whittaker’s involvement.

[…]

Source: Whoops: ‘Smart’ Helmet Allowed Real Time Surveillance And Location Tracking Of A Million Customers | Techdirt

‘World’s biggest casino’ app WinStar exposed customers’ personal data: developer Dexiga didn’t secure the db.

Oklahoma-based WinStar bills itself as the “world’s biggest casino” by square footage. The casino and hotel resort also offers an app, My WinStar, in which guests can access self-service options during their hotel stay, their rewards points and loyalty benefits, and casino winnings.

The app is developed by a Nevada software startup called Dexiga.

The startup left one of its logging databases on the internet without a password, allowing anyone with knowledge of its public IP address to access the WinStar customer data stored within using only their web browser.
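The report doesn’t name the database technology, but “browsable with only a public IP” usually means an unauthenticated HTTP-fronted store such as Elasticsearch or Kibana. A hedged sketch of the self-audit that would have caught this (the port list is an assumption; probe only hosts you own):

```python
# Sketch: check whether one of YOUR hosts answers database/dashboard HTTP
# requests without authentication. Port-to-product mapping is an assumption.
import urllib.request

PORTS = {9200: "Elasticsearch", 5601: "Kibana", 8080: "generic HTTP"}

def probe(host: str) -> None:
    for port, label in PORTS.items():
        url = f"http://{host}:{port}/"
        try:
            with urllib.request.urlopen(url, timeout=3) as resp:
                body = resp.read(200)
            print(f"{url} ({label}) answered without auth: {body[:80]!r}")
        except Exception as exc:
            print(f"{url}: no unauthenticated response ({exc.__class__.__name__})")

probe("my-own-server.example.com")  # placeholder: a host you administer
```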

Dexiga took the database offline after TechCrunch alerted the company to the security lapse.

[…]

the personal data included full names, phone numbers, email addresses and home addresses. [Security researcher Anurag] Sen shared details of the exposed database with TechCrunch to help identify its owner and disclose the security lapse.

TechCrunch examined some of the exposed data and verified Sen’s findings. The database also contained an individual’s gender and the IP address of the user’s device, TechCrunch found.

None of the data was encrypted, though some sensitive data — such as a person’s date of birth — was redacted and replaced with asterisks.

A review of the exposed data by TechCrunch found an internal user account and password associated with Dexiga founder Rajini Jayaseelan.

[…]

Source: ‘World’s biggest casino’ app exposed customers’ personal data | TechCrunch

Canada Moves to Ban the Flipper Zero Over Car Hacking Fears – instead of requiring good security on Cars

On Thursday, following a summit that focused on “the growing challenge of auto theft in Canada,” the country’s Minister of Innovation, Science and Industry posted a statement on X, saying “Criminals have been using sophisticated tools to steal cars…Today, I announced we are banning the importation, sale and use of consumer hacking devices, like flippers, used to commit these crimes.”

In a press release issued on Thursday, the Canadian government confirmed that it will be pursuing “all avenues to ban devices used to steal vehicles by copying the wireless signals for remote keyless entry, such as the Flipper Zero.”

The Flipper, which is technically a penetration testing device, has been controversial due to its ability to hack droves of smart products. Alex Kulagin, the COO of Flipper Devices, said in a statement shared with Gizmodo that the device couldn’t be used to “hijack any car” and that certain circumstances would have to be met for it to happen:

“Flipper Zero can’t be used to hijack any car, specifically the ones produced after the 1990s, since their security systems have rolling codes. Also, it’d require actively blocking the signal from the owner to catch the original signal, which Flipper Zero’s hardware is incapable of doing. Flipper Zero is intended for security testing and development and we have taken necessary precautions to ensure the device can’t be used for nefarious purposes”
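The rolling codes Kulagin mentions work roughly like this: fob and car share a secret and a counter, each button press derives a fresh one-time code, and the car accepts only codes ahead of its counter within a small window, so a replayed capture is dead on arrival. A simplified sketch (real keyfobs use dedicated ciphers such as KeeLoq rather than HMAC; this is illustrative only):

```python
# Simplified rolling-code scheme: fob and car share a secret and a counter;
# the car accepts codes only within a lookahead window and never reuses one.
# Real systems use dedicated ciphers (e.g. KeeLoq); HMAC here is illustrative.
import hashlib
import hmac

SECRET = b"shared-at-manufacture"

def code(counter: int) -> str:
    return hmac.new(SECRET, counter.to_bytes(4, "big"), hashlib.sha256).hexdigest()[:8]

class Car:
    def __init__(self, window: int = 16):
        self.counter = 0
        self.window = window

    def try_unlock(self, received: str) -> bool:
        for c in range(self.counter + 1, self.counter + 1 + self.window):
            if hmac.compare_digest(received, code(c)):
                self.counter = c        # advance; older codes are now dead
                return True
        return False

car, fob_counter = Car(), 0
fob_counter += 1
press = code(fob_counter)
print(car.try_unlock(press))   # True: fresh code
print(car.try_unlock(press))   # False: replaying the captured code fails
```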

[…]

Even if the Flipper isn’t considered a culprit in Canada’s car theft woes, it should be pointed out that hacking modern cars is notoriously easy. Major car manufacturers’ cybersecurity is terrible, and it seems difficult to imagine that banning the Flipper will make any serious dent in their security problems.

[…]

“Dude that’s not the solution. The car company needs to address the security of their products. Sincerely, Cyber security experts everywhere,” one X user, whose bio mentions infosec, posted.

“You can use screwdrivers to steal cars too,” another user posted, sarcastically.

“If you knew anything about technology you would know the flipper and others are just simple ARM processors with basic sensors attached,” said another user. “Nothing ground breaking this will not stop a thing but makes it look like your doing something. The trick of politicians everywhere and it is why people are fed up of you as everything else just crumbles.”

Source: Canada Moves to Ban the Flipper Zero Over Car Hacking Fears

Mercedes-Benz source code exposed by leaving private key online

Mercedes-Benz accidentally exposed a trove of internal data after leaving a private key online that gave “unrestricted access” to the company’s source code, according to the security research firm that discovered it.

Shubham Mittal, co-founder and chief technology officer of RedHunt Labs, alerted TechCrunch to the exposure and asked for help in disclosing to the car maker. The London-based cybersecurity company said it discovered a Mercedes employee’s authentication token in a public GitHub repository during a routine internet scan in January.

[…]

“The GitHub token gave ‘unrestricted’ and ‘unmonitored’ access to the entire source code hosted at the internal GitHub Enterprise Server,” Mittal explained in a report shared by TechCrunch. “The repositories include a large amount of intellectual property… connection strings, cloud access keys, blueprints, design documents, [single sign-on] passwords, API Keys, and other critical internal information.”
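Leaks like this are usually caught by pattern scanning, since many credential formats are recognizable: GitHub personal access tokens start with ghp_, AWS access key IDs with AKIA. A minimal sketch of such a scan (real scanners like gitleaks or trufflehog cover hundreds more patterns, plus git history):

```python
# Minimal secret scan: walk a checkout and flag token-shaped strings.
# Patterns are a small sample (GitHub PATs, AWS access key IDs) -- dedicated
# scanners cover far more, including commits that were later "deleted".
import re
from pathlib import Path

PATTERNS = {
    "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "aws_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan(root: str) -> None:
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                print(f"{path}: possible {name}: {match.group()[:12]}...")

scan(".")  # point at the repository checkout to audit
```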

[…]

Source: How a mistakenly published password exposed Mercedes-Benz source code | TechCrunch

Let’s hope that others have found this and can use it to jailbreak the cars so that you can get what you paid for when you bought the machine, such as better EV performance and faster acceleration.

Dutch COVID-19 testing firm CoronaLab exposed 1.3 million patient records

A password-less database containing an estimated 1.3 million sets of Dutch COVID-19 testing records was left exposed to the open internet, and it’s not clear if anyone is taking responsibility.

Among the information revealed in the publicly accessible and seemingly insecurely configured database were 118,441 coronavirus test certificates, 506,663 appointment records, 660,173 testing samples and “a small number” of internal files. A bevy of personally identifiable information was included in the records – including patient names, dates of birth, passport numbers, email addresses, and other information.

The leaky database was discovered by perennial breach sniffer Jeremiah Fowler, who reckoned it belongs to one of the Netherlands’ largest commercial COVID-19 test providers, CoronaLab – a subsidiary of Amsterdam-based Microbe & Lab. The US Embassy in the Netherlands lists CoronaLab as one of its recommended commercial COVID-19 test providers in the country.

If someone with malicious intent managed to find the database, they could do some serious damage, Fowler warned.

“Criminal[s] could potentially reference test dates, locations, or other insider information that only the patient and the laboratory would know,” he wrote. “Any potential exposure involving COVID test data combined with PII could potentially compromise the personal and medical privacy of the individuals listed in the documents.”

Will the responsible party please stand up?

The CoronaLab data exposure report reads in many ways like any other accidental data exposure news: It was found, and now the offending database is offline. But this one isn’t that simple.

According to Fowler, no-one at CoronaLab or Microbe & Lab ever responded to his repeated attempts to reach out and inform them of the exposure.

“I sent multiple responsible disclosure notices and did not receive any reply, and several phone calls also yielded no results,” Fowler claimed. “The database remained open for nearly three weeks before I contacted the cloud hosting provider and it was finally secured from public access.”

The Register has contacted Microbe & Lab for more information about the incident – and we haven’t heard back either.

Without more information from Microbe & Lab or CoronaLab itself, it’s impossible to know how long the database was actually exposed online. The CoronaLab website is down as of this writing – it’s not clear if the outage is related to the database exposure, or if the service will be brought back online.

Because no-one at the organization whose records were exposed can be reached, it’s also not clear if customers or patients are aware that their data was exposed online. Nor, importantly, do we know if European data protection authorities have been informed.

Per article 33 of the EU General Data Protection Regulation (GDPR), data breaches must be reported to local officials within 72 hours of detection, and notifications also have to be made to affected individuals. We reached out to the Dutch Data Protection Authority to learn if it had been notified of the CoronaLab data exposure, and didn’t immediately hear back.

Source: COVID-19 testing firm ‘exposed 1.3 million patient records’ • The Register

All Apples wide open for 4 years: Kaspersky and many others in Moscow had photos, location, mic, etc. exposed just by being sent an iMessage. Shows how dangerous closed source is.

[…]

after about 12 months of intensive investigation. Besides how the attackers learned of the hardware feature, the researchers still don’t know what, precisely, its purpose is. Also unknown is if the feature is a native part of the iPhone or enabled by a third-party hardware component such as ARM’s CoreSight.

The mass backdooring campaign, which according to Russian officials also infected the iPhones of thousands of people working inside diplomatic missions and embassies in Russia, came to light in June. Over a span of at least four years, Kaspersky said, the infections were delivered in iMessage texts that installed malware through a complex exploit chain without requiring the receiver to take any action.

With that, the devices were infected with full-featured spyware that, among other things, transmitted microphone recordings, photos, geolocation, and other sensitive data to attacker-controlled servers. Although infections didn’t survive a reboot, the unknown attackers kept their campaign alive simply by sending devices a new malicious iMessage text shortly after devices were restarted.

A fresh infusion of details disclosed Wednesday said that “Triangulation”—the name Kaspersky gave to both the malware and the campaign that installed it—exploited four critical zero-day vulnerabilities, meaning serious programming flaws that were known to the attackers before they were known to Apple. The company has since patched all four of the vulnerabilities, which are tracked as CVE-2023-32434, CVE-2023-32435, CVE-2023-38606, and CVE-2023-41990.

Besides affecting iPhones, these critical zero-days and the secret hardware function resided in Macs, iPods, iPads, Apple TVs, and Apple Watches, and the exploits Kaspersky recovered were intentionally developed to work on those devices too. Apple has patched those platforms as well. Apple declined to comment for this article.

[…]

“This is no ordinary vulnerability,” Larin said in a press release that coincided with a presentation he made at the 37th Chaos Communication Congress in Hamburg, Germany. “Due to the closed nature of the iOS ecosystem, the discovery process was both challenging and time-consuming, requiring a comprehensive understanding of both hardware and software architectures. What this discovery teaches us once again is that even advanced hardware-based protections can be rendered ineffective in the face of a sophisticated attacker, particularly when there are hardware features allowing to bypass these protections.”

In a research paper also published Wednesday, Larin added:

If we try to describe this feature and how attackers use it, it all comes down to this: attackers are able to write the desired data to the desired physical address with [the] bypass of [a] hardware-based memory protection by writing the data, destination address and hash of data to unknown, not used by the firmware, hardware registers of the chip.

Our guess is that this unknown hardware feature was most likely intended to be used for debugging or testing purposes by Apple engineers or the factory, or was included by mistake. Since this feature is not used by the firmware, we have no idea how attackers would know how to use it.

On the same day last June that Kaspersky first disclosed Operation Triangulation had infected the iPhones of its employees, officials with the Russian National Coordination Center for Computer Incidents said the attacks were part of a broader campaign by the US National Security Agency that infected several thousand iPhones belonging to people inside diplomatic missions and embassies in Russia, specifically from those representing NATO countries, post-Soviet nations, Israel, and China. A separate alert from the FSB, Russia’s Federal Security Service, alleged Apple cooperated with the NSA in the campaign. An Apple representative has denied the claim. Kaspersky researchers, meanwhile, have said they have no evidence corroborating the claim of involvement by either the NSA or Apple.

[…]

Kaspersky’s summary of the exploit chain is:

  • Attackers send a malicious iMessage attachment, which is processed by the application without showing any signs to the user
  • This attachment exploits vulnerability CVE-2023-41990 in the undocumented, Apple-only TrueType font instruction ADJUST for remote code execution. This instruction had existed since the early ’90s, and the patch removed it.
  • It uses return/jump oriented programming, multiple stages written in NSExpression/NSPredicate query language, patching JavaScriptCore library environment to execute a privilege escalation exploit written in JavaScript.
  • This JavaScript exploit is obfuscated to make it completely unreadable and to minimize its size. Still it has around 11000 lines of code which are mainly dedicated to JavaScriptCore and kernel memory parsing and manipulation.
  • It exploited JavaScriptCore’s debugging feature DollarVM ($vm) to get the ability to manipulate JavaScriptCore’s memory from the script and execute native API functions.
  • It was designed to support old and new iPhones and included a Pointer Authentication Code (PAC) bypass for exploitation of newer models.
  • It used an integer overflow vulnerability CVE-2023-32434 in the XNU’s memory mapping syscalls (mach_make_memory_entry and vm_map) to get read/write access to [the] whole physical memory of the device from the user level.
  • It uses hardware memory-mapped I/O (MMIO) registers to bypass Page Protection Layer (PPL). This was mitigated as CVE-2023-38606.
  • After exploiting all the vulnerabilities, the JavaScript exploit can do whatever it wants to the device and run spyware, but attackers chose to: a) launch the imagent process and inject a payload that cleans the exploitation artifacts from the device; b) run the Safari process in invisible mode and forward it to the web page with the next stage.
  • The web page has a script that verifies the victim and, if the checks pass, receives the next stage—the Safari exploit.
  • Safari exploit uses vulnerability CVE-2023-32435 to execute a shellcode.
  • Shellcode executes another kernel exploit in the form of a Mach object file. It uses the same vulnerabilities, CVE-2023-32434 and CVE-2023-38606; it is also massive in size and functionality, but completely different from the kernel exploit written in JavaScript. Only some parts related to exploitation of the above-mentioned vulnerabilities are the same. Still, most of its code is also dedicated to the parsing and manipulation of the kernel memory. It has various post-exploitation utilities, which are mostly unused.
  • Exploit gets root privileges and proceeds to execute other stages responsible for loading of spyware. We already covered these stages in our previous posts.

Wednesday’s presentation, titled What You Get When You Attack iPhones of Researchers, is a further reminder that even in the face of innovative defenses like the one protecting the iPhone kernel, ever more sophisticated attacks continue to find ways to defeat them.

[…]

Source: 4-year campaign backdoored iPhones using possibly the most advanced exploit ever | Ars Technica

It also shows that closed source software is an immense security threat – even with the threat exposed it’s almost impossible to find out what happened and how to fix it – especially without the help of the manufacturer

1M non-profit donors’ PII exposed by unsecured DonorView database

Close to a million records containing personally identifiable information belonging to donors who sent money to non-profits were found exposed in an online database.

The database is owned and operated by DonorView – provider of a cloud-based fundraising platform used by schools, charities, religious institutions, and other groups focused on charitable or philanthropic goals.

Infosec researcher Jeremiah Fowler found 948,029 records exposed online including donor names, addresses, phone numbers, emails, payment methods, and more.

Manual analysis of the data revealed what appeared to be the names and addresses of individuals designated as children – though it wasn’t clear to the researcher whether these children were associated with the organization collecting the donation or the funds’ recipients.

Another document seen by Fowler revealed children’s names, medical conditions, names of their attending doctors, and information on whether the child’s image could be used in marketing materials – though in many cases this was not permitted.

In just a single document, more than 70,000 names and contact details were exposed, all believed to be donors to non-profits.

Neither Fowler nor The Register has received a response from the US-based service provider, though Fowler said it did secure the database within days of him filing a disclosure report.

Source: DonorView exposes 1M records for unknown time frame • The Register

Bad genes: 23andMe leak highlights a possible future of genetic discrimination

23andMe is a terrific concept. In essence, the company takes a sample of your DNA and tells you about your genetic makeup. For some of us, this is the only way to learn about our heritage. Spotty records, diaspora, mistaken family lore and slavery can make tracing one’s roots incredibly difficult by traditional methods.

What 23andMe does is wonderful because your DNA is fixed. Your genes tell a story that supersedes any rumors that you come from a particular country or are descended from so-and-so.

[…]

You can replace your Social Security number, albeit with some hassle, if it is ever compromised. You can cancel your credit card with the click of a button if it is stolen. But your DNA cannot be returned for a new set — you just have what you are given. If bad actors steal or sell your genetic information, there is nothing you can do about it.

This is why 23andMe’s Oct. 6 data leak, although it reads like science fiction, is not an omen of some dark future. It is, rather, an emblem of our dangerous present.

23andMe has a very simple interface with some interesting features. “DNA Relatives” matches you with other members to whom you are related. This could be an effective, thoroughly modern way to connect with long-lost family, or to learn more about your origins.

But the Oct. 6 leak perverted this feature into something alarming. By gaining access to individual accounts through weak and recycled passwords, hackers were able to create an extensive list of people with Ashkenazi heritage. This list was then posted on forums with the names, sex and likely heritage of each member under the title “Ashkenazi DNA Data of Celebrities.”
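The entry point, in other words, was credential stuffing: passwords already circulating in breach dumps. Checking a password against Have I Been Pwned’s Pwned Passwords range API is cheap and privacy-preserving; under its k-anonymity model, only the first five hex characters of the password’s SHA-1 ever leave your machine. A minimal sketch:

```python
# Check a password against Have I Been Pwned's Pwned Passwords range API.
# k-anonymity: only the first 5 hex chars of the SHA-1 are sent over the wire.
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "pwned-check-sketch"})
    with urllib.request.urlopen(req) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(times_pwned("password123"))  # reused passwords like this score huge counts
```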

First and foremost, collecting lists of people based on their ethnic backgrounds is a personal violation with tremendously insidious undertones. If you saw yourself and your extended family on such a list, you would not take it lightly.

[…]

I find it troubling because, in 2018, Time reported that 23andMe had sold a $300 million stake in its business to GlaxoSmithKline, allowing the pharmaceutical giant to use users’ genetic data to develop new drugs. So because you wanted to know if your grandmother was telling the truth about your roots, you spat into a cup and paid 23andMe to give your DNA to a drug company to do with it as they please.

Although 23andMe is in the crosshairs of this particular leak, there are many companies in murky waters. Last year, Consumer Reports found that 23andMe and its competitors had decent privacy policies where DNA was involved, but that these businesses “over-collect personal information about you and overshare some of your data with third parties…CR’s privacy experts say it’s unclear why collecting and then sharing much of this data is necessary to provide you the services they offer.”

[…]

As it stands, your DNA can be weaponized against you by law enforcement, insurance companies, and big pharma. But this will not be limited to you. Your DNA belongs to your whole family.

Pretend that you are going up against one other candidate for a senior role at a giant corporation. If one of these genealogy companies determines that you are at an outsized risk for a debilitating disease like Parkinson’s and your rival is not, do you think that this corporation won’t take that into account?

[…]

Insurance companies are not in the business of losing money either. If they gain access to something like that on your record, you can trust that they will use it to blackball you or jack up your rates.

In short, the world risks becoming like that of the film Gattaca, where the genetic elite enjoy access while those deemed genetically inferior are marginalized.

The train has left the station for a lot of these issues. The genie from the 23andMe leak cannot be put back in the bottle. If your DNA is on a server for one of these companies, there is a chance that it has already been used as a reference or to help pharmaceutical companies.

[…]

There are things you can do now to avoid further damage. The next time a company asks for something like your phone number or SSN, press them as to why they need it. Make it inconvenient for them to mine you for your Personal Identifiable Information (PII). Your PII has concrete value to these places, and they count on people to be passive, to hand it over without any fuss.

[…]

The time to start worrying about this problem was 20 years ago, but we can still effect positive change today. This 23andMe leak is only the beginning; we must do everything possible to protect our identities and DNA while they still belong to us.

Source: Bad genes: 23andMe leak highlights a possible future of genetic discrimination | The Hill

Scientific American has been warning about this since at least 2013. What have we done? Nothing:

If there’s a gene for hubris, the 23andMe crew has certainly got it. Last Friday the U.S. Food and Drug Administration (FDA) ordered the genetic-testing company immediately to stop selling its flagship product, its $99 “Personal Genome Service” kit. In response, the company cooed that its “relationship with the FDA is extremely important to us” and continued hawking its wares as if nothing had happened. Although the agency is right to sound a warning about 23andMe, it’s doing so for the wrong reasons.

Since late 2007, 23andMe has been known for offering cut-rate genetic testing. Spit in a vial, send it in, and the company will look at thousands of regions in your DNA that are known to vary from human to human—and which are responsible for some of our traits

[…]

Everything seemed rosy until, in what a veteran Forbes reporter calls “the single dumbest regulatory strategy [he had] seen in 13 years of covering the Food and Drug Administration,” 23andMe changed its strategy. It apparently blew through its FDA deadlines, effectively annulling the clearance process, and abruptly cut off contact with the agency in May. Adding insult to injury the company started an aggressive advertising campaign (“Know more about your health!”)

[…]

But as the FDA frets about the accuracy of 23andMe’s tests, it is missing their true function, and consequently the agency has no clue about the real dangers they pose. The Personal Genome Service isn’t primarily intended to be a medical device. It is a mechanism meant to be a front end for a massive information-gathering operation against an unwitting public.

Sound paranoid? Consider the case of Google. (One of the founders of 23andMe, Anne Wojcicki, is presently married to Sergey Brin, a co-founder of Google.) When it first launched, Google billed itself as a faithful servant of the consumer, a company devoted only to building the best tool to help us satisfy our cravings for information on the web. And Google’s search engine did just that. But as we now know, the fundamental purpose of the company wasn’t to help us search, but to hoard information. Every search query entered into its computers is stored indefinitely. Joined with information gleaned from cookies that Google plants in our browsers, along with personally identifiable data that dribbles from our computer hardware and from our networks, and with the amazing volumes of information that we always seem willing to share with perfect strangers—even corporate ones—that data store has become Google’s real asset

[…]

23andMe reserves the right to use your personal information—including your genome—to inform you about events and to try to sell you products and services. There is a much more lucrative market waiting in the wings, too. One could easily imagine how insurance companies and pharmaceutical firms might be interested in getting their hands on your genetic information, the better to sell you products (or deny them to you).

[…]

Even though 23andMe currently asks permission to use your genetic information for scientific research, the company has explicitly stated that its database-sifting scientific work “does not constitute research on human subjects,” meaning that it is not subject to the rules and regulations that are supposed to protect experimental subjects’ privacy and welfare.

Those of us who have not volunteered to be a part of the grand experiment have even less protection. Even if 23andMe keeps your genome confidential against hackers, corporate takeovers, and the temptations of filthy lucre forever and ever, there is plenty of evidence that there is no such thing as an “anonymous” genome anymore. It is possible to use the internet to identify the owner of a snippet of genetic information and it is getting easier day by day.

This becomes a particularly acute problem once you realize that every one of your relatives who spits in a 23andMe vial is giving the company a not-inconsiderable bit of your own genetic information along with their own. If you have several close relatives who are already in 23andMe’s database, the company already essentially has all that it needs to know about you.

[…]

Source: 23andMe Is Terrifying, but Not for the Reasons the FDA Thinks

Your mobile password manager might be exposing your credentials because of WebView

A number of popular mobile password managers are inadvertently spilling user credentials due to a vulnerability in the autofill functionality of Android apps.

The vulnerability, dubbed “AutoSpill,” can expose users’ saved credentials from mobile password managers by circumventing Android’s secure autofill mechanism, according to university researchers at the IIIT Hyderabad, who discovered the vulnerability and presented their research at Black Hat Europe this week.

The researchers, Ankit Gangwal, Shubham Singh and Abhijeet Srivastava, found that when an Android app loads a login page in WebView, password managers can get “disoriented” about where they should target the user’s login information and instead expose their credentials to the underlying app’s native fields, they said. This is because WebView, the preinstalled engine from Google, lets developers display web content in-app without launching a web browser, and an autofill request is generated.

[…]

“When the password manager is invoked to autofill the credentials, ideally, it should autofill only into the Google or Facebook page that has been loaded. But we found that the autofill operation could accidentally expose the credentials to the base app.”

Gangwal notes that the ramifications of this vulnerability, particularly in a scenario where the base app is malicious, are significant. He added: “Even without phishing, any malicious app that asks you to log in via another site, like Google or Facebook, can automatically access sensitive information.”

The researchers tested the AutoSpill vulnerability using some of the most popular password managers, including 1Password, LastPass, Keeper and Enpass, on new and up-to-date Android devices. They found that most apps were vulnerable to credential leakage, even with JavaScript injection disabled. When JavaScript injection was enabled, all of the password managers were susceptible to the AutoSpill vulnerability.

[…]

Source: Your mobile password manager might be exposing your credentials | TechCrunch

It’s pretty well known that you shouldn’t use in-app browsers anyway (see PSA: Stop Using In-App Browsers Now), but I am not sure how you would avoid using WebView in this case.

Attacks abuse Microsoft DHCP to spoof DNS records and steal secrets

A series of attacks against Microsoft Active Directory domains could allow miscreants to spoof DNS records, compromise Active Directory and steal all the secrets it stores, according to Akamai security researchers.

We’re told the attacks – which are usable against the default configuration of Microsoft Dynamic Host Configuration Protocol (DHCP) servers – don’t require any credentials.

Akamai says it reported the issues to Redmond, which isn’t planning to fix the issue. Microsoft did not respond to The Register‘s inquiries.

The good news, according to Akamai, is that it hasn’t yet seen a server under this type of attack. The bad news: the firm’s flaw finders also told us that massive numbers of organizations are likely vulnerable, considering 40 percent of the “thousands” of networks that Akamai monitors are running Microsoft DHCP in the vulnerable configuration.

In addition to detailing the security issue, the cloud services biz also provided a tool that sysadmins can use to detect configurations that are at risk.

While the current report doesn’t provide technical details or proof-of-concept exploits, Akamai has promised, in the near future, to publish code that implements these attacks, called DDSpoof – short for DHCP DNS Spoof.

“We will show how unauthenticated attackers can collect necessary data from DHCP servers, identify vulnerable DNS records, overwrite them, and use that ability to compromise AD domains,” Akamai security researcher Ori David said.

The DHCP attack research builds on earlier work by NetSPI’s Kevin Robertson, who detailed ways to exploit flaws in DNS zones.

[…]

In addition to creating non-existent DNS records, unauthenticated attackers can also use the DHCP server to overwrite existing data, including DNS records inside the ADI zone in instances where the DHCP server is installed on a domain controller, which David says is the case in 57 percent of the networks Akamai monitors.

“All these domains are vulnerable by default,” he wrote. “Although this risk was acknowledged by Microsoft in their documentation, we believe that the awareness of this misconfiguration is not in accordance with its potential impact.”

[…]

we’re still waiting to hear from Microsoft about all of these issues and will update this story if and when we do. But in the meantime, we’d suggest following Akamai’s advice: disable DHCP DNS Dynamic Updates if you haven’t already, and avoid DNSUpdateProxy altogether.

“Use the same DNS credential across all your DHCP servers instead,” is the advice.
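A related check you can run yourself: the underlying exposure is DNS zones that accept unauthenticated dynamic updates, and dnspython can test for that directly. If an unsigned update succeeds, anyone on the network can rewrite records. Zone, server, and record names below are placeholders; run this only against infrastructure you own:

```python
# Test whether a DNS zone accepts UNAUTHENTICATED dynamic updates (dnspython).
# Run only against your own infrastructure; names/addresses are placeholders.
import dns.query
import dns.rcode
import dns.update

ZONE = "corp.example.com"
DNS_SERVER = "10.0.0.10"          # your AD-integrated DNS server

update = dns.update.Update(ZONE)  # note: no TSIG/GSS signature attached
update.replace("autospoof-test", 300, "A", "10.9.9.9")

response = dns.query.tcp(update, DNS_SERVER, timeout=5)
if response.rcode() == dns.rcode.NOERROR:
    print("Zone accepted an unsigned update -- records are spoofable.")
else:
    print(f"Update refused: {dns.rcode.to_text(response.rcode())}")
```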

Source: Attacks abuse Microsoft DHCP to spoof DNS records and steal secrets • The Register

Nearly Every Windows and Linux Device Vulnerable To New LogoFAIL Firmware Attack

“Researchers have identified a large number of bugs to do with the processing of images at boot time,” writes longtime Slashdot reader jd. “This allows malicious code to be installed undetectably (since the image doesn’t have to pass any validation checks) by appending it to the image. None of the current secure boot mechanisms are capable of blocking the attack.” Ars Technica reports: LogoFAIL is a constellation of two dozen newly discovered vulnerabilities that have lurked for years, if not decades, in Unified Extensible Firmware Interfaces responsible for booting modern devices that run Windows or Linux. The vulnerabilities are the product of almost a year’s worth of work by Binarly, a firm that helps customers identify and secure vulnerable firmware. The vulnerabilities are the subject of a coordinated mass disclosure released Wednesday. The participating companies comprise nearly the entirety of the x64 and ARM CPU ecosystem, starting with UEFI suppliers AMI, Insyde, and Phoenix (sometimes still called IBVs or independent BIOS vendors); device manufacturers such as Lenovo, Dell, and HP; and the makers of the CPUs that go inside the devices, usually Intel, AMD or designers of ARM CPUs. The researchers unveiled the attack on Wednesday at the Black Hat Security Conference in London.

As its name suggests, LogoFAIL involves logos, specifically those of the hardware seller that are displayed on the device screen early in the boot process, while the UEFI is still running. Image parsers in UEFIs from all three major IBVs are riddled with roughly a dozen critical vulnerabilities that have gone unnoticed until now. By replacing the legitimate logo images with identical-looking ones that have been specially crafted to exploit these bugs, LogoFAIL makes it possible to execute malicious code at the most sensitive stage of the boot process, which is known as DXE, short for Driver Execution Environment. “Once arbitrary code execution is achieved during the DXE phase, it’s game over for platform security,” researchers from Binarly, the security firm that discovered the vulnerabilities, wrote in a whitepaper. “From this stage, we have full control over the memory and the disk of the target device, thus including the operating system that will be started.” From there, LogoFAIL can deliver a second-stage payload that drops an executable onto the hard drive before the main OS has even started. The following video demonstrates a proof-of-concept exploit created by the researchers. The infected device — a Gen 2 Lenovo ThinkCentre M70s running an 11th-Gen Intel Core with a UEFI released in June — runs standard firmware defenses, including Secure Boot and Intel Boot Guard.
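Binarly hasn’t published the vendors’ parser code, but the bug class is a familiar one: a boot-time image parser trusting attacker-controlled header fields. The Python below is purely illustrative of that pattern – it is not code from AMI, Insyde or Phoenix – and shows the kind of sanity checks whose absence turns a logo swap into code execution.

```python
# Illustrative only -- not any vendor's UEFI code. Shows the header checks
# whose absence lets a crafted boot logo corrupt memory during DXE.
import struct

def parse_bmp_header(data: bytes) -> dict:
    if len(data) < 54 or data[:2] != b"BM":
        raise ValueError("not a BMP")
    width, height = struct.unpack_from("<ii", data, 18)   # attacker-controlled
    (bpp,) = struct.unpack_from("<H", data, 28)           # attacker-controlled

    # A naive C parser would now allocate width * height * bpp / 8 bytes.
    # With hostile header fields that product overflows a fixed-width integer,
    # yielding an undersized buffer that the pixel-copy loop then overruns.
    if width <= 0 or height <= 0 or bpp not in (1, 4, 8, 24, 32):
        raise ValueError("implausible header fields")
    needed = width * height * bpp // 8
    if needed > 8 * 1024 * 1024:   # arbitrary sanity cap for a boot logo
        raise ValueError("too large for a boot logo")
    return {"width": width, "height": height, "bpp": bpp, "pixel_bytes": needed}
```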
LogoFAIL vulnerabilities are tracked under the following designations: CVE-2023-5058, CVE-2023-39538, CVE-2023-39539, and CVE-2023-40238. However, this list is currently incomplete.

“A non-exhaustive list of companies releasing advisories includes AMI (PDF), Insyde, Phoenix, and Lenovo,” reports Ars. “People who want to know if a specific device is vulnerable should check with the manufacturer.”

“The best way to prevent LogoFAIL attacks is to install the UEFI security updates that are being released as part of Wednesday’s coordinated disclosure process. Those patches will be distributed by the manufacturer of the device or the motherboard running inside the device. It’s also a good idea, when possible, to configure UEFIs to use multiple layers of defenses. Besides Secure Boot, this includes both Intel Boot Guard and, when available, Intel BIOS Guard. There are similar additional defenses available for devices running AMD or ARM CPUs.”

3 Vulns expose ownCloud admin passwords, sensitive data

ownCloud has disclosed three critical vulnerabilities, the most serious of which leads to sensitive data exposure and carries a maximum severity score.

The open source file-sharing software company said containerized deployments of ownCloud could expose admin passwords, mail server credentials, and license keys.

Tracked as CVE-2023-49103, the vulnerability carries a maximum severity rating of 10 on the CVSS v3 scale and affects the graphapi app, versions 0.2.0 to 0.3.0.

The app relies on a third-party library that provides a URL which, when followed, reveals the PHP environment’s configuration details, giving an attacker access to sensitive data.

Not only could an intruder access admin passwords when ownCloud is deployed in containers, but the same PHP environment exposes other potentially valuable configuration details too, ownCloud said in its advisory – so even if the software isn’t running in a container, the recommended fixes should still be applied.

To fix the vulnerability, customers should delete the following file: owncloud/apps/graphapi/vendor/microsoft/microsoft-graph/tests/GetPhpInfo.php.
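If you run ownCloud and want to confirm the file really is gone, a request like the sketch below will tell you. The base URL is a placeholder, the second path – with a static-asset suffix appended to dodge routing rules – is a variant public scanners have reportedly used, and you should only point this at servers you administer.

```python
# Hedged reachability check for the CVE-2023-49103 phpinfo file.
# BASE is a placeholder; run only against instances you own.
import requests

BASE = "https://cloud.example.com"
PATHS = [
    "/apps/graphapi/vendor/microsoft/microsoft-graph/tests/GetPhpInfo.php",
    # variant reportedly used by public scanners to bypass routing rules:
    "/apps/graphapi/vendor/microsoft/microsoft-graph/tests/GetPhpInfo.php/.css",
]

for path in PATHS:
    r = requests.get(BASE + path, timeout=10)
    exposed = r.status_code == 200 and "phpinfo" in r.text.lower()
    print(f"{path}: {'EXPOSED' if exposed else 'not reachable'}")
```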

Customers are also advised to change their secrets in case they’ve been accessed. These include ownCloud admin passwords, mail server credentials, database credentials, and Object-Store/S3 access-keys.

In a library update, ownCloud said it disabled the phpinfo function in its Docker containers and “will apply various hardenings in future core releases to mitigate similar vulnerabilities.”

The second vulnerability also carries a near-maximum severity score of 9.8: an authentication bypass flaw that allows attackers to access, modify, or delete any file without authentication.

Tracked as CVE-2023-49105, a successful exploit requires only that the attacker knows the target’s username and that the target has no signing-key configured – the default setting in ownCloud.

Exploits work here because pre-signed URLs are accepted when no signing-key is configured for the owner of the files.

The affected core versions are 10.6.0 to 10.13.0. To mitigate the issue, users are advised to deny the use of pre-signed URLs where no signing-key is configured.
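ownCloud hasn’t published the faulty validation logic, but the general shape of pre-signed URLs makes the failure mode easy to see. In the hedged sketch below (generic, not ownCloud’s code), the URL carries an HMAC over its contents keyed by a per-user signing key; if the verifier treats a missing key as “skip the check” instead of failing closed, every guessed URL validates.

```python
# Generic pre-signed URL sketch -- not ownCloud's actual implementation.
import hashlib
import hmac
import time

def sign_url(url: str, key: bytes, expires_in: int = 3600) -> str:
    expiry = int(time.time()) + expires_in
    sig = hmac.new(key, f"{url}|{expiry}".encode(), hashlib.sha256).hexdigest()
    return f"{url}?expires={expiry}&signature={sig}"

def verify(url: str, expiry: int, sig: str, key: bytes | None) -> bool:
    if key is None:
        # The CVE-2023-49105 failure mode: accepting pre-signed URLs when no
        # signing-key is configured. The only safe answer here is False.
        return False
    if expiry < time.time():
        return False
    expected = hmac.new(key, f"{url}|{expiry}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```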

The final vulnerability was assigned a severity score of 9 by ownCloud, a “critical” categorization, but the National Vulnerability Database has reduced this to 8.7 – a less-severe “high” classification.

It’s a subdomain validation bypass issue that affects all versions of the oauth2 library up to and including 0.6.1 when “Allow Subdomains” is enabled.

“Within the oauth2 app, an attacker is able to pass in a specially crafted redirect-url which bypasses the validation code and thus allows the attacker to redirect callbacks to a TLD controlled by the attacker,” read ownCloud’s advisory.
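The advisory doesn’t spell out the broken check, but a classic way this class of bug arises is naive suffix matching on the redirect host, as in this illustrative sketch (not the oauth2 app’s actual code):

```python
# Illustrative subdomain-validation bug. Naive suffix matching accepts
# lookalike domains; matching on label boundaries does not.
from urllib.parse import urlparse

REGISTERED = "example.com"  # redirect domain registered for the OAuth client

def naive_allows(redirect_url: str) -> bool:
    host = urlparse(redirect_url).hostname or ""
    return host.endswith(REGISTERED)               # BUG: evilexample.com passes

def strict_allows(redirect_url: str) -> bool:
    host = urlparse(redirect_url).hostname or ""
    return host == REGISTERED or host.endswith("." + REGISTERED)

print(naive_allows("https://evilexample.com/cb"))   # True  -- bypass
print(strict_allows("https://evilexample.com/cb"))  # False
```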

Source: Vulns expose ownCloud admin passwords, sensitive data • The Register

Windows users report appearance of unwanted HP app – shows you how secure automatic updating is (with no real information about what is in the updates)

Windows users are reporting that Hewlett Packard’s HP Smart application is appearing on their systems, despite them not having any of the manufacturer’s hardware attached.

While Microsoft has remained tight-lipped on what is happening, folks on various social media platforms noted the app’s appearance, which seems to afflict both Windows 10 and Windows 11.

The Windows Update mechanism is used to deploy third-party applications and drivers as well as Microsoft’s updates, and we’d bet someone somewhere has accidentally checked the wrong box.

[…]

WindowsLatest reported the issue occurring on both physical Windows 10 hardware and a Windows 11 virtual machine.

HP Smart is innocuous enough. It’s an application used in conjunction with HP’s printer hardware and can simply be uninstalled.

However, the question is how the application got installed in the first place on machines that, according to affected users, have no HP hardware attached or present on the network.

[…]

Source: Windows users report appearance of unwanted HP app • The Register

Decoupling for IT Security (=privacy)

Whether we like it or not, we all use the cloud to communicate and to store and process our data. We use dozens of cloud services, sometimes indirectly and unwittingly. We do so because the cloud brings real benefits to individuals and organizations alike. We can access our data across multiple devices, communicate with anyone from anywhere, and command a remote data center’s worth of power from a handheld device.

But using the cloud means our security and privacy now depend on cloud providers. Remember: the cloud is just another way of saying “someone else’s computer.” Cloud providers are single points of failure and prime targets for hackers to scoop up everything from proprietary corporate communications to our personal photo albums and financial documents.

The risks we face from the cloud today are not an accident. For Google to show you your work emails, it has to store many copies across many servers. Even if they’re stored in encrypted form, Google must decrypt them to display your inbox on a webpage. When Zoom coordinates a call, its servers receive and then retransmit the video and audio of all the participants, learning who’s talking and what’s said. For Apple to analyze and share your photo album, it must be able to access your photos.

Hacks of cloud services happen so often that it’s hard to keep up. Breaches can be so large as to affect nearly every person in the country, as in the Equifax breach of 2017, or a large fraction of the Fortune 500 and the U.S. government, as in the SolarWinds breach of 2019-20.

It’s not just attackers we have to worry about. Some companies use their access—benefiting from weak laws, complex software, and lax oversight—to mine and sell our data.

[…]

The less someone knows, the less they can put you and your data at risk. In security this is called Least Privilege. The decoupling principle applies that idea to cloud services by making sure systems know as little as possible while doing their jobs. It states that we gain security and privacy by separating private data that today is unnecessarily concentrated.

To unpack that a bit, consider the three primary modes for working with our data as we use cloud services: data in motion, data at rest, and data in use. We should decouple them all.

Our data is in motion as we exchange traffic with cloud services such as videoconferencing servers, remote file-storage systems, and other content-delivery networks. Our data at rest, while sometimes on individual devices, is usually stored or backed up in the cloud, governed by cloud provider services and policies. And many services use the cloud to do extensive processing on our data, sometimes without our consent or knowledge. Most services involve more than one of these modes.

[…]

Cryptographer David Chaum first applied the decoupling approach in security protocols for anonymity and digital cash in the 1980s, long before the advent of online banking or cryptocurrencies. Chaum asked: how can a bank or a network service provider provide a service to its users without spying on them while doing so?

Chaum’s ideas included sending Internet traffic through multiple servers run by different organizations and divvying up the data so that a breach of any one node reveals minimal information about users or usage. Although these ideas have been influential, they have found only niche uses, such as in the popular Tor browser.

Trust, but Don’t Identify

The decoupling principle can protect the privacy of data in motion, such as financial transactions and Web browsing patterns that currently are wide open to vendors, banks, websites, and Internet Service Providers (ISPs).

1. Barath orders Bruce’s audiobook from Audible. 2. His bank does not know what he is buying, but it guarantees the payment. 3. A third party decrypts the order details but does not know who placed the order. 4. Audible delivers the audiobook and receives the payment.

DECOUPLED E-COMMERCE: By inserting an independent verifier between the bank and the seller and by blinding the buyer’s identity from the verifier, the seller and the verifier cannot identify the buyer, and the bank cannot identify the product purchased. But all parties can trust that the signed payment is valid.
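The “blinding” here is Chaum’s blind signature. As a toy illustration of the RSA version – tiny primes, no padding, nothing you’d deploy – the bank signs a blinded value without ever seeing the message, and the unblinded result still verifies against the bank’s public key:

```python
# Toy RSA blind signature in the spirit of Chaum -- tiny parameters, purely
# to show the algebra; real deployments use 2048-bit keys and padding.
import math
import secrets

p, q = 1000003, 1000033                 # toy "bank" key
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

m = 123456789                           # the message (e.g., a digital coin)

# Buyer: blind m with a random factor r, so the bank never sees m itself.
while True:
    r = secrets.randbelow(n - 2) + 2
    if math.gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

# Bank: sign the blinded value with its private key, learning nothing about m.
blind_sig = pow(blinded, d, n)          # equals (m^d * r) mod n

# Buyer: strip the blinding factor; what remains is a valid signature on m.
sig = (blind_sig * pow(r, -1, n)) % n
assert pow(sig, e, n) == m              # anyone can verify with the public key
print("bank's blind signature on m verifies:", pow(sig, e, n) == m)
```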

1. Bruce’s browser sends a doubly encrypted request for the IP address of sigcomm.org. 2. A third-party proxy server decrypts one layer and passes on the request, replacing Bruce’s identity with an anonymous ID. 3. An Oblivious DNS server decrypts the request, looks up the IP address, and sends it back in an encrypted reply. 4. The proxy server forwards the encrypted reply to Bruce’s browser. 5. Bruce’s browser decrypts the response to obtain the IP address of sigcomm.org.

DECOUPLED WEB BROWSING: ISPs can track which websites their users visit because requests to the Domain Name System (DNS), which converts domain names to IP addresses, are unencrypted. A new protocol called Oblivious DNS can protect users’ browsing requests from third parties. Each name-resolution request is encrypted twice and then sent to an intermediary (a “proxy”) that strips out the user’s IP address and decrypts the outer layer before passing the request to a domain name server, which then decrypts the actual request. Neither the ISP nor any other computer along the way can see what name is being queried. The Oblivious resolver has the key needed to decrypt the request but no information about who placed it. The resolver encrypts its reply so that only the user can read it.
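The real Oblivious DNS wire format (and its successor, ODoH) uses public-key encryption; the toy sketch below uses symmetric Fernet keys purely to make the layering visible, so pretend the client somehow shares a key with each party:

```python
# Conceptual layering only -- real Oblivious DNS/ODoH uses public-key (HPKE)
# encryption; symmetric Fernet keys here just expose the onion structure.
from cryptography.fernet import Fernet

proxy = Fernet(Fernet.generate_key())       # key shared client <-> proxy
resolver = Fernet(Fernet.generate_key())    # key shared client <-> resolver

query = b"A? sigcomm.org"

# Client: encrypt for the resolver first, then wrap that for the proxy.
wrapped = proxy.encrypt(resolver.encrypt(query))

# Proxy: peels its layer and strips the client's identity, but the inner
# ciphertext keeps the queried name hidden from it.
inner = proxy.decrypt(wrapped)

# Resolver: recovers the query but never learns who asked.
print(resolver.decrypt(inner))              # b'A? sigcomm.org'
```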

Similar methods have been extended beyond DNS to multiparty-relay protocols that protect the privacy of all Web browsing through free services such as Tor and subscription services such as INVISV Relay and Apple’s iCloud Private Relay.

[…]

Meetings that were once held in a private conference room are now happening in the cloud, and third parties like Zoom see it all: who, what, when, where. There’s no reason a videoconferencing company has to learn such sensitive information about every organization it provides services to. But that’s the way it works today, and we’ve all become used to it.

There are multiple threats to the security of that Zoom call. A Zoom employee could go rogue and snoop on calls. Zoom could spy on calls of other companies or harvest and sell user data to data brokers. It could use your personal data to train its AI models. And even if Zoom and all its employees are completely trustworthy, the risk of Zoom getting breached is omnipresent. Whatever Zoom can do with your data in motion, a hacker can do to that same data in a breach. Decoupling data in motion could address those threats.

[…]

Most storage and database providers started encrypting data on disk years ago, but that’s not enough to ensure security. In most cases, the data is decrypted every time it is read from disk. A hacker or malicious insider silently snooping at the cloud provider could thus intercept your data despite it having been encrypted.

Cloud-storage companies have at various times harvested user data for AI training or to sell targeted ads. Some hoard it and offer paid access back to us or just sell it wholesale to data brokers. Even the best corporate stewards of our data are getting into the advertising game, and the decade-old feudal model of security—where a single company provides users with hardware, software, and a variety of local and cloud services—is breaking down.

Decoupling can help us retain the benefits of cloud storage while keeping our data secure. As with data in motion, the risks begin with access the provider has to raw data (or that hackers gain in a breach). End-to-end encryption, with the end user holding the keys, ensures that the cloud provider can’t independently decrypt data from disk.
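In code, the storage half of that is almost embarrassingly simple – the hard parts are key management and searching over ciphertext, both elided in this sketch:

```python
# Minimal client-side ("end-to-end") storage encryption sketch: the provider
# stores only ciphertext, and key management -- the hard part -- is elided.
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # generated and kept on the user's devices
box = Fernet(key)

plaintext = b"contents of 2023-tax-return.pdf"
ciphertext = box.encrypt(plaintext)   # this is all the cloud provider ever sees

# Later, on any device holding the key:
assert box.decrypt(ciphertext) == plaintext
```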

[…]

Modern protocols for decoupled data storage, like Tim Berners-Lee’s Solid, provide this sort of security. Solid is a protocol for distributed personal data stores, called pods. By giving users control over both where their pod is located and who has access to the data within it—at a fine-grained level—Solid ensures that data is under user control even if the hosting provider or app developer goes rogue or has a breach. In this model, users and organizations can manage their own risk as they see fit, sharing only the data necessary for each particular use.

[…]

the last few years have seen the advent of general-purpose, hardware-enabled secure computation. This is powered by special functionality on processors known as trusted execution environments (TEEs) or secure enclaves. TEEs decouple who runs the chip (a cloud provider, such as Microsoft Azure) from who secures the chip (a processor vendor, such as Intel) and from who controls the data being used in the computation (the customer or user). A TEE can keep the cloud provider from seeing what is being computed. The results of a computation are sent via a secure tunnel out of the enclave or encrypted and stored. A TEE can also generate a signed attestation that it actually ran the code that the customer wanted to run.
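Stripped of the vendor certificate chains, attestation boils down to an enclave-held key signing a measurement (hash) of the loaded code, which the customer checks against the code they submitted. Here is a minimal sketch, with Ed25519 standing in for the vendor-specific quoting schemes:

```python
# Toy attestation flow -- real TEEs (SGX, SEV-SNP, etc.) root this signature
# in a vendor certificate chain, which is elided here.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Inside the enclave: an attestation key signs a hash of the loaded code.
attestation_key = Ed25519PrivateKey.generate()
measurement = hashlib.sha256(b"the exact binary the customer submitted").digest()
quote = attestation_key.sign(measurement)

# Customer side: recompute the expected measurement and verify the quote.
expected = hashlib.sha256(b"the exact binary the customer submitted").digest()
attestation_key.public_key().verify(quote, expected)  # raises if tampered with
print("enclave is running the expected code")
```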

With TEEs in the cloud, the final piece of the decoupling puzzle drops into place. An organization can keep and share its data securely at rest, move it securely in motion, and decrypt and analyze it in a TEE such that the cloud provider doesn’t have access. Once the computation is done, the results can be reencrypted and shipped off to storage. CPU-based TEEs are now widely available among cloud providers, and soon GPU-based TEEs—useful for AI applications—will be common as well.

[…]

Decoupling also allows us to look at security more holistically. For example, we can dispense with the distinction between security and privacy. Historically, privacy meant freedom from observation, usually for an individual person. Security, on the other hand, was about keeping an organization’s data safe and preventing an adversary from doing bad things to its resources or infrastructure.

There are still rare instances where security and privacy differ, but organizations and individuals are now using the same cloud services and facing similar threats. Security and privacy have converged, and we can usefully think about them together as we apply decoupling.

[…]

Decoupling isn’t a panacea. There will always be new, clever side-channel attacks. And most decoupling solutions assume a degree of noncollusion between independent companies or organizations. But that noncollusion is already an implicit assumption today: we trust that Google and Advanced Micro Devices will not conspire to break the security of the TEEs they deploy, for example, because the reputational harm from being found out would hurt their businesses. The primary risk, real but also often overstated, is if a government secretly compels companies to introduce backdoors into their systems. In an age of international cloud services, this would be hard to conceal and would cause irreparable harm.

[…]

Imagine that individuals and organizations held their credit data in cloud-hosted repositories that enable fine-grained encryption and access control. Applying for a loan could then take advantage of all three modes of decoupling. First, the user could employ Solid or a similar technology to grant access to Equifax and a bank only for the specific loan application. Second, the communications to and from secure enclaves in the cloud could be decoupled and secured to conceal who is requesting the credit analysis and the identity of the loan applicant. Third, computations by a credit-analysis algorithm could run in a TEE. The user could use an external auditor to confirm that only that specific algorithm was run. The credit-scoring algorithm might be proprietary, and that’s fine: in this approach, Equifax doesn’t need to reveal it to the user, just as the user doesn’t need to give Equifax access to unencrypted data outside of a TEE.

Building this is easier said than done, of course. But it’s practical today, using widely available technologies. The barriers are more economic than technical.

[…]

One of the challenges of trying to regulate tech is that industry incumbents push for tech-only approaches that simply whitewash bad practices. For example, when Facebook rolls out “privacy-enhancing” advertising, but still collects every move you make, has control of all the data you put on its platform, and is embedded in nearly every website you visit, that privacy technology does little to protect you. We need to think beyond minor, superficial fixes.

Decoupling might seem strange at first, but it’s built on familiar ideas. Computing’s main tricks are abstraction and indirection. Abstraction involves hiding the messy details of something inside a nice clean package: when you use Gmail, you don’t have to think about the hundreds of thousands of Google servers that have stored or processed your data. Indirection involves creating a new intermediary between two existing things, such as when Uber wedged its app between passengers and drivers.

The cloud as we know it today is born of three decades of increasing abstraction and indirection. Communications, storage, and compute infrastructure for a typical company were once run on a server in a closet. Next, companies no longer had to maintain a server closet, but could rent a spot in a dedicated colocation facility. After that, colocation facilities decided to rent out their own servers to companies. Then, with virtualization software, companies could get the illusion of having a server while actually just running a virtual machine on a server they rented somewhere. Finally, with serverless computing and most types of software as a service, we no longer know or care where or how software runs in the cloud, just that it does what we need it to do.

[…]

We’re now at a turning point where we can add further abstraction and indirection to improve security, turning the tables on the cloud providers and taking back control as organizations and individuals while still benefiting from what they do.

The needed protocols and infrastructure exist, and there are services that can do all of this already, without sacrificing the performance, quality, and usability of conventional cloud services.

But we cannot just rely on industry to take care of this. Self-regulation is a time-honored stall tactic: a piecemeal or superficial tech-only approach would likely undermine the will of the public and regulators to take action. We need a belt-and-suspenders strategy, with government policy that mandates decoupling-based best practices, a tech sector that implements this architecture, and public awareness of both the need for and the benefits of this better way forward.

Source: Essays: Decoupling for Security – Schneier on Security

In a first, cryptographic keys protecting SSH connections stolen in new attack

For the first time, researchers have demonstrated that a large portion of cryptographic keys used to protect data in computer-to-server SSH traffic are vulnerable to complete compromise when naturally occurring computational errors occur while the connection is being established.

Underscoring the importance of their discovery, the researchers used their findings to calculate the private portion of almost 200 unique SSH keys they observed in public Internet scans taken over the past seven years. The researchers suspect keys used in IPsec connections could suffer the same fate. SSH is the cryptographic protocol used in secure shell connections that allows computers to remotely access servers, usually in security-sensitive enterprise environments. IPsec is a protocol used by virtual private networks that route traffic through an encrypted tunnel.

The vulnerability occurs when there are errors during the signature generation that takes place when a client and server are establishing a connection. It affects only keys using the RSA cryptographic algorithm, which the researchers found in roughly a third of the SSH signatures they examined. That translates to roughly 1 billion signatures out of the 3.2 billion signatures examined. Of the roughly 1 billion RSA signatures, about one in a million exposed the private key of the host.

While the percentage is infinitesimally small, the finding is nonetheless surprising for several reasons—most notably because most SSH software in use has deployed a countermeasure for decades that checks for signature faults before sending a signature over the Internet. Another reason for the surprise is that until now, researchers believed that signature faults exposed only RSA keys used in the TLS—or Transport Layer Security—protocol encrypting Web and email connections. They believed SSH traffic was immune from such attacks because passive attackers—meaning adversaries simply observing traffic as it goes by—couldn’t see some of the necessary information when the errors happened.

[…]

The new findings are laid out in a paper published earlier this month titled “Passive SSH Key Compromise via Lattices.” It builds on a series of discoveries spanning more than two decades. In 1996 and 1997, researchers published findings that, taken together, concluded that when naturally occurring computational errors resulted in a single faulty RSA signature, an adversary could use it to compute the private portion of the underlying key pair.

The reason: By comparing the malformed signature with a valid signature, the adversary could perform a GCD—or greatest common divisor—mathematical operation that, in turn, derived one of the prime numbers underpinning the security of the key. This led to a series of attacks that relied on actively triggering glitches during session negotiation, capturing the resulting faulty signature, and eventually compromising the key. Triggering the errors relied on techniques such as tampering with a computer’s power supply or shining a laser on a smart card.
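That 1996–97 result is easy to demonstrate. The toy sketch below signs with RSA-CRT, flips one bit in the mod-q half of the computation, and recovers a prime factor of the modulus with a single gcd – tiny primes and a bare message stand in for real parameters and padding:

```python
# Toy demo of the classic RSA-CRT fault attack: one faulty signature plus a
# gcd recovers a prime factor of n. Tiny parameters, no padding -- a sketch.
import math

p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))
m = 42424242                            # stands in for the hashed message

# RSA-CRT signing: compute the signature mod p and mod q, then recombine.
sp = pow(m, d % (p - 1), p)
sq = pow(m, d % (q - 1), q)

def crt(a_p: int, a_q: int) -> int:
    return (a_p * q * pow(q, -1, p) + a_q * p * pow(p, -1, q)) % n

good = crt(sp, sq)
assert pow(good, e, n) == m             # a correct signature verifies

# A stray bit flip corrupts only the mod-q half of the computation...
faulty = crt(sp, sq ^ 1)

# ...so faulty^e - m is divisible by p but not by q, and gcd leaks p.
recovered = math.gcd((pow(faulty, e, n) - m) % n, n)
print(recovered == p)                   # True: the private key falls out
```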

Then, in 2015, a researcher showed for the first time that attacks on keys used during TLS sessions were possible even when an adversary didn’t have physical access to the computing device. Instead, the attacker could simply connect to the device and opportunistically wait for a signature error to occur on its own. Last year, researchers found that even with countermeasures added to most TLS implementations as long as two decades earlier, they were still able to passively observe faulty signatures that allowed them to compromise the RSA keys of a small population of VPNs, network devices, and websites, most notably Baidu.com, a top-10 Alexa property.

[…]

The attack described in the paper published this month clears the hurdle of missing key material exposed in faulty SSH signatures by harnessing an advanced cryptanalytic technique involving the same mathematics found in lattice-based cryptography. The technique was first described in 2009, but the paper demonstrated only that it was theoretically possible to recover a key using incomplete information in a faulty signature. This month’s paper implements the technique in a real-world attack that uses a naturally occurring corrupted SSH signature to recover the underlying RSA key that generated it.

[…]

The researchers traced the keys they compromised to devices that used custom, closed-source SSH implementations that didn’t implement the countermeasures found in OpenSSH and other widely used open source code libraries. The devices came from four manufacturers: Cisco, Zyxel, Hillstone Networks, and Mocana.

[…]

Once attackers have possession of the secret key through passive observation of traffic, they can mount an active Mallory-in-the-middle attack against the SSH server, in which they use the key to impersonate the server and respond to incoming SSH traffic from clients. From there, the attackers can do things such as recover the client’s login credentials. Similar post-exploit attacks are also possible against IPsec servers if faults expose their private keys.

[…]

a single flip of a bit—in which a 0 residing in a memory chip register turns to 1 or vice versa—is all that’s required to trigger an error that exposes a secret RSA key. Consequently, it’s crucial that the countermeasures that detect and suppress such errors work with near-100 percent accuracy.

[…]

Source: In a first, cryptographic keys protecting SSH connections stolen in new attack | Ars Technica

European digital identity: Council and Parliament reach a provisional agreement on eID

[…]

Under the new law, member states will offer citizens and businesses digital wallets that will be able to link their national digital identities with proof of other personal attributes (e.g., driving licence, diplomas, bank account). Citizens will be able to prove their identity and share electronic documents from their digital wallets with a click of a button on their mobile phone.

The new European digital identity wallets will enable all Europeans to access online services with their national digital identification, which will be recognised throughout Europe, without having to use private identification methods or unnecessarily sharing personal data. User control ensures that only information that needs to be shared will be shared.

Concluding the initial provisional agreement

Since the initial provisional agreement on some of the main elements of the legislative proposal at the end of June this year, a thorough series of technical meetings followed in order to complete a text that allowed the finalisation of the file in full. Some relevant aspects agreed by the co-legislators today are:

  • the e-signatures: the wallet will be free to use for natural persons by default, but member states may provide for measures to ensure that the free-of-charge use is limited to non-professional purposes
  • the wallet’s business model: the issuance, use and revocation will be free of charge for all natural persons
  • the validation of electronic attestation of attributes: member states shall provide free-of-charge validation mechanisms only to verify the authenticity and validity of the wallet and of the relying parties’ identity
  • the code for the wallets: the application software components will be open source, but member states are granted necessary leeway so that, for justified reasons, specific components other than those installed on user devices may not be disclosed
  • consistency between the wallet as an eID means and the underpinning scheme under which it is issued has been ensured

Finally, the revised law clarifies the scope of the qualified web authentication certificates (QWACs), which ensures that users can verify who is behind a website, while preserving the current well-established industry security rules and standards.

Next steps

Technical work will continue to complete the legal text in accordance with the provisional agreement. When finalised, the text will be submitted to the member states’ representatives (Coreper) for endorsement. Subject to a legal/linguistic review, the revised regulation will then need to be formally adopted by the Parliament and the Council before it can be published in the EU’s Official Journal and enter into force.

[…]

Source: European digital identity: Council and Parliament reach a provisional agreement on eID – Consilium