The Linkielist

Linking ideas with the world

DDR4 memory protections are broken wide open by new Rowhammer technique

Rowhammer exploits that allow unprivileged attackers to change or corrupt data stored in vulnerable memory chips are now possible on virtually all DDR4 modules due to a new approach that neuters defenses chip manufacturers added to make their wares more resistant to such attacks.

Rowhammer attacks work by accessing—or hammering—physical rows inside vulnerable chips millions of times per second in ways that cause bits in neighboring rows to flip, meaning 1s turn to 0s and vice versa. Researchers have shown the attacks can be used to give untrusted applications nearly unfettered system privileges, bypass security sandboxes designed to keep malicious code from accessing sensitive operating system resources, and root or infect Android devices, among other things.

All previous Rowhammer attacks have hammered rows with uniform patterns, such as single-sided, double-sided, or n-sided. In all three cases, these “aggressor” rows—meaning those that cause bitflips in nearby “victim” rows—are accessed the same number of times.

Rowhammer access patterns from previous work, showing spatial arrangement of aggressor rows (in black) and victim rows (in orange and cream) in DRAM memory. (Image: Jattke et al.)

Relative activation frequency, i.e., number of ACTIVATEs per aggressor row in a Rowhammer pattern. Notice how they hammer aggressors uniformly. (Image: Jattke et al.)

Bypassing all in-DRAM mitigations

Research published on Monday presented a new Rowhammer technique. It uses non-uniform patterns that access two or more aggressor rows with different frequencies. The result: all 40 of the randomly selected DIMMs in a test pool experienced bitflips, up from 13 out of 42 chips tested in previous work from the same researchers.
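
To make the idea of a non-uniform pattern concrete, here is a minimal Python sketch of the scheduling difference, assuming made-up row numbers, periods, and phases. It only builds an access order on paper; real Rowhammer tooling needs uncached memory accesses and knowledge of the physical DRAM row mapping, none of which is shown here.

    # Conceptual sketch only: builds an access schedule, does not touch real DRAM.
    def uniform_double_sided(victim_row, rounds):
        """Classic pattern: both aggressors are hammered the same number of times."""
        schedule = []
        for _ in range(rounds):
            schedule += [victim_row - 1, victim_row + 1]
        return schedule

    def non_uniform(aggressors, rounds):
        """Each aggressor row gets its own activation period and phase, so rows
        are no longer hammered uniformly (the property the new work exploits)."""
        schedule = []
        for tick in range(rounds):
            for row, period, phase in aggressors:
                if tick % period == phase:
                    schedule.append(row)
        return schedule

    print(uniform_double_sided(100, 4))
    # Three aggressors hammered with different periods and offsets.
    print(non_uniform([(99, 1, 0), (101, 3, 1), (104, 7, 2)], 21))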

[…]

The effects of previous Rowhammer demonstrations have been serious. In one case, researchers were able to gain unrestricted access to all physical memory by flipping bits in the page table entry, which maps the memory address locations. The same research also demonstrated how untrusted applications could gain root privileges. In another case, researchers used Rowhammer to pluck a 2048-bit encryption key out of memory.

[…]

Source: DDR4 memory protections are broken wide open by new Rowhammer technique | Ars Technica

High severity BIOS flaws affect numerous Intel processors

Intel has disclosed two high-severity vulnerabilities that affect a wide range of Intel processor families, allowing threat actors and malware to gain higher privilege levels on the device.

The flaws were discovered by SentinelOne and are tracked as CVE-2021-0157 and CVE-2021-0158, and both have a CVSS v3 score of 8.2 (high).

The former concerns insufficient control flow management in the BIOS firmware of some Intel processors, while the latter stems from improper input validation in the same component.

These vulnerabilities could lead to escalation of privilege on the machine, but only if the attacker had physical access to vulnerable devices.

The affected products, according to Intel’s advisory, are the following:

  • Intel® Xeon® Processor E Family
  • Intel® Xeon® Processor E3 v6 Family
  • Intel® Xeon® Processor W Family
  • 3rd Generation Intel® Xeon® Scalable Processors
  • 11th Generation Intel® Core™ Processors
  • 10th Generation Intel® Core™ Processors
  • 7th Generation Intel® Core™ Processors
  • Intel® Core™ X-series Processors
  • Intel® Celeron® Processor N Series
  • Intel® Pentium® Silver Processor Series

Intel hasn’t shared many technical details around these two flaws, but they advise users to patch the vulnerabilities by applying the available BIOS updates.

This is particularly problematic because motherboard vendors do not release BIOS updates often and don’t support their products with security updates for long.

Considering that 7th gen Intel Core processors came out five years ago, it’s doubtful that motherboard vendors are still releasing BIOS security updates for them.

As such, some users will be left with no practical way to fix the above flaws. In these cases, we would suggest that you set up a strong password for accessing the BIOS settings.

A third vulnerability affects cars

A third flaw for which Intel released a separate advisory on the same day is CVE-2021-0146, also a high-severity (CVSS 7.2) elevation of privilege flaw.

“Hardware allows activation of test or debug logic at runtime for some Intel(R) processors which may allow an unauthenticated user to potentially enable escalation of privilege via physical access.” – Intel’s advisory

This bug affects the following products:

Affected Intel products. Source: Intel

Intel has released a firmware update to mitigate this flaw, and users will get it through patches supplied by the system manufacturer.

Positive Technologies, who discovered and reported the bug to Intel, says that the flaw could allow threat actors to gain access to highly sensitive information.

“One example of a real threat is lost or stolen laptops that contain confidential information in encrypted form,” says Mark Ermolov.

“Using this vulnerability, an attacker can extract the encryption key and gain access to information within the laptop. The bug can also be exploited in targeted attacks across the supply chain.”

“For example, an employee of an Intel processor-based device supplier could, in theory, extract the Intel CSME firmware key and deploy spyware that security software would not detect.”

Positive Technologies says that the flaw also affects several car models that use the Intel Atom E3900, including the Tesla Model 3.

Users should apply a BIOS update from the device vendor to address this flaw, so check your manufacturer’s website regularly.

[…]

Source: High severity BIOS flaws affect numerous Intel processors

Securing your digital life, part one: The basics

[…]

Even those who consider themselves well educated about cyber crime and security threats—and who do everything they’ve been taught to do—can (and do!) still end up as victims. The truth is that, with enough time, resources, and skill, everything can be hacked.

The key to protecting your digital life is to make it as expensive and impractical as possible for someone bent on mischief to steal the things most important to your safety, financial security, and privacy. If attackers find it too difficult or expensive to get your stuff, there’s a good chance they’ll simply move on to an easier target. For that reason, it’s important to assess the ways that vital information can be stolen or leaked—and understand the limits to protecting that information.

[…]

Source: Securing your digital life, part one: The basics | Ars Technica

A very good two-part article about how to stay relatively safe on the internet.

Code compiled to WASM may lack standard security defenses

[…]

In a paper titled, The Security Risk of Lacking Compiler Protection in WebAssembly, distributed via ArXiv, the technical trio say that when a C program is compiled to WASM, it may lack anti-exploit defenses that the programmer takes for granted on native architectures.

The reason for this, they explain, is that security protections available in compilers like Clang for x86 builds don’t show up when WASM output is produced.

“We compiled 4,469 C programs with known buffer overflow vulnerabilities to x86 code and to WebAssembly, and observed the outcome of the execution of the generated code to differ for 1,088 programs,” the paper states.

“Through manual inspection, we identified that the root cause for these is the lack of security measures such as stack canaries in the generated WebAssembly: while x86 code crashes upon a stack-based buffer overflow, the corresponding WebAssembly continues to be executed.”

[…]

For those not in the know, a stack is a structure in memory used by programs to store temporary variables and information controlling the operation of the application. A stack canary is a special value stored in the stack. When someone attempts to exploit, say, a buffer overflow vulnerability in an application, and overwrite data on the stack to hijack the program’s execution, they should end up overwriting the canary. Doing so will be detected by the program, allowing it to trap and end the exploitation attempt.

Without these canaries, an exploited WASM program could continue running, albeit at the bidding of whoever attacked it, whereas its x86 counterpart exits for its own protection, and that’s a potential security problem. Stack canaries aren’t a panacea, and they can be bypassed, though not having them at all makes exploitation a lot easier.
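
As an illustration of what that check buys you, here is a toy Python model of a stack frame with a canary. It is a conceptual sketch, not compiler output; real canaries are inserted by the compiler (for example via Clang's stack-protector options) next to the saved return address.

    import secrets

    # Toy model of a stack frame laid out as [ local buffer | canary | return address ].
    CANARY = secrets.token_bytes(8)
    BUF_SIZE = 16

    def make_frame():
        return bytearray(BUF_SIZE) + bytearray(CANARY) + bytearray(b"RETADDR!")

    def unsafe_copy(frame, data):
        # No bounds check: models a C strcpy() into a fixed-size stack buffer.
        frame[:len(data)] = data

    def epilogue(frame):
        # The check a canary-protected x86 build performs before returning.
        if bytes(frame[BUF_SIZE:BUF_SIZE + 8]) != CANARY:
            raise RuntimeError("stack smashing detected: aborting")
        return bytes(frame[BUF_SIZE + 8:])   # the (possibly attacker-controlled) return address

    frame = make_frame()
    unsafe_copy(frame, b"A" * 32)   # overflow: clobbers canary and return address
    epilogue(frame)                 # raises instead of "returning" to attacker data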

And these issues are not necessarily a deal-breaker: WASM bytecode still exists in a sandbox, and has further defenses against control-flow hijacking techniques such as return-oriented programming.

But as the researchers observe, WASM’s documentation insists that stack-smashing protection isn’t necessary for WASM code. The three boffins say their findings indicate security assumptions for x86 binaries should be questioned for WASM builds and should encourage others to explore the consequences of this divergent behavior, as it applies both to stack-based buffer overflows and other common security weaknesses.

[…]

Source: Code compiled to WASM may lack standard security defenses • The Register

US bans trade with security firm NSO Group over Pegasus spyware

Surveillance software developer NSO Group may have a very tough road ahead. The US Commerce Department has added NSO to its Entity List, effectively banning trade with the firm. The move bars American companies from doing business with NSO unless they receive explicit permission. That’s unlikely, too, when the rule doesn’t allow license exceptions for exports and the US will default to rejecting reviews.

NSO and fellow Israeli company Candiru (also on the Entity List) face accusations of enabling hostile spying by authoritarian governments. They’ve allegedly supplied spyware like NSO’s Pegasus to “authoritarian governments” that used the tools to track activists, journalists and other critics in a bid to crush political dissent. This is part of the Biden-Harris administration’s push to make human rights “the center” of American foreign policy, the Commerce Department said.

The latest round of trade bans also affects Russian company Positive Technologies and Singapore’s Computer Security Initiative Consultancy, both of which were accused of peddling hacking tools.

[…]

Source: US bans trade with security firm NSO Group over Pegasus spyware (updated) | Engadget

Facial recognition scheme in place in some British schools – more to come

Facial recognition technology is being employed in more UK schools to allow pupils to pay for their meals, according to reports today.

In North Ayrshire Council, a Scottish authority encompassing the Isle of Arran, nine schools are set to begin processing meal payments for school lunches using facial scanning technology.

The authority and the company implementing the technology, CRB Cunninghams, claim the system will help reduce queues and is less likely to spread COVID-19 than card payments and fingerprint scanners, according to the Financial Times.

Speaking to the publication, David Swanston, the MD of supplier CRB Cunninghams, said the cameras verify the child’s identity against “encrypted faceprint templates”, which will be held on servers on-site at the 65 schools that have so far signed up.

[…]

North Ayrshire council said 97 per cent of parents had given their consent for the new system, although some said they were unsure whether their children had been given enough information to make their decision.

Seemingly unaware of the controversy surrounding facial recognition, education solutions provider CRB Cunninghams announced its introduction of the technology in schools in June as the “next step in cashless catering.”

[…]

Privacy campaigners voiced concerns that moving the technology into schools merely for payment was needlessly normalising facial recognition.

“No child should have to go through border style identity checks just to get a school meal,” Silkie Carlo of the campaign group Big Brother Watch told The Reg.

“We are supposed to live in a democracy, not a security state. This is highly sensitive, personal data that children should be taught to protect, not to give away on a whim. This biometrics company has refused to disclose who else children’s personal information could be shared with and there are some red flags here for us.

“Facial recognition technology typically suffers from inaccuracy, particularly for females and people of colour, and we’re extremely concerned about how this invasive and discriminatory system will impact children.”

[…]

Those concerned about the security of school systems now storing children’s biometric data will not be assured by the fact that educational establishments have become targets for cyber-attacks.

In March, the Harris Federation, a not-for-profit charity responsible for running 50 primary and secondary academies in London and Essex, became the latest UK education body to fall victim to ransomware. The institution said it was “at least” the fourth multi-academy trust targeted just that month alone. Meanwhile, South and City College Birmingham earlier this year told 13,000 students that all lectures would be delivered via the web because a ransomware attack had disabled its core IT systems.

[…]

Source: Facial recognition scheme in place in some British schools • The Register

The students probably gave their consent because if they didn’t, they wouldn’t get any lunch. The problem with biometrics is that they don’t change. So if someone steals yours, then it’s stolen forever. It’s not a password you can reset.

WhatsApp begins rolling out end-to-end encryption for chat backups

The wait is over. It’s now possible to encrypt your WhatsApp chat history on both Android and iOS, Facebook CEO Mark Zuckerberg announced on Thursday. The company plans to roll out the feature slowly to ensure it can deliver a consistent and reliable experience to all users.

However, once you can access the feature, it will allow you to secure your backups before they hit iCloud or Google Drive. At that point, neither WhatsApp nor your cloud service provider will be able to access the files. It’s also worth mentioning you won’t be able to recover your backups if you ever lose the 64-digit encryption key that secures your chat logs. That said, it’s also possible to secure your backups behind a password instead, in which case you can still recover them if you ever lose access.
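
To see why losing that key is fatal, here is a small illustration of the two options. This is emphatically not WhatsApp's actual backup format or key derivation, just a sketch using the third-party cryptography package: a random 64-digit key that cannot be reconstructed if lost, versus a key derived from a password you can type in again.

    # Illustration only; not WhatsApp's real scheme. Requires: pip install cryptography
    import base64, hashlib, secrets
    from cryptography.fernet import Fernet

    def cipher_from_secret(secret: bytes) -> Fernet:
        # Hash the secret down to the 32 url-safe base64 bytes Fernet expects.
        return Fernet(base64.urlsafe_b64encode(hashlib.sha256(secret).digest()))

    # Option 1: a random 64-digit key. If this string is lost, the backup is unrecoverable.
    key_64_digits = "".join(str(secrets.randbelow(10)) for _ in range(64))
    backup = cipher_from_secret(key_64_digits.encode()).encrypt(b"chat history ...")

    # Option 2: a password-derived key. The password can be re-entered later.
    # (A real system would use a slow KDF such as scrypt plus a stored salt.)
    pw_cipher = cipher_from_secret(b"correct horse battery staple")
    print(pw_cipher.decrypt(pw_cipher.encrypt(b"chat history ...")))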

While WhatsApp has allowed users to securely message each other since 2016, it only started testing encrypted backups earlier this year. With today’s announcement, the company said it has taken the final step toward providing a full end-to-end encrypted messaging experience.

It’s worth pointing out that end-to-end encryption doesn’t guarantee your privacy will be fully protected. According to a report The Information published in August, Facebook was looking into an AI that could analyze encrypted data without having to decrypt it so that it could serve ads based on that information. The head of WhatsApp denied the report, but it’s a reminder that there’s more to privacy than merely the existence of end-to-end encryption.

Source: WhatsApp begins rolling out end-to-end encryption for chat backups | Engadget

How Apple Can Read Your Encrypted iMessages

If you have an iPhone, and your friends mostly have iPhones, you probably use Apple’s Messages app to communicate with them. That’s the nature of things. And aside from the platform’s convenience and ubiquity, one of the iMessage platform’s selling points is that its end-to-end encryption should theoretically ensure that only you and those you text can read your conversations. However, that might not be the case: Apple can likely access the messages for many, many iMessage users, even with end-to-end encryption in place.

[…]

How you back up your messages matters

So yes, your texts are encrypted as sent and received. But few of us delete every text as it comes in; we keep them around in case we want to revisit them later, which means we need to back them up somehow. And as it turns out, how you back up your messages might mean the difference between having a truly secure iMessage history, and giving Apple the key to unlock all your conversations.

[…]

iCloud Backup is not a secure method for saving your messages

Here’s the tricky thing: Messages in iCloud is end-to-end encrypted, just as you’d expect—that’s why there’s no way to access your messages on the web, such as by logging in to icloud.com. There’s one big problem, though: your iCloud Backup isn’t end-to-end encrypted—and Apple stores the key to unlock your encrypted messages within that backup.

[…]

It’s not just your messages; besides Keychain, Screen Time, and Health data, Apple has the key to decrypt all of your iCloud data

[…]

Source: How Apple Can Read Your Encrypted Messages

Telegraph newspaper exposes 10TB of server, user data online

The Telegraph newspaper managed to leak 10TB of subscriber data and server logs after leaving an Elasticsearch cluster unsecured for most of September, according to the researcher who found it online.

The blunder was uncovered by well-known security researcher Bob Diachenko, who said that the cluster had been freely accessible “without a password or any other authentication required to access it.”
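
For context, checking whether an Elasticsearch endpoint is open to the world takes a single unauthenticated HTTP request. A minimal sketch; the hostname below is a placeholder, not the Telegraph's actual server.

    # Probe a (placeholder) Elasticsearch endpoint for unauthenticated access.
    # Requires: pip install requests
    import requests

    host = "http://elastic.example.com:9200"
    resp = requests.get(f"{host}/_cat/indices?format=json", timeout=10)

    if resp.status_code == 200:
        # An open cluster happily lists its indices and document counts.
        for index in resp.json():
            print(index["index"], index["docs.count"])
    else:
        print(f"Not openly readable (HTTP {resp.status_code})")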

After sampling the database to determine its owner, Diachenko saw the personal details of at least 1,200 Telegraph subscribers along with a substantial quantity of internal server logs, he told The Register.

“A significant portion of the records were unencrypted,” he said. Screenshots he provided showed information including the user-agent string and device type, while categories of personal data included subscribers’ first and last names, email addresses, subscriber status, IP addresses and device type and operating system.

Affected users “should be on the lookout for targeted phishing and scams,” Diachenko advised. “Names and emails in the database can be used to send readers targeted scam messages.”

Aside from potential scam emails, the risk from this breach is relatively low unless having your news-reading habits collated in one place might cause professional embarrassment: Diachenko highlighted that in the data sample he viewed were a handful of gov.uk email addresses.

[…]

Source: Telegraph newspaper exposes 10TB of server, user data online • The Register

Millions of AMD PCs affected by new CPU driver flaw need to be patched ASAP

After finding several security flaws in Intel’s Software Guard Extensions (SGX), security researchers have now revealed a flaw in AMD’s Platform Security Processor (PSP) chipset driver that makes it easy for attackers to steal sensitive data from Ryzen-powered systems. On the upside, there are already patches available from both Microsoft and AMD to shut down the exploit.

Recently, AMD disclosed a vulnerability in the AMD Platform Security Processor (PSP) chipset driver that allows malicious actors to dump memory pages and extract sensitive information such as passwords and storage decryption keys.

The flaw is tracked under CVE-2021-26333 and is considered medium severity. It affects a wide range of AMD-powered systems, with all Ryzen desktop, mobile, and workstation CPUs being affected. Additionally, PCs equipped with a 6th and 7th generation AMD A-series APU or modern Athlon processors are vulnerable to the same attack.

Security researcher Kyriakos Economou over at ZeroPeril discovered the flaw back in April. His team tested a proof-of-concept exploit on several AMD systems and found it relatively easy to leak several gigabytes of uninitialized physical memory pages when logged in as a user with low privileges. At the same time, this attack method can bypass exploitation mitigations like kernel address space layout randomization (KASLR).

The good news is there are patches available for this flaw. One way to ensure you get them is to download the latest AMD chipset drivers from TechSpot’s Drivers page or AMD’s own website. The driver was released a month ago, but at the time AMD chose not to fully disclose the security fixes contained in the release.

[…]

Source: Millions of AMD PCs affected by new CPU driver flaw need to be patched ASAP | TechSpot

Millions Experience Browser Problems After Long-Anticipated Expiration of IdenTrust DST Root CA X3 SSL Certificate

“The expiration of a key digital encryption service on Thursday sent major tech companies nationwide scrambling to deal with internet outages that affected millions of online users,” reports the Washington Examiner.

The expiring certificate was IdenTrust’s DST Root CA X3, which Let’s Encrypt had long relied on to cross-sign its certificates — though ZDNet notes there have been plenty of warnings about its pending expiration: Digital Shadows senior cyber threat analyst Sean Nikkel told ZDNet that Let’s Encrypt put everyone on notice back in May about the root CA’s expiration on Thursday and offered alternatives and workarounds to ensure that devices would not be affected during the changeover. They have also kept a running forum thread open on this issue with fairly quick responses, Nikkel added.
Thursday night the Washington Examiner describes what happened when the big day arrived:

Tech giants — such as Amazon, Google, Microsoft, and Cisco, as well as many smaller tech companies — were still battling with an endless array of issues by the end of the night… At least 2 million people have seen an error message on their phones, computers, or smart gadgets in the past 24 hours detailing some internet connectivity problems due to the certificate issue, according to Scott Helme, an internet security researcher and well-known cybersecurity expert.

“So many people have been affected, even if it’s only the inconvenience of not being able to visit certain websites or some of their apps not working,” Helme said.

“This issue has been going on for many hours, and some companies are only just getting around to fixing it, even big companies with a lot of resources. It’s clearly not going smoothly,” he added.

There was an expectation before the certificate expired, Helme said, that the problem would be limited to gadgets and devices bought before 2017 that use the Let’s Encrypt digital certificate and haven’t updated their software. However, many users faced issues on Thursday despite having the most cutting-edge devices and software on hand. Dozens of major tech products and services have been significantly affected by the certificate expiration, such as cloud computing services for Amazon, Google, and Microsoft; IT and cloud security services for Cisco; sellers unable to log in on Shopify; games on RocketLeague; and workflows on Monday.com.
Security researcher Scott Helme also told ZDNet he’d confirmed issues at many other companies, including Guardian Firewall, Auth0, QuickBooks, and Heroku — but there might be many more beyond that: “For the affected companies, it’s not like everything is down, but they’re certainly having service issues and have incidents open with staff working to resolve. In many ways, I’ve been talking about this for over a year since it last happened, but it’s a difficult problem to identify. It’s like looking for something that could cause a fire: it’s really obvious when you can see the smoke…!”

Digital certificates expert Tim Callan added that the popularity of DevOps-friendly architectures like containerization, virtualization and cloud has greatly increased the number of certificates the enterprise needs while radically decreasing their average lifespan. “That means many more expiration events, much more administration time required, and greatly increased risk of a failed renewal,” he said.
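
One cheap defence is simply monitoring expiry dates before they bite. A small Python sketch that reports how many days a server's leaf certificate has left; note it checks only the leaf, not the full chain, so an expiring root like DST Root CA X3 still needs separate attention.

    import socket, ssl, time

    def cert_days_remaining(hostname: str, port: int = 443) -> float:
        ctx = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
        not_after = ssl.cert_time_to_seconds(cert["notAfter"])
        return (not_after - time.time()) / 86400

    print(f"{cert_days_remaining('letsencrypt.org'):.0f} days left")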

Source: Millions Experience Browser Problems After Long-Anticipated Expiration of ‘Let’s Encrypt’ Certificate – Slashdot

Unpatched flaw creates ‘weaponised’ Apple AirTags

[…]

Should your AirTag-equipped thing not be where you thought it was, you can enable Lost Mode. When in Lost Mode, an AirTag scanned via NFC provides a unique URL which lets the finder get in contact with the loser – and it’s this page where security researcher Bobby Rauch discovered a concerning vulnerability.

“An attacker can carry out Stored XSS on this https://found.apple.com page by injecting a malicious payload into the AirTag ‘Lost Mode’ phone number field,” Rauch wrote in an analysis of the issue. “A victim will believe they are being asked to sign into iCloud so they can get in contact with the owner of the AirTag, when in fact, the attacker has redirected them to a credential hijacking page.

“Other XSS exploits can be carried out as well like session token hijacking, clickjacking, and more. An attacker can create weaponised AirTags and leave them around, victimising innocent people who are simply trying to help a person find their lost AirTag.”
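
The bug class itself is mundane: a user-controlled field rendered into a page without escaping. A minimal Python illustration (not Apple's code; the payload and URL are made up) shows the difference escaping makes.

    import html

    # A "phone number" field that actually contains markup.
    phone_field = '<script>location="https://attacker.example/phish"</script>'

    unsafe_page = f"<p>Contact the owner at: {phone_field}</p>"               # script runs in the viewer's browser
    safe_page = f"<p>Contact the owner at: {html.escape(phone_field)}</p>"    # rendered as harmless text

    print(unsafe_page)
    print(safe_page)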

Apple has not commented publicly on the vulnerability nor does it seem to be taking the issue particularly seriously. Speaking to Brian Krebs, Rauch claimed that Apple sat on the flaw for three months – and that while it confirmed it planned to resolve the vulnerability in a future update, the company has not yet done so. Apple also refused to confirm whether Rauch’s discovery would qualify for its bug bounty programme and a potential cash payout – a final insult which led to his public release of the flaw.

It’s not the first time Apple has stood accused of failing to respond to security researchers. Earlier this month a pseudonymous researcher known as “IllusionOfChaos” dropped three zero-day vulnerabilities affecting Apple’s iOS 15 – six months after originally reporting them to the company. A fourth flaw had been fixed in an earlier iOS release, the researcher noted, “but Apple decided to cover it up and not list it on the security content page.”

The company has also been experiencing a few problems with the patches it does release. An update released to fix a vulnerability in the company’s Finder file manager, capable of bypassing the Quarantine and Gatekeeper security functions built into macOS, only worked for lowercase URLs – although emergency patches released two weeks ago appear to have had better luck.

[…]

Source: Unpatched flaw creates ‘weaponised’ Apple AirTags • The Register

Microsoft Exchange protocol can leak credentials cleartext

A flaw in Microsoft’s Autodiscover protocol, used to configure Exchange clients like Outlook, can cause user credentials to leak to miscreants in certain circumstances.

The upshot is that your Exchange-connected email client may give away your username and password to a stranger, if the flaw is successfully exploited. In a report scheduled to be published on Wednesday, security firm Guardicore said it has identified a design blunder that leaks web requests to Autodiscover domains that are outside the user’s domain but within the same top-level domain (TLD).

Exchange’s Autodiscover protocol, specifically the version based on POX XML, provides a way for client applications to obtain the configuration data necessary to communicate with the Exchange server. It gets invoked, for example, when adding a new Exchange account to Outlook. After a user supplies a name, email address, and password, Outlook tries to use Autodiscover to set up the client.

As Guardicore explained in a report provided to The Register, the client parses the email address – say, user@example.com – and tries to construct a URL for the configuration data using combinations of the email domain, a subdomain, and a path string as follows:

  • https://Autodiscover.example.com/Autodiscover/Autodiscover.xml
  • http://Autodiscover.example.com/Autodiscover/Autodiscover.xml
  • https://example.com/Autodiscover/Autodiscover.xml
  • http://example.com/Autodiscover/Autodiscover.xml

If the client doesn’t receive any response from these URLs – which would happen if Exchange was improperly configured or was somehow prevented from accessing the designated resources – the Autodiscover protocol tries a “back-off” algorithm that uses Autodiscover with a TLD as a hostname. Eg:

  • http://Autodiscover.com/Autodiscover/Autodiscover.xml

“This ‘back-off’ mechanism is the culprit of this leak because it is always trying to resolve the Autodiscover portion of the domain and it will always try to ‘fail up,’ so to speak,” explained Amit Serper, Guardicore area vice president of security research for North America, in the report. “This means that whoever owns Autodiscover.com will receive all of the requests that cannot reach the original domain.”
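
Based on Guardicore's description (this is not Microsoft's actual client code), the candidate-building logic looks roughly like the sketch below, and the final "back-off" candidate is the problem: it lands on a domain nobody in the victim organisation controls.

    def autodiscover_candidates(email_address: str) -> list:
        domain = email_address.split("@", 1)[1]       # e.g. "example.com"
        path = "/Autodiscover/Autodiscover.xml"
        urls = [
            f"https://Autodiscover.{domain}{path}",
            f"http://Autodiscover.{domain}{path}",
            f"https://{domain}{path}",
            f"http://{domain}{path}",
        ]
        # The "back-off": keep only the top-level domain and prepend Autodiscover,
        # yielding e.g. Autodiscover.com, a domain any third party can register.
        tld = domain.rsplit(".", 1)[-1]
        urls.append(f"http://Autodiscover.{tld}{path}")
        return urls

    print(autodiscover_candidates("user@example.com")[-1])
    # -> http://Autodiscover.com/Autodiscover/Autodiscover.xml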

In an email to The Register, Serper said, “I believe that this was the consequence of careless, or rather, naïve design. [The] same flaws appear in other Microsoft protocols of similar functions.”

Sensing a potential problem with making credentials available to any old TLD with Autodiscover, Guardicore acquired several variations on that theme: Autodiscover.com.br, Autodiscover.com.cn, Autodiscover.com.co, Autodiscover.uk, and Autodiscover.online, among others.

After assigning these domains to its web server, Guardicore started receiving numerous requests to Autodiscover endpoints from assorted IP addresses and clients. It turns out a lot of Exchange servers and clients aren’t set up very carefully.

“The most notable thing about these requests was that they requested the relative path of /Autodiscover/Autodiscover.xml with the Authorization header already populated with credentials in HTTP basic authentication,” said Serper, who observed that web requests of this sort should not be sent blindly pre-authentication.

HTTP basic access authentication is Base64 encoded but is not encrypted, so this amounts to sending credentials in cleartext.
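
Recovering the credentials from such a header takes one line, since Base64 is an encoding, not encryption. The header below is a made-up example.

    import base64

    authorization_header = "Basic dXNlckBleGFtcGxlLmNvbTpodW50ZXIy"   # made-up example
    encoded = authorization_header.split(" ", 1)[1]
    print(base64.b64decode(encoded).decode())   # -> user@example.com:hunter2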

Between April 16, 2021 and August 25, 2021, Guardicore received about 649,000 HTTP requests aimed at its Autodiscover domains, 372,000 requests with credentials in basic authentication, and roughly 97,000 unique pre-authentication requests.

The credentials came from publicly traded companies in China, food makers, investment banks, power plants, energy delivery firms, real estate businesses, shipping and logistics operations, and fashion/jewelry companies.

There were also many requests that used alternatives to HTTP basic authentication, like NTLM and Oauth, that didn’t expose associated credentials immediately. To obtain access to these, Guardicore set up a downgrade attack.

So upon receiving an HTTP request with an authentication token or NTLM hash, the Guardicore server responded with an HTTP 401 carrying the WWW-Authenticate: Basic header, which tells the client that the server only supports HTTP basic authentication. Then, to make the session look legit, the company used a Let’s Encrypt certificate to prevent an SSL warning and ensure the presentation of a proper Outlook authentication prompt, so potential victims would enter their credentials with confidence.
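
The downgrade itself needs nothing exotic. Here is a sketch of the server-side behaviour, for local experimentation only (this is not Guardicore's tooling): it answers unauthenticated requests with 401 and a WWW-Authenticate: Basic header, nudging clients to retry with basic authentication.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class DowngradeHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if "Authorization" not in self.headers:
                self.send_response(401)
                self.send_header("WWW-Authenticate", 'Basic realm="autodiscover"')
                self.end_headers()
            else:
                # The Authorization header now carries Base64-encoded credentials.
                self.send_response(200)
                self.end_headers()

        do_POST = do_GET

    HTTPServer(("127.0.0.1", 8080), DowngradeHandler).serve_forever()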

[…]

Source: Microsoft Exchange protocol can leak credentials • The Register

Ministry of Defence: Another huge Afghanistan email blunder

A second leak of personal data was reportedly committed by the Ministry of Defence, raising further questions about the ministry’s commitment to the safety of people in Afghanistan, some of whom are its own former employees.

The BBC reported overnight that the details of a further 55 Afghans – claimed to be candidates for potential relocation – had been leaked through the classic cc-instead-of-bcc email blunder, echoing the previously reported breach of 250 Afghan interpreters’ data through a similar failure.
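
The mitigation is procedural rather than clever, but for anyone scripting bulk mail the safe pattern is to keep hidden recipients in the SMTP envelope only, never in a To: or Cc: header. A sketch with placeholder addresses and server:

    import smtplib
    from email.message import EmailMessage

    hidden_recipients = ["person1@example.org", "person2@example.org"]  # must not see each other

    msg = EmailMessage()
    msg["From"] = "sender@example.org"
    msg["To"] = "undisclosed-recipients:;"   # no recipient addresses in any header
    msg["Subject"] = "Update"
    msg.set_content("Please see the update below.")

    with smtplib.SMTP("smtp.example.org") as smtp:
        # Envelope recipients get the mail but never appear in the headers.
        smtp.send_message(msg, to_addrs=hidden_recipients)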

An MoD spokeswoman said in a statement: “We have been made aware of a data breach that occurred earlier this month by the Afghan Relocation and Assistance Policy (Arap) team. This week, the defence secretary instigated an investigation into data-handling within that team.”

A defence official has reportedly been suspended from duty, following demands from defence secretary Ben Wallace for an immediate enquiry into how the blunder happened.

After the US-led military coalition left Afghanistan, a number of local civilians employed as translators were left behind as the Taliban re-established control over the country. Some of those civilians have since been murdered for their perceived support of the Western militaries.

[…]

Source: Ministry of Defence: Another huge Afghanistan email blunder • The Register

Database containing 106m Thailand travelers’ details over the past decade leaked

A database containing personal information on 106 million international travelers to Thailand was exposed to the public internet this year, a Brit biz claimed this week.

Bob Diachenko, head of cybersecurity research at product-comparison website Comparitech, said the Elasticsearch data store contained visitors’ full names, passport numbers, arrival dates, visa types, residency status, and more. It was indexed by search engine Censys on August 20, and spotted by Diachenko two days later. There were no credentials in the database, which is said to have held records dating back a decade.

[…]

Diachenko said he alerted the operator of the database, which led to the Thai authorities finding out about it, who “were quick to acknowledge the incident and swiftly secured the data,” Comparitech reported. We’re told that the IP address of the exposed database, hidden from sight a day after Diachenko raised the alarm, is still live, though connecting to it reports that the box is now a honeypot.

[…]

We’ve contacted the Thai embassy in the US for further comment. Diachenko told The Register a “server misconfiguration” by an IT outsourcer caused the database to be exposed to the whole world.

[…]

Additionally, it’s possible that if you’ve traveled to Thailand and stayed there during the pandemic, you’ve already been leaked. A government website used to sign foreigners up for COVID-19 vaccines spilled names and passport numbers in June.

Additionally, last month, Bangkok Airways was hit by ransomware group LockBit resulting in the publishing of passenger data. And in 2018, TrueMove H, the biggest 4G mobile operator in Thailand, suffered a database breach of around 46,000 records.

Comparitech said the database it found contained several assets, in addition to the 106 million records, making the total leaked information come to around 200 GB.

Source: Database containing 106m Thailand travelers’ details leaked • The Register

MoD apologises after Afghan interpreters’ personal data exposed (yes the ones still in Afghanistan)

The UK’s Ministry of Defence has launched an internal investigation after committing the classic CC-instead-of-BCC email error – but with the names and contact details of Afghan interpreters trapped in the Taliban-controlled nation.

The horrendous data breach took place yesterday, with Defence Secretary Ben Wallace promising an immediate investigation, according to the BBC.

Included in the breach were profile pictures associated with some email accounts, according to the state-owned broadcaster. The initial email was followed up by a second message urging people who had received the first one to delete it – a way of drawing close attention to an otherwise routine missive.

The email was reportedly sent by the British government’s Afghan Relocations and Assistance Policy (ARAP) unit, urging the interpreters not to put themselves or their families at risk. The ministry was said to have apologised for the “unacceptable breach.”

“This mistake could cost the life of interpreters, especially for those who are still in Afghanistan,” one source told the Beeb.

Since the US-led military coalition pulled out of Afghanistan at the end of August, there have been distressing scenes in the country as the ruling Taliban impose Islamic Sharia law – while hunting down and punishing those who helped the Western militaries. Some interpreters have reportedly been murdered, with others fearing for their lives and the well-being of their families.

[…]

Source: MoD apologises after Afghan interpreters’ data exposed • The Register

Glowworm Attack Captures Audio From Power LED Light Flickers

Researchers from Ben-Gurion University have come up with a way to listen in on a speaker from afar by just monitoring the subtle changes in brightness of its power status LED.

The Glowworm Attack, as the discovery is called, follows similar research from the university published in 2020, dubbed Lamphone, which found that an electro-optical sensor paired with a telescope was able to decipher the sounds in a room: sound waves bouncing off a hanging light bulb create nearly imperceptible changes in the room’s lighting. With the Glowworm Attack, the same technology that made Lamphone possible is repurposed to remotely eavesdrop on sounds in a room again, but using a completely different approach that many speaker makers apparently never even considered.

[…]

Pairing the sensor with a telescope allowed the security researchers at Ben-Gurion University to successfully capture and decipher sounds being played by a speaker at distances of up to 35 meters, or close to 115 feet. The results aren’t crystal clear (you can hear the remote recordings the researchers made on Ben Nassi’s website), and the noise increases the farther away from the speaker the capture device is used, but with some intelligent audio processing, the results can undoubtedly be improved.
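
The signal path is conceptually simple: the LED's brightness, sampled fast enough, is a one-dimensional signal that can be cleaned up and written out as audio. A rough sketch with synthetic stand-in samples; the real attack needs the optical hardware and far heavier processing.

    import wave
    import numpy as np

    SAMPLE_RATE = 16_000

    # Stand-in for captured LED brightness samples: a DC level with a tiny 440 Hz ripple.
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    led_brightness = 0.8 + 0.001 * np.sin(2 * np.pi * 440 * t)

    audio = led_brightness - led_brightness.mean()   # remove the DC offset
    audio = audio / np.max(np.abs(audio))            # normalise to [-1, 1]
    pcm = (audio * 32767).astype(np.int16)

    with wave.open("recovered.wav", "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(pcm.tobytes())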

Source: Glowworm Attack Captures Audio From Power LED Light Flickers

Samsung Smart TVs Can Be Remotely Disabled

QLED-loving thieves, beware: Samsung revealed on Tuesday that its TVs can be remotely disabled if the company finds out they’ve been stolen, so long as the sets in question are connected to the internet.

Known as “Samsung TV Block,” the feature was first announced in a press release earlier this month after the company deployed it following a string of warehouse lootings triggered by unrest in South Africa. In the release, Samsung said that the technology comes “already pre-loaded on all Samsung TV products,” and said that it “ensures that the television sets can only be used by the rightful owners with a valid proof of purchase.”

TV Block kicks in after the user of the stolen television connects it to the internet, which is necessary in order to operate the smart TVs. Once connected, the serial number of the television pings the Samsung server, triggering a blocking mechanism that effectively disables all of the TV’s functions.

While the release only mentions the blocking function relative to the TVs that had been looted from the company’s warehouse, the protection could also ostensibly be applied to individual customers who’ve had their TVs stolen and report the device’s serial number to Samsung.

[…]

Source: Samsung Smart TVs Can Be Remotely Disabled If Stolen

This means that you could reroute the TVs to your own server and trigger the blocking mechanism yourself quite easily. Nice way to brick a whole load of Samsung TVs!

European Commission airs out new IoT device security draft law – interested parties have a week to weigh in

Infosec pros and other technically minded folk have just under a week left to comment on EU plans to introduce new regulations obligating consumer IoT device makers to address online security issues, data protection, privacy and fraud prevention.

Draft regulations applying to “internet-connected radio equipment and wearable radio equipment” are open for public comment until 27 August – and the resulting laws will apply across the bloc from the end of this year, according to the EU Commission.

Billed as assisting Internet of Things device security, the new regs will apply to other internet-connected gadgets in current use today, explicitly including “certain laptops” as well as “baby monitors, smart appliances, smart cameras and a number of other radio equipment”, “dongles, alarm systems, home automation systems” and more.

[…]

The Netherlands’ FME association has already raised public concerns about the scope of the EU’s plans, specifically raising the “feasibility of post market responsibility for cybersecurity”.

The trade association said: “If there is a low risk exploitable vulnerability; at what level can the manufacturer not release or delay a patch, and what documentation is required to demonstrate that this risk assessment was conducted with this outcome of a very low risk vulnerability?”

While there are certainly holes that can be picked in the draft regs, cheap and cheerful internet-connected devices pose a real risk to the wider internet because of the ease with which they can be hijacked by criminals.

[…]

Certain router makers have learned the hard way that end-of-life equipment that contain insecurities can have a reputational as well as security impact. That said, it’s perhaps unreasonable to expect kit makers to keep providing software patches for years after they’ve stopped shipping a device. Consumers cannot rely on news outlets shaming makers of internet-connected goods into providing better security; new laws are the inevitable next stage, and there’s a growing push for them on both sides of the Atlantic.

Device makers being banned from selling in the EU over security and data protection issues is not new. In 2017, the German telecoms regulator banned the sale of children’s smartwatches that allowed users to secretly listen in on nearby conversations and later that year, the French data protection agency issued a formal notice to a biz peddling allegedly insecure Bluetooth-enabled toys – Genesis Toys’ My Friend Cayla doll and the i-Que robot – because the doll could be misused to eavesdrop on kids. The manufacturers are also obliged to comply with the GDPR. However, the new draft law is evidence that certain loopholes might soon begin to close.

Source: European Commission airs out new IoT device security draft law – interested parties have a week to weigh in • The Register

A Misused Microsoft Tool Leaked Data from 47 Organizations

New research shows that misconfigurations of a widely used web tool have led to the leaking of tens of millions of data records.

Microsoft’s Power Apps, a popular development platform, allows organizations to quickly create web apps, replete with public facing websites and related backend data management. A lot of governments have used Power Apps to swiftly stand up covid-19 contact tracing interfaces, for instance.

However, incorrect configurations of the product can leave large troves of data publicly exposed to the web—which is exactly what has been happening.

Researchers with cybersecurity firm UpGuard recently discovered that as many as 47 different entities—including governments, large companies, and Microsoft itself—had misconfigured their Power Apps to leave data exposed.

The list includes some very large institutions, including the state governments of Maryland and Indiana and public agencies for New York City, such as the MTA. Large private companies, including American Airlines and transportation and logistics firm J.B. Hunt, have also suffered leaks.

UpGuard researchers write that the troves of leaked data have included a lot of sensitive material, including “personal information used for COVID-19 contact tracing, COVID-19 vaccination appointments, social security numbers for job applicants, employee IDs, and millions of names and email addresses.”
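
The misconfiguration boils down to a portal's OData list feed being readable without authentication. Here is a sketch of the kind of check that reveals it; the portal and list names are placeholders, not any real organisation's endpoint.

    # Requires: pip install requests
    import requests

    portal = "https://contoso.powerappsportals.com"   # placeholder portal
    feed = f"{portal}/_odata/contacts"                # placeholder list name

    resp = requests.get(feed, timeout=10)
    if resp.status_code == 200:
        records = resp.json().get("value", [])
        print(f"Feed is anonymously readable: {len(records)} records returned")
    else:
        print(f"Feed not readable anonymously (HTTP {resp.status_code})")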

[…]

Following UpGuard’s disclosures, Microsoft has since shifted permissions and default settings related to Power Apps to make the product more secure.

Source: A Misused Microsoft Tool Leaked Data from 47 Organizations

Sensitive Data On Afghan Allies Collected By The US Military Is Now In The Hands Of The Taliban

The problem with harvesting reams of sensitive data is that it presents a very tempting target for malicious hackers, enemy governments, and other wrongdoers. That hasn’t prevented anyone from collecting and storing all of this data, secure only in the knowledge this security will ultimately be breached.

[…]

The Taliban is getting everything we left behind. It’s not just guns, gear, and aircraft. It’s the massive biometric collections we amassed while serving as armed ambassadors of goodwill. The stuff the US government compiled to track its allies is now a handy repository that will allow the Taliban to hunt down its enemies. Ken Klippenstein and Sara Sirota have more details for The Intercept.

The devices, known as HIIDE, for Handheld Interagency Identity Detection Equipment, were seized last week during the Taliban’s offensive, according to a Joint Special Operations Command official and three former U.S. military personnel, all of whom worried that sensitive data they contain could be used by the Taliban. HIIDE devices contain identifying biometric data such as iris scans and fingerprints, as well as biographical information, and are used to access large centralized databases. It’s unclear how much of the U.S. military’s biometric database on the Afghan population has been compromised.

At first, it might seem that this will only allow the Taliban to high-five each other for making the US government’s shit list. But it wasn’t just used to track terrorists. It was used to track allies.

While billed by the U.S. military as a means of tracking terrorists and other insurgents, biometric data on Afghans who assisted the U.S. was also widely collected and used in identification cards, sources said.

[…]

Source: Sensitive Data On Afghan Allies Collected By The US Military Is Now In The Hands Of The Taliban | Techdirt

Zoom to pay $85M for lying about encryption and sending data to Facebook and Google

Zoom has agreed to pay $85 million to settle claims that it lied about offering end-to-end encryption and gave user data to Facebook and Google without the consent of users. The settlement between Zoom and the filers of a class-action lawsuit also covers security problems that led to rampant “Zoombombings.”

The proposed settlement would generally give Zoom users $15 or $25 each and was filed Saturday at US District Court for the Northern District of California. It came nine months after Zoom agreed to security improvements and a “prohibition on privacy and security misrepresentations” in a settlement with the Federal Trade Commission, but the FTC settlement didn’t include compensation for users.

As we wrote in November, the FTC said that Zoom claimed it offers end-to-end encryption in its June 2016 and July 2017 HIPAA compliance guides, in a January 2019 white paper, in an April 2017 blog post, and in direct responses to inquiries from customers and potential customers. In reality, “Zoom did not provide end-to-end encryption for any Zoom Meeting that was conducted outside of Zoom’s ‘Connecter’ product (which are hosted on a customer’s own servers), because Zoom’s servers—including some located in China—maintain the cryptographic keys that would allow Zoom to access the content of its customers’ Zoom Meetings,” the FTC said. In real end-to-end encryption, only the users themselves have access to the keys needed to decrypt content.

[…]

Source: Zoom to pay $85M for lying about encryption and sending data to Facebook and Google | Ars Technica

>83 million Web Cams, Baby Monitor Feeds and other IoT devices using Kalay backend Exposed

A vulnerability is lurking in numerous types of smart devices—including security cameras, DVRs, and even baby monitors—that could allow an attacker to access live video and audio streams over the internet and even take full control of the gadgets remotely. What’s worse, it’s not limited to a single manufacturer; it shows up in a software development kit that permeates more than 83 million devices, and over a billion connections to the internet each month.

The SDK in question is ThroughTek Kalay, which provides a plug-and-play system for connecting smart devices with their corresponding mobile apps. The Kalay platform brokers the connection between a device and its app, handles authentication, and sends commands and data back and forth. For example, Kalay offers built-in functionality to coordinate between a security camera and an app that can remotely control the camera angle. Researchers from the security firm Mandiant discovered the critical bug at the end of 2020, and they are publicly disclosing it today in conjunction with the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency.

“You build Kalay in, and it’s the glue and functionality that these smart devices need,” says Jake Valletta, a director at Mandiant. “An attacker could connect to a device at will, retrieve audio and video, and use the remote API to then do things like trigger a firmware update, change the panning angle of a camera, or reboot the device. And the user doesn’t know that anything is wrong.”

The flaw is in the registration mechanism between devices and their mobile applications. The researchers found that this most basic connection hinges on each device’s “UID,” a unique Kalay identifier. An attacker who learns a device’s UID—which Valletta says could be obtained through a social engineering attack, or by searching for web vulnerabilities of a given manufacturer—and who has some knowledge of the Kalay protocol can reregister the UID and essentially hijack the connection the next time someone attempts to legitimately access the target device. The user will experience a few seconds of lag, but then everything proceeds normally from their perspective.

The attacker, though, can grab special credentials—typically a random, unique username and password—that each manufacturer sets for its devices. With the UID plus this login the attacker can then control the device remotely through Kalay without any other hacking or manipulation. Attackers can also potentially use full control of an embedded device like an IP camera as a jumping-off point to burrow deeper into a target’s network.

By exploiting the flaw, an attacker could watch video feeds in real time, potentially viewing sensitive security footage or peeking inside a baby’s crib. They could launch a denial of service attack against cameras or other gadgets by shutting them down. Or they could install malicious firmware on target devices. Additionally, since the attack works by grabbing credentials and then using Kalay as intended to remotely manage embedded devices, victims wouldn’t be able to oust intruders by wiping or resetting their equipment. Hackers could simply relaunch the attack.

“The affected ThroughTek P2P products may be vulnerable to improper access controls,” CISA wrote in its Tuesday advisory. “This vulnerability can allow an attacker to access sensitive information (such as camera feeds) or perform remote code execution. … CISA recommends users take defensive measures to minimize the risk of exploitation of this vulnerability.”

[…]

To defend against exploitation, devices need to be running Kalay version 3.1.10, originally released by ThroughTek in late 2018, or higher. But even the current Kalay SDK version (3.1.5) does not automatically fix the vulnerability. Instead, ThroughTek and Mandiant say that to plug the hole manufacturers must turn on two optional Kalay features: the encrypted communication protocol DTLS and the API authentication mechanism AuthKey.

[…]

“For the past three years, we have been informing our customers to upgrade their SDK,” ThroughTek’s Chen says. “Some old devices lack OTA [over the air update] function which makes the upgrade impossible. In addition, we have customers who don’t want to enable the DTLS because it would slow down the connection establishment speed, therefore are hesitant to upgrade.”

[…]

Source: Millions of Web Camera and Baby Monitor Feeds Are Exposed | WIRED

China orders annual security reviews for all critical information infrastructure operators

An announcement by the Cyberspace Administration of China (CAC) said that cyber attacks are currently frequent in the Middle Kingdom, and the security challenges facing critical information infrastructure are severe. The announcement therefore defines infosec regulations and responsibilities.

The CAC referred to critical infrastructure as “the nerve center of economic and social operations and the top priority of network security”. China’s definition of critical information infrastructure can be found in Article 2 of the State Council’s “Regulations on the Security Protection of Critical Information Infrastructure” and boils down to any system that could suffer significant damage from a cyber attack, and/or have such an attack damage society at large or even national security.

“The regulations clarify that important network facilities and information systems in key industries and fields belong to critical information infrastructure,” wrote the CAC in its announcement (as translated from Mandarin), adding that the state was adopting measures to monitor, defend and handle network risks and intrusions, originating domestically and globally.

The regulations themselves are lengthy and detailed, but the theme is that all Chinese enterprises whose operations depend on networks must conduct annual security reviews, report breaches to government, and establish teams to monitor security constantly.

Those teams get to develop emergency plans and carry out emergency drills on a regular basis, in accordance with disaster management national plans.

If an incident is ever discovered, reporting and escalation to national authorities is mandatory.

The lengthy document also details a variety of organizational and logistical “clarifications”, while also outlining the state’s ability to adjust identification rules dynamically, how safeguarding measures can be implemented, and legal responsibilities and penalties for negligent parties.

[…]

Source: China orders annual security reviews for all critical information infrastructure operators • The Register

This sounds sensible. The Dutch NCSC has guidelines and an audit checklist recommending this, but it is not mandatory anywhere, and very few companies actually use the monster checklist, let alone implement it. Nowadays that is not really acceptable behaviour any more.

Senators ask Amazon how it will use palm print data from its stores

If you’re concerned that Amazon might misuse palm print data from its One service, you’re not alone. TechCrunch reports that Senators Amy Klobuchar, Bill Cassidy and Jon Ossoff have sent a letter to new Amazon chief Andy Jassy asking him to explain how the company might expand use of One’s palm print system beyond stores like Amazon Go and Whole Foods. They’re also worried the biometric payment data might be used for more than payments, such as for ads and tracking.

The politicians are concerned that Amazon One reportedly uploads palm print data to the cloud, creating “unique” security issues. The move also casts doubt on Amazon’s “respect” for user privacy, the senators said.

In addition to asking about expansion plans, the senators wanted Jassy to outline the number of third-party One clients, the privacy protections for those clients and their customers and the size of the One user base. The trio gave Amazon until August 26th to provide an answer.

[…]

The company has offered $10 in credit to potential One users, raising questions about its eagerness to collect palm print data. This also isn’t the first time Amazon has clashed with government

[…]

Amazon declined to comment, but pointed to an earlier blog post where it said One palm images were never stored on-device and were sent encrypted to a “highly secure” cloud space devoted just to One content.

Source: Senators ask Amazon how it will use palm print data from its stores (updated) | Engadget

Basically, keeping all these palm prints in the cloud is an incredibly insecure way to store biometric data that people can’t ever change, short of burning their palms off.