The Linkielist

Linking ideas with the world

Decoupling for IT Security (=privacy)

Whether we like it or not, we all use the cloud to communicate and to store and process our data. We use dozens of cloud services, sometimes indirectly and unwittingly. We do so because the cloud brings real benefits to individuals and organizations alike. We can access our data across multiple devices, communicate with anyone from anywhere, and command a remote data center’s worth of power from a handheld device.

But using the cloud means our security and privacy now depend on cloud providers. Remember: the cloud is just another way of saying “someone else’s computer.” Cloud providers are single points of failure and prime targets for hackers to scoop up everything from proprietary corporate communications to our personal photo albums and financial documents.

The risks we face from the cloud today are not an accident. For Google to show you your work emails, it has to store many copies across many servers. Even if they’re stored in encrypted form, Google must decrypt them to display your inbox on a webpage. When Zoom coordinates a call, its servers receive and then retransmit the video and audio of all the participants, learning who’s talking and what’s said. For Apple to analyze and share your photo album, it must be able to access your photos.

Hacks of cloud services happen so often that it’s hard to keep up. Breaches can be so large as to affect nearly every person in the country, as in the Equifax breach of 2017, or a large fraction of the Fortune 500 and the U.S. government, as in the SolarWinds breach of 2019-20.

It’s not just attackers we have to worry about. Some companies use their access—benefiting from weak laws, complex software, and lax oversight—to mine and sell our data.

[…]

The less someone knows, the less they can put you and your data at risk. In security this is called Least Privilege. The decoupling principle applies that idea to cloud services by making sure systems know as little as possible while doing their jobs. It states that we gain security and privacy by separating private data that today is unnecessarily concentrated.

To unpack that a bit, consider the three primary modes for working with our data as we use cloud services: data in motion, data at rest, and data in use. We should decouple them all.

Our data is in motion as we exchange traffic with cloud services such as videoconferencing servers, remote file-storage systems, and other content-delivery networks. Our data at rest, while sometimes on individual devices, is usually stored or backed up in the cloud, governed by cloud provider services and policies. And many services use the cloud to do extensive processing on our data, sometimes without our consent or knowledge. Most services involve more than one of these modes.

[…]

Cryptographer David Chaum first applied the decoupling approach in security protocols for anonymity and digital cash in the 1980s, long before the advent of online banking or cryptocurrencies. Chaum asked: how can a bank or a network service provider provide a service to its users without spying on them while doing so?

Chaum’s ideas included sending Internet traffic through multiple servers run by different organizations and divvying up the data so that a breach of any one node reveals minimal information about users or usage. Although these ideas have been influential, they have found only niche uses, such as in the popular Tor browser.

Trust, but Don’t Identify

The decoupling principle can protect the privacy of data in motion, such as financial transactions and Web browsing patterns that currently are wide open to vendors, banks, websites, and Internet Service Providers (ISPs).

1. Barath orders Bruce’s audiobook from Audible. 2. His bank does not know what he is buying, but it guarantees the payment. 3. A third party decrypts the order details but does not know who placed the order. 4. Audible delivers the audiobook and receives the payment.

DECOUPLED E-COMMERCE: By inserting an independent verifier between the bank and the seller and by blinding the buyer’s identity from the verifier, the seller and the verifier cannot identify the buyer, and the bank cannot identify the product purchased. But all parties can trust that the signed payment is valid.
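
The blinding step at the heart of this flow goes back to Chaum's blind signatures, and it is compact enough to sketch. The following is a minimal Python illustration with textbook-sized RSA numbers, not a production scheme (real deployments use full-size keys and proper padding): the bank signs a value it cannot read, and anyone can later verify the unblinded signature.

```python
from math import gcd
import secrets

# Toy RSA key for the "bank" (textbook numbers, illustration only)
p, q = 61, 53
n = p * q                         # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))

h = 42                            # stand-in for a hash of the payment details

# Buyer: blind the value with a random factor r before sending it to the bank
while True:
    r = secrets.randbelow(n)
    if r > 1 and gcd(r, n) == 1:
        break
blinded = (h * pow(r, e, n)) % n

# Bank: signs the blinded value without ever learning h (what is being bought)
blinded_sig = pow(blinded, d, n)

# Buyer: strip the blinding factor to obtain an ordinary RSA signature on h
sig = (blinded_sig * pow(r, -1, n)) % n

# Seller/verifier: checks the bank's signature, learning nothing about the buyer
assert pow(sig, e, n) == h
```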

1. Bruce’s browser sends a doubly encrypted request for the IP address of sigcomm.org. 2. A third-party proxy server decrypts one layer and passes on the request, replacing Bruce’s identity with an anonymous ID. 3. An Oblivious DNS server decrypts the request, looks up the IP address, and sends it back in an encrypted reply. 4. The proxy server forwards the encrypted reply to Bruce’s browser. 5. Bruce’s browser decrypts the response to obtain the IP address of sigcomm.org.

DECOUPLED WEB BROWSING: ISPs can track which websites their users visit because requests to the Domain Name System (DNS), which converts domain names to IP addresses, are unencrypted. A new protocol called Oblivious DNS can protect users’ browsing requests from third parties. Each name-resolution request is encrypted twice and then sent to an intermediary (a “proxy”) that strips out the user’s IP address and decrypts the outer layer before passing the request to a domain name server, which then decrypts the actual request. Neither the ISP nor any other computer along the way can see what name is being queried. The Oblivious resolver has the key needed to decrypt the request but no information about who placed it. The resolver encrypts its reply so that only the user can read it.
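
The double-wrapping is easy to picture in code. The sketch below models the pattern rather than the actual Oblivious DNS wire format: symmetric Fernet keys from the Python cryptography package stand in for the protocol's public-key operations, and the returned address is a placeholder.

```python
import json
from cryptography.fernet import Fernet

# Stand-ins for the proxy's and the oblivious resolver's long-term keys
proxy = Fernet(Fernet.generate_key())
resolver = Fernet(Fernet.generate_key())

# --- Client (Bruce's stub resolver) -----------------------------------------
reply_key = Fernet.generate_key()                     # only the client holds this
inner = resolver.encrypt(json.dumps(
    {"qname": "sigcomm.org", "reply_key": reply_key.decode()}).encode())
outer = proxy.encrypt(inner)                          # doubly encrypted request

# --- Proxy: learns who is asking, but not what ------------------------------
forwarded = proxy.decrypt(outer)                      # still ciphertext to the proxy
# (here the proxy would also replace the client's address with an anonymous ID)

# --- Oblivious resolver: learns what is asked, but not by whom ---------------
request = json.loads(resolver.decrypt(forwarded))
answer = Fernet(request["reply_key"].encode()).encrypt(b"192.0.2.1")  # placeholder IP

# --- Client decrypts the reply relayed back through the proxy ----------------
print(Fernet(reply_key).decrypt(answer).decode())     # "192.0.2.1"
```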

Similar methods have been extended beyond DNS to multiparty-relay protocols that protect the privacy of all Web browsing through free services such as Tor and subscription services such as INVISV Relay and Apple’s iCloud Private Relay.

[…]

Meetings that were once held in a private conference room are now happening in the cloud, and third parties like Zoom see it all: who, what, when, where. There’s no reason a videoconferencing company has to learn such sensitive information about every organization it provides services to. But that’s the way it works today, and we’ve all become used to it.

There are multiple threats to the security of that Zoom call. A Zoom employee could go rogue and snoop on calls. Zoom could spy on calls of other companies or harvest and sell user data to data brokers. It could use your personal data to train its AI models. And even if Zoom and all its employees are completely trustworthy, the risk of Zoom getting breached is omnipresent. Whatever Zoom can do with your data in motion, a hacker can do to that same data in a breach. Decoupling data in motion could address those threats.

[…]

Most storage and database providers started encrypting data on disk years ago, but that’s not enough to ensure security. In most cases, the data is decrypted every time it is read from disk. A hacker or malicious insider silently snooping at the cloud provider could thus intercept your data despite it having been encrypted.

Cloud-storage companies have at various times harvested user data for AI training or to sell targeted ads. Some hoard it and offer paid access back to us or just sell it wholesale to data brokers. Even the best corporate stewards of our data are getting into the advertising game, and the decade-old feudal model of security—where a single company provides users with hardware, software, and a variety of local and cloud services—is breaking down.

Decoupling can help us retain the benefits of cloud storage while keeping our data secure. As with data in motion, the risks begin with access the provider has to raw data (or that hackers gain in a breach). End-to-end encryption, with the end user holding the keys, ensures that the cloud provider can’t independently decrypt data from disk.
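
As a minimal sketch of that arrangement (using the Python cryptography package, with a placeholder file name): the key is generated and kept on the user's devices, so the provider only ever stores and serves ciphertext.

```python
from cryptography.fernet import Fernet

# The key never leaves the user's device; in practice it might be derived
# from a passphrase or stored in a local keychain.
key = Fernet.generate_key()
box = Fernet(key)

ciphertext = box.encrypt(b"2023 tax return, draft 3")
cloud_bucket = {"documents/taxes-2023.pdf": ciphertext}   # all the provider ever sees

# Later, on any of the user's devices that holds the key:
plaintext = box.decrypt(cloud_bucket["documents/taxes-2023.pdf"])
assert plaintext == b"2023 tax return, draft 3"
```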

[…]

Modern protocols for decoupled data storage, like Tim Berners-Lee’s Solid, provide this sort of security. Solid is a protocol for distributed personal data stores, called pods. By giving users control over both where their pod is located and who has access to the data within it—at a fine-grained level—Solid ensures that data is under user control even if the hosting provider or app developer goes rogue or has a breach. In this model, users and organizations can manage their own risk as they see fit, sharing only the data necessary for each particular use.
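
A toy model of the pod idea, and emphatically not the actual Solid API (which is built on Linked Data, WebIDs, and standard access-control vocabularies): each resource carries its own grant list, and the host simply enforces whatever the owner has configured.

```python
from dataclasses import dataclass, field

@dataclass
class Pod:
    """Minimal model of a personal data store with per-resource grants."""
    owner: str
    grants: dict = field(default_factory=dict)   # resource path -> set of agents
    data: dict = field(default_factory=dict)     # resource path -> content

    def grant(self, resource: str, agent: str) -> None:
        self.grants.setdefault(resource, set()).add(agent)

    def revoke(self, resource: str, agent: str) -> None:
        self.grants.get(resource, set()).discard(agent)

    def read(self, resource: str, agent: str) -> str:
        if agent != self.owner and agent not in self.grants.get(resource, set()):
            raise PermissionError(f"{agent} has no access to {resource}")
        return self.data[resource]

pod = Pod(owner="barath")
pod.data["/finance/2023-loan.ttl"] = "…credit history…"
pod.grant("/finance/2023-loan.ttl", "https://bank.example/agent")   # this resource only
print(pod.read("/finance/2023-loan.ttl", "https://bank.example/agent"))
```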

[…]

The last few years have seen the advent of general-purpose, hardware-enabled secure computation. This is powered by special functionality on processors known as trusted execution environments (TEEs) or secure enclaves. TEEs decouple who runs the chip (a cloud provider, such as Microsoft Azure) from who secures the chip (a processor vendor, such as Intel) and from who controls the data being used in the computation (the customer or user). A TEE can keep the cloud provider from seeing what is being computed. The results of a computation are sent via a secure tunnel out of the enclave or encrypted and stored. A TEE can also generate a signed attestation that it actually ran the code that the customer wanted to run.

With TEEs in the cloud, the final piece of the decoupling puzzle drops into place. An organization can keep and share its data securely at rest, move it securely in motion, and decrypt and analyze it in a TEE such that the cloud provider doesn’t have access. Once the computation is done, the results can be reencrypted and shipped off to storage. CPU-based TEEs are now widely available among cloud providers, and soon GPU-based TEEs—useful for AI applications—will be common as well.
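
A toy model of that workflow, not any vendor's TEE SDK: a hypothetical Enclave object holds a key the host never sees, attests to the code it loaded, and only releases encrypted results. In a real deployment the client negotiates a session key with the enclave after verifying its attestation; the sketch reuses one key to stay short.

```python
import hashlib
import hmac
from cryptography.fernet import Fernet

VENDOR_KEY = b"stand-in for the CPU vendor's attestation key"

class Enclave:
    """Toy model of a TEE: data is only ever decrypted inside this object."""
    def __init__(self, code):
        self.code = code
        self.session = Fernet(Fernet.generate_key())   # key lives inside the enclave

    def attest(self) -> bytes:
        # Real TEEs sign a measurement (hash) of the loaded code with a hardware key
        measurement = hashlib.sha256(self.code.__code__.co_code).digest()
        return hmac.new(VENDOR_KEY, measurement, "sha256").digest()

    def run(self, encrypted_input: bytes) -> bytes:
        data = self.session.decrypt(encrypted_input)   # plaintext exists only in here
        return self.session.encrypt(self.code(data))   # results leave encrypted

def credit_score(data: bytes) -> bytes:                # hypothetical workload
    return b"score=712"

enclave = Enclave(credit_score)

# Customer verifies the attestation before trusting the enclave with any data
expected = hmac.new(VENDOR_KEY,
                    hashlib.sha256(credit_score.__code__.co_code).digest(),
                    "sha256").digest()
assert hmac.compare_digest(enclave.attest(), expected)

result = enclave.run(enclave.session.encrypt(b"applicant history ..."))
print(enclave.session.decrypt(result))                 # b'score=712'
```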

[…]

Decoupling also allows us to look at security more holistically. For example, we can dispense with the distinction between security and privacy. Historically, privacy meant freedom from observation, usually for an individual person. Security, on the other hand, was about keeping an organization’s data safe and preventing an adversary from doing bad things to its resources or infrastructure.

There are still rare instances where security and privacy differ, but organizations and individuals are now using the same cloud services and facing similar threats. Security and privacy have converged, and we can usefully think about them together as we apply decoupling.

[…]

Decoupling isn’t a panacea. There will always be new, clever side-channel attacks. And most decoupling solutions assume a degree of noncollusion between independent companies or organizations. But that noncollusion is already an implicit assumption today: we trust that Google and Advanced Micro Devices will not conspire to break the security of the TEEs they deploy, for example, because the reputational harm from being found out would hurt their businesses. The primary risk, real but also often overstated, is if a government secretly compels companies to introduce backdoors into their systems. In an age of international cloud services, this would be hard to conceal and would cause irreparable harm.

[…]

Imagine that individuals and organizations held their credit data in cloud-hosted repositories that enable fine-grained encryption and access control. Applying for a loan could then take advantage of all three modes of decoupling. First, the user could employ Solid or a similar technology to grant access to Equifax and a bank only for the specific loan application. Second, the communications to and from secure enclaves in the cloud could be decoupled and secured to conceal who is requesting the credit analysis and the identity of the loan applicant. Third, computations by a credit-analysis algorithm could run in a TEE. The user could use an external auditor to confirm that only that specific algorithm was run. The credit-scoring algorithm might be proprietary, and that’s fine: in this approach, Equifax doesn’t need to reveal it to the user, just as the user doesn’t need to give Equifax access to unencrypted data outside of a TEE.

Building this is easier said than done, of course. But it’s practical today, using widely available technologies. The barriers are more economic than technical.

[…]

One of the challenges of trying to regulate tech is that industry incumbents push for tech-only approaches that simply whitewash bad practices. For example, when Facebook rolls out “privacy-enhancing” advertising, but still collects every move you make, has control of all the data you put on its platform, and is embedded in nearly every website you visit, that privacy technology does little to protect you. We need to think beyond minor, superficial fixes.

Decoupling might seem strange at first, but it’s built on familiar ideas. Computing’s main tricks are abstraction and indirection. Abstraction involves hiding the messy details of something inside a nice clean package: when you use Gmail, you don’t have to think about the hundreds of thousands of Google servers that have stored or processed your data. Indirection involves creating a new intermediary between two existing things, such as when Uber wedged its app between passengers and drivers.

The cloud as we know it today is born of three decades of increasing abstraction and indirection. Communications, storage, and compute infrastructure for a typical company were once run on a server in a closet. Next, companies no longer had to maintain a server closet, but could rent a spot in a dedicated colocation facility. After that, colocation facilities decided to rent out their own servers to companies. Then, with virtualization software, companies could get the illusion of having a server while actually just running a virtual machine on a server they rented somewhere. Finally, with serverless computing and most types of software as a service, we no longer know or care where or how software runs in the cloud, just that it does what we need it to do.

[…]

We’re now at a turning point where we can add further abstraction and indirection to improve security, turning the tables on the cloud providers and taking back control as organizations and individuals while still benefiting from what they do.

The needed protocols and infrastructure exist, and there are services that can do all of this already, without sacrificing the performance, quality, and usability of conventional cloud services.

But we cannot just rely on industry to take care of this. Self-regulation is a time-honored stall tactic: a piecemeal or superficial tech-only approach would likely undermine the will of the public and regulators to take action. We need a belt-and-suspenders strategy, with government policy that mandates decoupling-based best practices, a tech sector that implements this architecture, and public awareness of both the need for and the benefits of this better way forward.

Source: Essays: Decoupling for Security – Schneier on Security

In a first, cryptographic keys protecting SSH connections stolen in new attack

For the first time, researchers have demonstrated that a large portion of cryptographic keys used to protect data in computer-to-server SSH traffic are vulnerable to complete compromise when naturally occurring computational errors occur while the connection is being established.

Underscoring the importance of their discovery, the researchers used their findings to calculate the private portion of almost 200 unique SSH keys they observed in public Internet scans taken over the past seven years. The researchers suspect keys used in IPsec connections could suffer the same fate. SSH is the cryptographic protocol used in secure shell connections that allows computers to remotely access servers, usually in security-sensitive enterprise environments. IPsec is a protocol used by virtual private networks that route traffic through an encrypted tunnel.

The vulnerability occurs when there are errors during the signature generation that takes place when a client and server are establishing a connection. It affects only keys using the RSA cryptographic algorithm, which the researchers found in roughly a third of the SSH signatures they examined. That translates to roughly 1 billion signatures out of the 3.2 billion signatures examined. Of the roughly 1 billion RSA signatures, about one in a million exposed the private key of the host.

While the percentage is infinitesimally small, the finding is nonetheless surprising for several reasons—most notably because most SSH software in use has deployed a countermeasure for decades that checks for signature faults before sending a signature over the Internet. Another reason for the surprise is that until now, researchers believed that signature faults exposed only RSA keys used in the TLS—or Transport Layer Security—protocol encrypting Web and email connections. They believed SSH traffic was immune from such attacks because passive attackers—meaning adversaries simply observing traffic as it goes by—couldn’t see some of the necessary information when the errors happened.

[…]

The new findings are laid out in a paper published earlier this month titled “Passive SSH Key Compromise via Lattices.” It builds on a series of discoveries spanning more than two decades. In 1996 and 1997, researchers published findings that, taken together, concluded that when naturally occurring computational errors resulted in a single faulty RSA signature, an adversary could use it to compute the private portion of the underlying key pair.

The reason: By comparing the malformed signature with a valid signature, the adversary could perform a GCD—or greatest common divisor—mathematical operation that, in turn, derived one of the prime numbers underpinning the security of the key. This led to a series of attacks that relied on actively triggering glitches during session negotiation, capturing the resulting faulty signature, and eventually compromising the key. Triggering the errors relied on techniques such as tampering with a computer’s power supply or shining a laser on a smart card.
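
A toy numeric version of that fault-plus-GCD recovery, using textbook-sized RSA parameters (real keys are 2048 bits or more), shows how a single corrupted CRT half leaks a prime factor:

```python
from math import gcd

# Textbook-sized RSA key; illustration only
p, q = 61, 53
n = p * q                                   # public modulus
e = 17
d = pow(e, -1, (p - 1) * (q - 1))           # private exponent
m = 65                                      # message representative being signed

# CRT signing: compute the signature mod p and mod q, then recombine
sp = pow(m, d % (p - 1), p)
sq = pow(m, d % (q - 1), q)

def crt(a_p, a_q):
    """Combine residues mod p and mod q into a value mod n (Garner's formula)."""
    h = (pow(q, -1, p) * (a_p - a_q)) % p
    return (a_q + h * q) % n

good_sig = crt(sp, sq)
faulty_sig = crt(sp, (sq + 1) % q)          # a single fault corrupts the mod-q half

assert pow(good_sig, e, n) == m             # the good signature verifies
assert pow(faulty_sig, e, n) != m           # the faulty one does not...

# ...but anyone who observes it can recover a prime factor of n:
leaked_p = gcd((pow(faulty_sig, e, n) - m) % n, n)
print(leaked_p == p)                        # True: the private key is compromised
```

The standard countermeasure, present in OpenSSH and most TLS stacks for years, is for the signer to verify each signature against its own public key before releasing it and to suppress any signature that fails the check.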

Then, in 2015, a researcher showed for the first time that attacks on keys used during TLS sessions were possible even when an adversary didn’t have physical access to the computing device. Instead, the attacker could simply connect to the device and opportunistically wait for a signature error to occur on its own. Last year, researchers found that even with countermeasures added to most TLS implementations as long as two decades earlier, they were still able to passively observe faulty signatures that allowed them to compromise the RSA keys of a small population of VPNs, network devices, and websites, most notably Baidu.com, a top-10 Alexa property.

[…]

The attack described in the paper published this month clears the hurdle of missing key material exposed in faulty SSH signatures by harnessing an advanced cryptanalytic technique involving the same mathematics found in lattice-based cryptography. The technique was first described in 2009, but the paper demonstrated only that it was theoretically possible to recover a key using incomplete information in a faulty signature. This month’s paper implements the technique in a real-world attack that uses a naturally occurring corrupted SSH signature to recover the underlying RSA key that generated it.

[…]

The researchers traced the keys they compromised to devices that used custom, closed-source SSH implementations that didn’t implement the countermeasures found in OpenSSH and other widely used open source code libraries. The devices came from four manufacturers: Cisco, Zyxel, Hillstone Networks, and Mocana.

[…]

Once attackers have possession of the secret key through passive observation of traffic, they can mount an active Mallory-in-the-middle attack against the SSH server, in which they use the key to impersonate the server and respond to incoming SSH traffic from clients. From there, the attackers can do things such as recover the client’s login credentials. Similar post-exploit attacks are also possible against IPsec servers if faults expose their private keys.

[…]

A single flip of a bit—in which a 0 residing in a memory chip register turns to 1 or vice versa—is all that’s required to trigger an error that exposes a secret RSA key. Consequently, it’s crucial that the countermeasures that detect and suppress such errors work with near-100 percent accuracy.

[…]

Source: In a first, cryptographic keys protecting SSH connections stolen in new attack | Ars Technica

European digital identity: Council and Parliament reach a provisional agreement on eID

[…]

Under the new law, member states will offer citizens and businesses digital wallets that will be able to link their national digital identities with proof of other personal attributes (e.g., driving licence, diplomas, bank account). Citizens will be able to prove their identity and share electronic documents from their digital wallets with a click of a button on their mobile phone.

The new European digital identity wallets will enable all Europeans to access online services with their national digital identification, which will be recognised throughout Europe, without having to use private identification methods or unnecessarily sharing personal data. User control ensures that only information that needs to be shared will be shared.

Concluding the initial provisional agreement

Since the initial provisional agreement on some of the main elements of the legislative proposal at the end of June this year, a series of technical meetings followed in order to complete the text and finalise the file in full. Some relevant aspects agreed by the co-legislators today are:

  • the e-signatures: the wallet will be free to use for natural persons by default, but member states may provide for measures to ensure that the free-of-charge use is limited to non-professional purposes
  • the wallet’s business model: the issuance, use and revocation will be free of charge for all natural persons
  • the validation of electronic attestation of attributes: member states shall provide free-of-charge validation mechanisms only to verify the authenticity and validity of the wallet and of the relying parties’ identity
  • the code for the wallets: the application software components will be open source, but member states are granted necessary leeway so that, for justified reasons, specific components other than those installed on user devices may not be disclosed
  • consistency between the wallet as an eID means and the underpinning scheme under which it is issued has been ensured

Finally, the revised law clarifies the scope of the qualified web authentication certificates (QWACs), which ensures that users can verify who is behind a website, while preserving the current well-established industry security rules and standards.

Next steps

Technical work will continue to complete the legal text in accordance with the provisional agreement. When finalised, the text will be submitted to the member states’ representatives (Coreper) for endorsement. Subject to a legal/linguistic review, the revised regulation will then need to be formally adopted by the Parliament and the Council before it can be published in the EU’s Official Journal and enter into force.

[…]

Source: European digital identity: Council and Parliament reach a provisional agreement on eID – Consilium

Cisco Can’t Stop Using Hard-Coded Passwords

There’s a new Cisco vulnerability in its Emergency Responder product:

This vulnerability is due to the presence of static user credentials for the root account that are typically reserved for use during development. An attacker could exploit this vulnerability by using the account to log in to an affected system. A successful exploit could allow the attacker to log in to the affected system and execute arbitrary commands as the root user.

This is not the first time Cisco products have had hard-coded passwords made public. You’d think it would learn.

Source: Cisco Can’t Stop Using Hard-Coded Passwords – Schneier on Security

Troy Hunt scours the dark web for your stolen data – a look at HaveIBeenPwned, a one-man operation

[…] Have I Been Pwned started life as a hobby project. In fact, Troy wasn’t working in the cybersecurity industry until a chance encounter tweaked his curiosity.

[…]

Hackers had stolen the email addresses and passwords of 152 million of Adobe’s customers in November 2013 — including, as it turned out, Troy’s.

Only, he wasn’t an Adobe customer. He did some digging and found that Adobe had acquired another company that he did have an account with, and his data along with it.

But that wasn’t where it ended. Another question weighed on Troy’s mind — one he would soon become synonymous with. Where else had his data been leaked?

So, two months after the Adobe breach, he launched Have I Been Pwned — a website that would answer this exact question for anyone in the world.

Even though it’s grown into an industry behemoth, the day-to-day reality of running the site hasn’t changed all that much since 2013.

[…]

He only collects (and encrypts) the mobile numbers, emails and passwords that he finds in the breaches, discarding the victims’ names, physical addresses, bank details and other sensitive information.

The idea is to let users find out where their data has been leaked from, but without exposing them to further risk.
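
The clearest example of that design is the k-anonymity scheme behind the service's public Pwned Passwords range API: a client sends only the first five hex characters of a SHA-1 hash, so the service never learns which password, or whose, is being checked. A minimal sketch:

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how often a password appears in known breaches, via k-anonymity.

    Only the first five hex characters of the SHA-1 hash leave the machine;
    the full hash, and the password itself, never do.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

print(pwned_count("password123"))   # a very large number: don't use this password
```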

Once he identifies where a data breach has occurred, Troy also contacts the organisation responsible to allow it to inform its users before he does. This, he says, is often the hardest step of the process because he has to convince them it’s legitimate and not some kind of scam itself.

He’s not required to give organisations this opportunity, much less persist when they ignore his messages or accuse him of trying to shake them down for money.

[…]

These days, major tech companies like Mozilla and 1Password use Have I Been Pwned, and Troy likes to point out that dozens of national governments and law enforcement agencies also partner with his service.

[…]

The reality is Troy doesn’t answer to an electorate, or even a board.

“He’s not a company that’s audited. He’s just a dude on the web,” says Jane Andrew, an expert on data breaches at the University of Sydney.

“I think it’s so shocking that this is where we find out information about ourselves.

“It’s just one guy facilitating this. It’s a critical global risk.”

She says governments and law enforcement have, in general, left it to individuals to deal with the fallout from data breaches.

[…]

Without an effective global regulator, Professor Andrew says, a crucial part of the world’s cybersecurity infrastructure is left to rely on the goodwill of this one man on the Gold Coast.

[…]

Source: Troy Hunt scours the dark web for your stolen data — but he’s just trying to help – ABC News

Sourcegraph published admin token, someone creates API endpoint with free access

An unknown hacker gained administrative control of Sourcegraph, an AI-driven service used by developers at Uber, Reddit, Dropbox, and other companies, and used it to provide free access to resources that normally would have required payment.

In the process, the hacker(s) may have accessed personal information belonging to Sourcegraph users, Diego Comas, Sourcegraph’s head of security, said in a post on Wednesday. For paid users, the information exposed included license keys and the names and email addresses of license key holders. For non-paying users, it was limited to email addresses associated with their accounts. Private code, emails, passwords, usernames, or other personal information were inaccessible.

Free-for-all

The hacker gained administrative access by obtaining an authentication token a Sourcegraph developer accidentally included in code published to a public Sourcegraph instance hosted on Sourcegraph.com. After creating a normal Sourcegraph user account, the hacker used the token to elevate the account’s privileges to those of an administrator. The access token appeared in a pull request posted on July 14, the user account was created on August 28, and the elevation to admin occurred on August 30.

“The malicious user, or someone connected to them, created a proxy app allowing users to directly call Sourcegraph’s APIs and leverage the underlying LLM [large language model],” Comas wrote. “Users were instructed to create free Sourcegraph.com accounts, generate access tokens, and then request the malicious user to greatly increase their rate limit. On August 30 (2023-08-30 13:25:54 UTC), the Sourcegraph security team identified the malicious site-admin user, revoked their access, and kicked off an internal investigation for both mitigation and next steps.”

The resource free-for-all generated a spike in calls to Sourcegraph programming interfaces, which are normally rate-limited for free accounts.

A graph showing API usage from July 31 to August 29, with a major spike at the end. (Image: Sourcegraph)

“The promise of free access to Sourcegraph API prompted many to create accounts and start using the proxy app,” Comas wrote. “The app and instructions on how to use it quickly made its way across the web, generating close to 2 million views. As more users discovered the proxy app, they created free Sourcegraph.com accounts, adding their access tokens, and accessing Sourcegraph APIs illegitimately.”

[…]

Source: Hacker gains admin control of Sourcegraph and gives free access to the masses | Ars Technica

Windows feature that resets system clocks based on random data is wreaking havoc

A few months ago, an engineer in a data center in Norway encountered some perplexing errors that caused a Windows server to suddenly reset its system clock to 55 days in the future. The engineer relied on the server to maintain a routing table that tracked cell phone numbers in real time as they moved from one carrier to the other. A jump of eight weeks had dire consequences because it caused numbers that had yet to be transferred to be listed as having already been moved and numbers that had already been transferred to be reported as pending.

[…]

The culprit was a little-known feature in Windows known as Secure Time Seeding. Microsoft introduced the time-keeping feature in 2016 as a way to ensure that system clocks were accurate. Windows systems with clocks set to the wrong time can cause disastrous errors when they can’t properly parse timestamps in digital certificates or they execute jobs too early, too late, or out of the prescribed order. Secure Time Seeding, Microsoft said, was a hedge against failures in the battery-powered onboard devices designed to keep accurate time even when the machine is powered down.

[…]

Sometime last year, a separate engineer named Ken began seeing similar time drifts. They were limited to two or three servers and occurred every few months. Sometimes, the clock times jumped by a matter of weeks. Other times, the times changed to as late as the year 2159.

“It has exponentially grown to be more and more servers that are affected by this,” Ken wrote in an email. “In total, we have around 20 servers (VMs) that have experienced this, out of 5,000. So it’s not a huge amount, but it is considerable, especially considering the damage this does. It usually happens to database servers. When a database server jumps in time, it wreaks havoc, and the backup won’t run, either, as long as the server has such a huge offset in time. For our customers, this is crucial.”

Simen and Ken, who both asked to be identified only by their first names because they weren’t authorized by their employers to speak on the record, soon found that engineers and administrators had been reporting the same time resets since 2016.

[…]

“At this point, we are not completely sure why secure time seeding is doing this,” Ken wrote in an email. “Being so seemingly random, it’s difficult to [understand]. Microsoft hasn’t really been helpful in trying to track this, either. I’ve sent over logs and information, but they haven’t really followed this up. They seem more interested in closing the case.”

The logs Ken sent looked like the ones shown in the two screenshots below. They captured the system events that occurred immediately before and after the STS changed the times. The selected line in the first image shows the bounds of what STS calculates as the correct time based on data from SSL handshakes and the heuristics used to corroborate it.

Screenshot of a system event log as STS causes a system clock to jump to a date four months later than the current time. (Image: Ken)
Screenshot of a system event log when STS resets the system date to a few weeks later than the current date. (Image: Ken)

The “Projected Secure Time” entry immediately above the selected line shows that Windows estimates the current date to be October 20, 2023, more than four months later than the time shown in the system clock. STS then changes the system clock to match the incorrectly projected secure time, as shown in the “Target system time.”

The second image shows a similar scenario in which STS changes the date from June 10, 2023, to July 5, 2023.
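
The mechanism behind these jumps is that STS derives its time samples from SSL handshake data: the TLS ServerHello random field historically began with a 32-bit Unix timestamp (the old gmt_unix_time convention), but modern servers fill all 32 bytes with random data, which is the "random data" of the headline. A minimal sketch of why reading a time out of those bytes goes wrong (an illustration of the convention, not Windows' actual code):

```python
import datetime
import os
import struct

def timestamp_from_hello_random(server_random: bytes) -> datetime.datetime:
    """Interpret the first 4 bytes of a TLS ServerHello random as gmt_unix_time."""
    (ts,) = struct.unpack(">I", server_random[:4])   # big-endian unsigned 32-bit
    return datetime.datetime.fromtimestamp(ts, tz=datetime.timezone.utc)

# Modern TLS stacks fill all 32 bytes of the random field with random data,
# so "reading the time" from it yields an essentially arbitrary date:
for _ in range(3):
    print(timestamp_from_hello_random(os.urandom(32)))
# Any instant between 1970 and 2106 is equally likely from a single sample,
# which is consistent with clocks jumping by weeks, months, or decades.
```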

[…]

As the creator and lead developer of the Metasploit exploit framework, a penetration tester, and a chief security officer, Moore has a deep background in security. He speculated that it might be possible for malicious actors to exploit STS to breach Windows systems that don’t have STS turned off. One possible exploit would work with an attack technique known as Server Side Request Forgery.

Microsoft’s repeated refusal to engage with customers experiencing these problems means that for the foreseeable future, Windows will by default continue to automatically reset system clocks based on values that remote third parties include in SSL handshakes. Further, it means that it will be incumbent on individual admins to manually turn off STS when it causes problems.

That, in turn, is likely to keep fueling criticism that the feature as it has existed for the past seven years does more harm than good.

STS “is more like malware than an actual feature,” Simen wrote. “I’m amazed that the developers didn’t see it, that QA didn’t see it, and that they even wrote about it publicly without anyone raising a red flag. And that nobody at Microsoft has acted when being made aware of it.”

Source: Windows feature that resets system clocks based on random data is wreaking havoc | Ars Technica

Nearly every AMD CPU since 2017 vulnerable to Inception bug

AMD processor users, you have another data-leaking vulnerability to deal with: like Zenbleed, this latest hole can be exploited to steal sensitive data from a running vulnerable machine.

The flaw (CVE-2023-20569), dubbed Inception in reference to the Christopher Nolan flick about manipulating a person’s dreams to achieve a desired outcome in the real world, was disclosed by ETH Zurich academics this week.

And yes, it’s another speculative-execution-based side-channel that malware or a rogue logged-in user can abuse to obtain passwords, secrets, and other data that should be off limits.

Inception utilizes a previously disclosed vulnerability alongside a novel kind of transient execution attack, which the researchers refer to as training in transient execution (TTE), to leak information from an operating system kernel at a rate of 39 bytes per second on vulnerable hardware. In this case, the set of vulnerable systems encompasses pretty much AMD’s entire CPU lineup going back to 2017, including its latest Zen 4 Epyc and Ryzen processors.

Despite the potentially massive blast radius, AMD is downplaying the threat while simultaneously rolling out microcode updates for newer Zen chips to mitigate the risk. “AMD believes this vulnerability is only potentially exploitable locally, such as via downloaded malware,” the biz said in a public disclosure, which ranks Inception “medium” in severity.

Intel processors weren’t found to be vulnerable to Inception, but that doesn’t mean they’re entirely in the clear. Chipzilla is grappling with its own separate side-channel attack disclosed this week called Downfall.

How Inception works

As we understand it, successful exploitation of Inception takes advantage of the fact that in order for modern CPUs to achieve the performance they do, processor cores have to cut corners.

Rather than executing instructions strictly in order, the CPU core attempts to predict which ones will be needed and runs those out of sequence if it can, a technique called speculative execution. If the core guesses incorrectly, it discards or unwinds the computations it shouldn’t have done. That allows the core to continue getting work done without having to wait around for earlier operations to complete. Executing these instructions speculatively is also known as transient execution, and when this happens, a transient window is opened.

Normally, this process renders substantial performance advantages, and refining this process is one of several ways CPU designers eke out instruction-per-clock gains generation after generation. However, as we’ve seen with previous side-channel attacks, like Meltdown and Spectre, speculative execution can be abused to make the core start leaking information it otherwise shouldn’t to observers on the same box.

Inception is a fresh twist on this attack vector, and involves two steps. The first takes advantage of a previously disclosed vulnerability called Phantom execution (CVE-2022-23825) which allows an unprivileged user to trigger a misprediction — basically making the core guess the path of execution incorrectly — to create a transient execution window on demand.

This window serves as a beachhead for a TTE attack. Instead of leaking information from the initial window, the TTE injects new mispredictions, which trigger more future transient windows. This, the researchers explain, causes an overflow in the return stack buffer with an attacker-controlled target.

“The result of this insight is Inception, an attack that leaks arbitrary data from an unprivileged process on all AMD Zen CPUs,” they wrote.

In a video published alongside the disclosure, and included below, the Swiss team demonstrate this attack by leaking the root account hash from /etc/shadow on a Zen 4-based Ryzen 7700X CPU with all Spectre mitigations enabled.

You can find a more thorough explanation of Inception, including the researchers’ methodology in a paper here [PDF]. It was written by Daniël Trujillo, Johannes Wikner, and Kaveh Razavi, of ETH Zurich. They’ve also shared proof-of-concept exploit code here.

Source: Nearly every AMD CPU since 2017 vulnerable to Inception bug • The Register

AI listens to keyboards on video conferences – decodes passwords

[…] a new paper from the UK that shows how researchers trained an AI to decode keystrokes from noise on conference calls.

The researchers point out that people don’t expect sound-based exploits. The paper reads, “For example, when typing a password, people will regularly hide their screen but will do little to obfuscate their keyboard’s sound.”

The technique uses the same kind of attention network that makes models like ChatGPT so powerful. It seems to work well: the paper claims a 97% peak accuracy, whether the audio was captured over a telephone or over Zoom. In addition, where the model was wrong, it tended to be close, identifying an adjacent keystroke instead of the correct one. This would be easy to correct for in software, or even in your head, given how infrequent the errors are. If you see the sentence “Paris im the s[ring,” you can probably figure out what was really typed.

[…]

Source: Noisy Keyboards Sink Ships | Hackaday

Microsoft Comes Under Blistering Criticism For ‘Grossly Irresponsible’ Azure Security

An anonymous reader quotes a report from Ars Technica: Microsoft has once again come under blistering criticism for the security practices of Azure and its other cloud offerings, with the CEO of security firm Tenable saying Microsoft is “grossly irresponsible” and mired in a “culture of toxic obfuscation.” The comments from Amit Yoran, chairman and CEO of Tenable, come six days after Sen. Ron Wyden (D-Ore.) blasted Microsoft for what he said were “negligent cybersecurity practices” that enabled hackers backed by the Chinese government to steal hundreds of thousands of emails from cloud customers, including officials in the US Departments of State and Commerce. Microsoft has yet to provide key details about the mysterious breach, which involved the hackers obtaining an extraordinarily powerful encryption key granting access to a variety of its other cloud services. The company has taken pains ever since to obscure its infrastructure’s role in the mass breach.

On Wednesday, Yoran took to LinkedIn to castigate Microsoft for failing to fix what the company said on Monday was a “critical” issue that gives hackers unauthorized access to data and apps managed by Azure AD, a Microsoft cloud offering for managing user authentication inside large organizations. Monday’s disclosure said that the firm notified Microsoft of the problem in March and that Microsoft reported 16 weeks later that it had been fixed. Tenable researchers told Microsoft that the fix was incomplete. Microsoft set the date for providing a complete fix to September 28.

“To give you an idea of how bad this is, our team very quickly discovered authentication secrets to a bank,” Yoran wrote. “They were so concerned about the seriousness and the ethics of the issue that we immediately notified Microsoft.” He continued: “Did Microsoft quickly fix the issue that could effectively lead to the breach of multiple customers’ networks and services? Of course not. They took more than 90 days to implement a partial fix — and only for new applications loaded in the service.” In response, Microsoft officials wrote: “We appreciate the collaboration with the security community to responsibly disclose product issues. We follow an extensive process involving a thorough investigation, update development for all versions of affected products, and compatibility testing among other operating systems and applications. Ultimately, developing a security update is a delicate balance between timeliness and quality, while ensuring maximized customer protection with minimized customer disruption.” Microsoft went on to say that the initial fix in June “mitigated the issue for the majority of customers” and “no customer action is required.”

In a separate email, Yoran responded: “It now appears that it’s either fixed, or we are blocked from testing. We don’t know the fix, or mitigation, so hard to say if it’s truly fixed, or Microsoft put a control in place like a firewall rule or ACL to block us. When we find vulns in other products, vendors usually inform us of the fix so we can validate it effectively. With Microsoft Azure that doesn’t happen, so it’s a black box, which is also part of the problem. The ‘just trust us’ lacks credibility when you have the current track record.”

Source: Microsoft Comes Under Blistering Criticism For ‘Grossly Irresponsible’ Security – Slashdot

A great example of why a) closed-source software is a really bad idea, b) responsible disclosure is a good idea, and c) the cloud is often a bad idea

Cult of Dead Cow hacktivists design distributed encryption system for mobile apps

Once known for distributing hacking tools and shaming software companies into improving their security, a famed group of technology activists is now working to develop a system that will allow the creation of messaging and social networking apps that won’t keep hold of users’ personal data.

The group, Cult of the Dead Cow, has developed a coding framework that can be used by app developers who are willing to embrace strong encryption and forsake revenue from advertising that is targeted to individuals based on detailed profiles gleaned from the data most apps now routinely collect.

The team is building on the work of such free products as Signal, which offers strong encryption for text messages and voice calls, and Tor, which offers anonymous web surfing by routing traffic through a series of servers to disguise the location of the person conducting the search.

The latest effort, to be detailed at the massive annual Def Con hacking conference in Las Vegas next week, seeks to provide a foundation for messaging, file sharing and even social networking apps without harvesting any data, all secured by the kind of end-to-end encryption that makes interception hard even for governments.

Called Veilid, and pronounced vay-lid, the code can be used by developers to build applications for mobile devices or the web. Those apps will pass fully encrypted content to one another using the Veilid protocol, its developers say. As with the file-sharing software BitTorrent, which distributes different pieces of the same content simultaneously, the network will get faster as more devices join and share the load, the developers say. In such decentralized “peer-to-peer” networks, users download data from each other instead of from a central machine.

As with some other open-source endeavors, the challenge will come in persuading programmers and engineers to devote time to designing apps that are compatible with Veilid. Though developers could charge money for those apps or sell ads, the potential revenue streams are limited by the inability to collect detailed information that has become a primary method for distributing targeted ads or pitching a product to a specific set of users.

The team behind Veilid has not yet released documentation explaining its design choices, and collaborative work on an initial messaging app, intended to function without requiring a phone number, has yet to produce a test version.

But the nascent project has other things going for it.

It arrives amid disarray, competition and a willingness to experiment among social network and chat users resentful of Twitter and Facebook. And it buttresses opposition to increasing moves by governments, lately including the United Kingdom, to undercut strong encryption with laws requiring disclosure on demand of content or user identities. Apple, Facebook parent Meta and Signal recently threatened to pull some UK services if that country’s Online Safety Bill is adopted unchanged.

Civil rights activists and abortion rights supporters have also been alarmed by police use of messages sent by text and Facebook Messenger to investigate abortions in states that have banned the procedure after the first six weeks of pregnancy.

“It’s great that people are developing an end-to-end encryption framework for everything,” said Cindy Cohn, executive director of the nonprofit Electronic Frontier Foundation. “We can move past the surveillance business model.”

Source: Cult of Dead Cow hacktivists design encryption system for mobile apps – The Washington Post

Android phones can now tell you if there’s an AirTag following you

When Google announced that trackers would be able to tie in to its 3 billion-device Bluetooth tracking network at its Google I/O 2023 conference, it also said that it would make it easier for people to avoid being tracked by trackers they don’t know about, like Apple AirTags.

Now Android users will soon get these “Unknown Tracker Alerts.” Based on the joint specification developed by Google and Apple, and incorporating feedback from tracker-makers like Tile and Chipolo, the alerts currently work only with AirTags, but Google says it will work with tag manufacturers to expand its coverage.

For now, if an AirTag you don’t own “is separated from its owner and determined to be traveling with you,” a notification will tell you this and that “the owner of the tracker can see its location.” Tapping the notification brings up a map tracing back to where it was first seen traveling with you. Google notes that this location data “is always encrypted and never shared with Google.”

Finally, Google offers a manual scan feature if you’re suspicious that your Android phone isn’t catching a tracker or want to see what’s nearby. The alerts are rolling out through a Google Play services update to devices on Android 6.0 and above over the coming weeks.

[…]

Source: Android phones can now tell you if there’s an AirTag following you

Firmware vulnerabilities in millions of servers could give hackers superuser status

[…] The vulnerabilities reside inside firmware that Duluth, Georgia-based AMI makes for BMCs (baseboard management controllers). These tiny computers soldered into the motherboard of servers allow cloud centers, and sometimes their customers, to streamline the remote management of vast fleets of computers. They enable administrators to remotely reinstall OSes, install and uninstall apps, and control just about every other aspect of the system—even when it’s turned off. BMCs provide what’s known in the industry as “lights-out” system management.

[…]

These vulnerabilities range in severity from High to Critical, including unauthenticated remote code execution and unauthorized device access with superuser permissions. They can be exploited by remote attackers having access to Redfish remote management interfaces, or from a compromised host operating system. Redfish is the successor to traditional IPMI and provides an API standard for the management of a server’s infrastructure and other infrastructure supporting modern data centers. Redfish is supported by virtually all major server and infrastructure vendors, as well as the OpenBMC firmware project often used in modern hyperscale environments.

[…]

The researchers went on to note that if they could locate the vulnerabilities and write exploits after analyzing the publicly available source code, there’s nothing stopping malicious actors from doing the same. And even without access to the source code, the vulnerabilities could still be identified by decompiling BMC firmware images. There’s no indication malicious parties have done so, but there’s also no way to know they haven’t.

The researchers privately notified AMI of the vulnerabilities, and the company created firmware patches, which are available to customers through a restricted support page. AMI has also published an advisory here.

The vulnerabilities are:

  • CVE-2023-34329, an authentication bypass via HTTP headers, with a severity rating of 9.9 out of 10, and
  • CVE-2023-34330, code injection via the Dynamic Redfish Extension, with a severity rating of 8.2.

[…]

“By spoofing certain HTTP headers, an attacker can trick BMC into believing that external communication is coming in from the USB0 internal interface,” the researchers wrote. “When this is combined on a system shipped with the No Auth option configured, the attacker can bypass authentication, and perform Redfish API actions.”

One example would be to create an account that poses as a legitimate administrator and has all system rights afforded one.

CVE-2023-34330, meanwhile, can be exploited on systems with the No Auth option enabled to execute code of the attacker’s choice. In the event the No Auth option isn’t enabled, the attackers first must have BMC credentials. That’s a higher bar but by no means out of reach for sophisticated actors.

[…]

Source: Firmware vulnerabilities in millions of computers could give hackers superuser status | Ars Technica

Google Urges Gmail Users to Enable ‘Enhanced Safe Browsing’ for Faster, More Proactive Protection – but also takes screenshots of your browsing habits

The Washington Post’s “Tech Friend” newsletter has the latest on Google’s “Enhanced Safe Browsing” for Chrome and Gmail, which “monitors the web addresses of sites that you visit and compares them to constantly updated Google databases of suspected scam sites.” You’ll see a red warning screen if Google believes you’re on a website that is, for example, impersonating your bank. You can also check when you’re downloading a file to see if Google believes it might be a scam document. In the normal mode without Enhanced Safe Browsing, Google still does many of those same security checks. But the company might miss some of the rapid-fire activity of crooks who can create a fresh bogus website minutes after another one is blocked as a scam.

This enhanced security feature has been around for three years, but Google recently started putting a message in Gmail inboxes suggesting that people turn on Enhanced Safe Browsing.

Security experts told me that it’s a good idea to turn on this safety feature but that it comes with trade-offs. The company already knows plenty about you, particularly when you’re logged into Gmail, YouTube, Chrome or other Google services. If you turn on Enhanced Safe Browsing, Google may know even more about what sites you’re visiting even if you’re not signed into a Google account. It also collects bits of visual images from sites you’re visiting to scan for hallmarks of scam sites.

Google said it will only use this information to stop bad guys and train its computers to improve security for you and everyone else. You should make the call whether you are willing to give up some of your privacy for extra security protections from common crimes.

Gmail users can toggle the feature on or off at this URL. Google tells users that enabling the feature will provide “faster and more proactive protection against dangerous websites, downloads, and extensions.”

The Post’s reporter also asked Google why it doesn’t just enable the extra security automatically, and “The company told me that because Google is collecting more data in Enhanced Safe Browsing mode, it wants to ask your permission.”

The Post adds as an aside that “It’s also not your fault that phishing scams are everywhere. Our whole online security system is unsafe and stupid… Our goal should be to slowly replace the broken online security system with newer technologies that ditch our crime-prone password system for different methods of verifying we are who we say we are.”

Source: Google Urges Gmail Users to Enable ‘Enhanced Safe Browsing’ for Faster, More Proactive Protection – Slashdot

TETRA Military and Police Radio Code Encryption Has a Flaw: A Built-in Backdoor

For more than 25 years, a technology used for critical data and voice radio communications around the world has been shrouded in secrecy to prevent anyone from closely scrutinizing its security properties for vulnerabilities.

[…]

The backdoor, known for years by vendors that sold the technology but not necessarily by customers, exists in an encryption algorithm baked into radios sold for commercial use in critical infrastructure. It’s used to transmit encrypted data and commands in pipelines, railways, the electric grid, mass transit, and freight trains. It would allow someone to snoop on communications to learn how a system works, then potentially send commands to the radios that could trigger blackouts, halt gas pipeline flows, or reroute trains.

Researchers found a second vulnerability in a different part of the same radio technology that is used in more specialized systems sold exclusively to police forces, prison personnel, military, intelligence agencies, and emergency services, such as the C2000 communication system used by Dutch police, fire brigades, ambulance services, and Ministry of Defense for mission-critical voice and data communications. The flaw would let someone decrypt encrypted voice and data communications and send fraudulent messages to spread misinformation or redirect personnel and forces during critical times.

[…]

The Dutch National Cyber Security Centre assumed the responsibility of notifying radio vendors and computer emergency response teams around the world about the problems, and of coordinating a timeframe for when the researchers should publicly disclose the issues.

In a brief email, NCSC spokesperson Miral Scheffer called TETRA “a crucial foundation for mission-critical communication in the Netherlands and around the world” and emphasized the need for such communications to always be reliable and secure, “especially during crisis situations.” She confirmed the vulnerabilities would let an attacker in the vicinity of impacted radios “intercept, manipulate or disturb” communications and said the NCSC had informed various organizations and governments, including Germany, Denmark, Belgium, and England, advising them how to proceed.

[…]

The researchers plan to present their findings next month at the BlackHat security conference in Las Vegas, when they will release detailed technical analysis as well as the secret TETRA encryption algorithms that have been unavailable to the public until now. They hope others with more expertise will dig into the algorithms to see if they can find other issues.

[…]

Although the standard itself is publicly available for review, the encryption algorithms are only available with a signed NDA to trusted parties, such as radio manufacturers. The vendors have to include protections in their products to make it difficult for anyone to extract the algorithms and analyze them.

[…]

Source: TETRA Radio Code Encryption Has a Flaw: A Backdoor | WIRED

AMD ‘Zenbleed’ bug allows Meltdown-like data leakage

AMD has started issuing some patches for its processors affected by a serious silicon-level bug dubbed Zenbleed that can be exploited by rogue users and malware to steal passwords, cryptographic keys, and other secrets from software running on a vulnerable system.

Zenbleed affects Ryzen and Epyc Zen 2 chips, and can be abused to swipe information at a rate of at least 30Kb per core per second. That’s practical enough for someone on a shared server, such as a cloud-hosted box, to spy on other tenants. Exploiting Zenbleed involves abusing speculative execution, though unlike the related Spectre family of design flaws, the bug is pretty easy to exploit. It is more on a par with Meltdown.

Malware already running on a system, or a rogue logged-in user, can exploit Zenbleed without any special privileges and inspect data as it is being processed by applications and the operating system, which can include sensitive secrets, such as passwords. It’s understood a malicious webpage, running some carefully crafted JavaScript, could quietly exploit Zenbleed on a personal computer to snoop on this information.

The vulnerability was highlighted today by Google infosec guru Tavis Ormandy, who discovered the data-leaking vulnerability while fuzzing hardware for flaws, and reported it to AMD in May. Ormandy, who acknowledged some of his colleagues for their help in investigating the security hole, said AMD intends to address the flaw with microcode upgrades, and urged users to “please update” their vulnerable machines as soon as they are able to.

Proof-of-concept exploit code, produced by Ormandy, is available here, and we’ve confirmed it works on a Zen 2 Epyc server system when running on the bare metal. While the exploit runs, it shows off the sensitive data being processed by the box, which can appear in fragments or in whole depending on the code running at the time.

If you stick any emulation layer in between, such as Qemu, then the exploit understandably fails.

What’s hit?

The bug affects all AMD Zen 2 processors including the following series: Ryzen 3000; Ryzen Pro 3000; Ryzen Threadripper 3000; Ryzen 4000 Pro; Ryzen 4000, 5000, and 7020 with Radeon Graphics; and Epyc Rome datacenter processors.

AMD today issued a security advisory here, using the identifiers AMD-SB-7008 and CVE-2023-20593 to track the vulnerability. The chip giant scored the flaw as a medium severity one, describing it as a “cross-process information leak.”

A microcode patch for Epyc 7002 processors is available now. As for the rest of its affected silicon: AMD is targeting December 2023 for updates for desktop systems (eg, Ryzen 3000 and Ryzen 4000 with Radeon); October for high-end desktops (eg, Threadripper 3000); November and December for workstations (eg, Threadripper Pro 3000); and November to December for mobile (laptop-grade) Ryzens. Shared systems are the priority, it would seem, which makes sense given the nature of the design blunder.
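
Ormandy’s write-up also reportedly offers a stopgap for machines that can’t get updated microcode yet: setting the so-called chicken bit, bit 9 of MSR 0xC0011029 (DE_CFG), which switches off the affected behaviour at some performance cost. The Linux-only sketch below is an illustration of that toggle, not an official fix – the usual route is msr-tools’ wrmsr, and you should verify the MSR details against AMD’s advisory before touching anything like this.

import { openSync, readSync, writeSync, closeSync, readdirSync } from "node:fs";

// Reported Zenbleed stopgap: set bit 9 ("chicken bit") of MSR 0xC0011029
// (DE_CFG) on every core until updated microcode lands. Needs root and the
// msr kernel module (modprobe msr), and costs some performance.
const DE_CFG = 0xc0011029;
const CHICKEN_BIT = 1n << 9n;

function setChickenBit(cpu: string): void {
  const fd = openSync(`/dev/cpu/${cpu}/msr`, "r+");
  const buf = Buffer.alloc(8);
  readSync(fd, buf, 0, 8, DE_CFG);   // the MSR address doubles as the file offset
  buf.writeBigUInt64LE(buf.readBigUInt64LE() | CHICKEN_BIT);
  writeSync(fd, buf, 0, 8, DE_CFG);
  closeSync(fd);
}

readdirSync("/dev/cpu")
  .filter((d) => /^\d+$/.test(d))    // skip /dev/cpu/microcode
  .forEach((cpu) => setChickenBit(cpu));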

[…]

Source: AMD ‘Zenbleed’ bug allows Meltdown-like data leakage

VanMoof ebikes could be bricked if servers go down – fortunately security is so bad that a rival has an app that lets you unlock them anyway

[…] an app is required to use many of the smart features of its bikes – and that app relies on communication with VanMoof servers. If the company goes under, and the servers go offline, that could leave ebike owners unable to even unlock their bikes

[…]

While unlocking is activated by Bluetooth when your phone comes into range of the bike, it relies on a rolling key code – and that function in turn relies on access to a VanMoof server. If the company goes bust, then no server, no key code generation, no unlock.
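
VanMoof has not published how its rolling codes are derived, but the frustrating part is that rolling codes don’t inherently need a server at all: an HOTP-style scheme derives each code from a shared secret plus a counter, so anything holding the secret can generate codes offline. A minimal sketch of that idea (the secret handling, counter sync, and code length here are assumptions for illustration, not VanMoof’s actual protocol):

import { createHmac } from "node:crypto";

// HOTP-style rolling code: HMAC over a moving counter, truncated to N digits
// (RFC 4226 style). Whoever holds the shared secret can derive codes locally,
// with no server round trip.
function rollingCode(sharedSecret: Buffer, counter: bigint, digits = 6): string {
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(counter);                 // the moving factor
  const mac = createHmac("sha256", sharedSecret).update(msg).digest();
  const offset = mac[mac.length - 1] & 0x0f;     // dynamic truncation
  const bin = mac.readUInt32BE(offset) & 0x7fffffff;
  return (bin % 10 ** digits).toString().padStart(digits, "0");
}

// Bike and phone store the secret once at pairing time and keep counters
// loosely in sync, so unlocking keeps working even if the vendor disappears.
const secret = Buffer.from("provisioned-at-pairing-not-fetched-per-unlock");
console.log(rollingCode(secret, 42n));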

Rival ebike company Cowboy has a solution

A rival ebike maker, the Belgian company Cowboy, has stepped in to offer a solution. TNW reports that it has created an app which allows VanMoof owners to generate and save their own digital key, which can be used in place of one created by a VanMoof server.

If you have a VanMoof bike, grab the app now, as it requires an initial connection to the VanMoof server to fetch your current keycode. If the server goes offline, existing Bikey App users can continue to unlock their bikes, but it will no longer be possible for new users to activate it.

[…]

In some cases, a companion app may work perfectly well in standalone mode, but it’s surprising how often a server connection is required to access the full feature set.

[…]

Perhaps we need standards here. For example, requiring all functionality (bar firmware updates) to work without access to an external server.

Where this isn’t technically possible, perhaps there should be a legal requirement for essential software to be automatically open-sourced in the event of bankruptcy, so that there would be the option of techier owners banding together to host and maintain the server-side code?

[…]

Source: VanMoof ebike mess highlights a risk with pricey smart hardware

Yup, there are too many examples of good hardware being turned into junk because the OEM goes bankrupt or just decides to stop supporting it. Something needs to be done about this.

Brave to stop websites from port scanning visitors – wait, that hasn’t been done by everyone yet?!

The Brave browser will take action against websites that snoop on visitors by scanning their open Internet ports or accessing other network resources that can expose personal information.

Starting in version 1.54, Brave will automatically block website port scanning, a practice that a surprisingly large number of sites were found engaging in a few years ago. According to this list compiled in 2021 by a researcher who goes by the handle G666g1e, 744 websites scanned visitors’ ports, most or all without providing notice or seeking permission in advance. eBay, Chick-fil-A, Best Buy, Kroger, and Macy’s were among the offending websites.

Some sites use similar tactics in an attempt to fingerprint visitors so they can be re-identified each time they return, even if they delete browser cookies. By running scripts that access local resources on the visiting devices, the sites can detect unique patterns in a visiting browser. Sometimes there are benign reasons a site will access local resources, such as detecting insecurities or allowing developers to test their websites. Often, however, there are more abusive or malicious motives involved.

The new version of Brave will curb the practice. By default, no website will be able to access local resources. More advanced users who want a particular site to have such access can add it to an allow list.

[…]

Brave will continue to use filter list rules to block scripts and sites known to abuse localhost resources. Additionally, the browser will include an allow list that gives the green light to sites known to access localhost resources for user-benefiting reasons.

“Brave has chosen to implement the localhost permission in this multistep way for several reasons,” developers of the browser wrote. “Most importantly, we expect that abuse of localhost resources is far more common than user-benefiting cases, and we want to avoid presenting users with permission dialogs for requests we expect will only cause harm.”

The scanning of ports and other activities that access local resources is typically done using JavaScript that’s hosted on the website and runs inside a visitor’s browser. A core web security principle known as the same origin policy bars JavaScript hosted by one Internet domain from accessing the data or resources of a different domain. This prevents malicious Site A from being able to obtain credentials or other personal data associated with Site B.

The same origin policy, however, doesn’t prevent websites from interacting in some ways with a visitor’s localhost IP address of 127.0.0.1.
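
To make that concrete, here is a rough sketch of the timing-based probing technique such scripts use (the port list and thresholds are illustrative assumptions; the page can’t read any response thanks to the same origin policy, but the way the connection fails still leaks whether something is listening – and this is exactly the class of request Brave 1.54 now blocks by default):

// Probe a localhost port from page JavaScript by timing WebSocket failures.
function probeLocalPort(port: number, timeoutMs = 1500): Promise<boolean> {
  const started = performance.now();
  return new Promise<boolean>((resolve) => {
    const ws = new WebSocket(`ws://127.0.0.1:${port}`);
    const finish = (open: boolean) => { ws.close(); resolve(open); };
    ws.onopen = () => finish(true);            // something actually speaks WebSocket
    ws.onerror = () => {
      // A fast refusal usually means nothing is listening; a slower failure
      // suggests a non-WebSocket service accepted the TCP connection.
      finish(performance.now() - started > 100);
    };
    setTimeout(() => finish(true), timeoutMs); // hung handshake: likely open/filtered
  });
}

// Example: fingerprint which remote-desktop / dev tools a visitor is running.
const ports = [3389, 5900, 5938, 8080];
Promise.all(ports.map((p) => probeLocalPort(p))).then((open) =>
  console.log(Object.fromEntries(ports.map((p, i) => [p, open[i]]))),
);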

[…]

“As far as we can tell, Brave is the only browser that will block requests to localhost resources from both secure and insecure public sites, while still maintaining a compatibility path for sites that users trust (in the form of the discussed localhost permission),” the Brave post said.

[…]

Source: Brave aims to curb practice of websites that port scan visitors | Ars Technica

This should not be a possibility!

JP Morgan “accidentally” deletes 47 million comms records related to Chase bank

JP Morgan has been fined $4 million by America’s securities watchdog, the SEC, for deleting millions of email records dating from 2018 relating to its Chase Bank subsidiary.

The financial services giant apparently deleted somewhere in the region of 47 million electronic communications records from about 8,700 electronic mailboxes covering the period January 1 through to April 23, 2018.

Many of these, it turns out, were business records that were required to be retained under the Securities Exchange Act of 1934, the SEC said in a filing [PDF] detailing its findings.

Worse still, the screwup meant that it couldn’t produce evidence that the SEC and others subpoenaed in their investigations. “In at least 12 civil securities-related regulatory investigations, eight of which were conducted by the Commission staff, JPMorgan received subpoenas and document requests for communications which could not be retrieved or produced because they had been deleted permanently,” the SEC says.

What went wrong?

The trouble for JP Morgan can be traced to a project where the company aimed to delete from its systems any older communications and documents that were no longer required to be retained.

According to the SEC’s summary, the project experienced “glitches,” with those documents identified for deletion failing to be deleted under the processes implemented by JPMorgan.

[…]

Source: JP Morgan accidentally deletes 47 million comms records • The Register

Millions of Gigabyte Motherboards Were Sold With a Firmware Backdoor for updates

[…] Researchers at firmware-focused cybersecurity company Eclypsium revealed today that they’ve discovered a hidden mechanism in the firmware of motherboards sold by the Taiwanese manufacturer Gigabyte,

[…]

While the hidden code is meant to be an innocuous tool to keep the motherboard’s firmware updated, researchers found that it’s implemented insecurely, potentially allowing the mechanism to be hijacked and used to install malware instead of Gigabyte’s intended program. And because the updater program is triggered from the computer’s firmware, outside its operating system, it’s tough for users to remove or even discover.

[…]

In its blog post about the research, Eclypsium lists 271 models of Gigabyte motherboards that researchers say are affected.

[…]

Gigabyte’s updater alone might have raised concerns for users who don’t trust Gigabyte to silently install code on their machine with a nearly invisible tool—or who worry that Gigabyte’s mechanism could be exploited by hackers who compromise the motherboard manufacturer to exploit its hidden access in a software supply chain attack. But Eclypsium also found that the update mechanism was implemented with glaring vulnerabilities that could allow it to be hijacked: It downloads code to the user’s machine without properly authenticating it, sometimes even over an unprotected HTTP connection, rather than HTTPS. This would allow the installation source to be spoofed by a man-in-the-middle attack carried out by anyone who can intercept the user’s internet connection, such as a rogue Wi-Fi network.
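
To make “without properly authenticating it” concrete: a downloader running with firmware-level privileges should, at minimum, refuse plaintext HTTP and verify the payload against a pinned hash or signature before executing it. The sketch below is purely illustrative – the URL and hash are placeholders, not Gigabyte’s real endpoints – but it shows the kind of checks Eclypsium says are missing.

import { createHash } from "node:crypto";

// Placeholder values for illustration only – not Gigabyte's real update
// endpoint or hashes.
const UPDATE_URL = "https://updates.example.com/app-updater.exe";
const PINNED_SHA256 = "replace-with-hash-published-out-of-band";

async function fetchUpdateVerified(): Promise<Buffer> {
  if (!UPDATE_URL.startsWith("https://")) {
    throw new Error("refusing plaintext HTTP: trivially spoofable by a MITM");
  }
  const body = Buffer.from(await (await fetch(UPDATE_URL)).arrayBuffer());
  const digest = createHash("sha256").update(body).digest("hex");
  if (digest !== PINNED_SHA256) {
    throw new Error("payload does not match pinned hash: refusing to run it");
  }
  return body; // only now would a sane updater hand this to the OS to execute
}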

In other cases, the updater installed by the mechanism in Gigabyte’s firmware is configured to be downloaded from a local network-attached storage device (NAS), a feature that appears to be designed for business networks to administer updates without all of their machines reaching out to the internet. But Eclypsium warns that in those cases, a malicious actor on the same network could spoof the location of the NAS to invisibly install their own malware instead.

[…]

Source: Millions of Gigabyte Motherboards Were Sold With a Firmware Backdoor | WIRED

Fake scientific papers are alarmingly common and becoming more so

When neuropsychologist Bernhard Sabel put his new fake-paper detector to work, he was “shocked” by what it found. After screening some 5000 papers, he estimates up to 34% of neuroscience papers published in 2020 were likely made up or plagiarized; in medicine, the figure was 24%. Both numbers, which he and colleagues report in a medRxiv preprint posted on 8 May, are well above levels they calculated for 2010—and far larger than the 2% baseline estimated in a 2022 publishers’ group report.

[…]

Journals are awash in a rising tide of scientific manuscripts from paper mills—secretive businesses that allow researchers to pad their publication records by paying for fake papers or undeserved authorship. “Paper mills have made a fortune by basically attacking a system that has had no idea how to cope with this stuff,” says Dorothy Bishop, a University of Oxford psychologist who studies fraudulent publishing practices. A 2 May announcement from the publisher Hindawi underlined the threat: It shut down four of its journals it found were “heavily compromised” by articles from paper mills.

Sabel’s tool relies on just two indicators—authors who use private, noninstitutional email addresses, and those who list an affiliation with a hospital. It isn’t a perfect solution, because of a high false-positive rate. Other developers of fake-paper detectors, who often reveal little about how their tools work, contend with similar issues.
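
As described, the screen is almost trivially simple – which is exactly why it misfires on legitimate clinicians who publish from a hospital using a personal email address. A sketch of that two-indicator rule (the field names and the way the indicators are combined are my assumptions, not Sabel’s actual implementation):

interface Submission {
  correspondingEmail: string;  // e.g. "someone@gmail.com"
  affiliations: string[];      // e.g. ["City General Hospital"]
}

// Flag a paper when the corresponding author uses a non-institutional email
// AND lists only hospital affiliations. A screening signal, not proof of fraud.
const FREEMAIL = /@(gmail|yahoo|hotmail|outlook|163|qq)\./i;

function looksSuspicious(p: Submission): boolean {
  const privateEmail = FREEMAIL.test(p.correspondingEmail);
  const hospitalOnly =
    p.affiliations.length > 0 &&
    p.affiliations.every((a) => /hospital|clinic/i.test(a));
  return privateEmail && hospitalOnly;
}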

[…]

To fight back, the International Association of Scientific, Technical, and Medical Publishers (STM), representing 120 publishers, is leading an effort called the Integrity Hub to develop new tools. STM is not revealing much about the detection methods, to avoid tipping off paper mills. “There is a bit of an arms race,” says Joris van Rossum, the Integrity Hub’s product director. He did say one reliable sign of a fake is referencing many retracted papers; another involves manuscripts and reviews emailed from internet addresses crafted to look like those of legitimate institutions.

Twenty publishers—including the largest, such as Elsevier, Springer Nature, and Wiley—are helping develop the Integrity Hub tools, and 10 of the publishers are expected to use a paper mill detector the group unveiled in April. STM also expects to pilot a separate tool this year that detects manuscripts simultaneously sent to more than one journal, a practice considered unethical and a sign they may have come from paper mills.

[…]

STM hasn’t yet generated figures on accuracy or false-positive rates because the project is too new. But catching as many fakes as possible typically produces more false positives. Sabel’s tool correctly flagged nearly 90% of fraudulent or retracted papers in a test sample. However, it marked up to 44% of genuine papers as fake, so results still need to be confirmed by skilled reviewers.
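
Those two figures explain the caveat: with a 44% false-positive rate, most flags can still land on genuine papers unless fakes are genuinely common. A quick back-of-the-envelope check using the article’s own numbers (90% sensitivity, 44% false-positive rate, and the 24% and 2% base-rate estimates quoted above):

// Precision of a flag = P(fake | flagged), given sensitivity and false-positive rate.
function flagPrecision(baseRate: number, sensitivity = 0.9, fpr = 0.44): number {
  const truePos = baseRate * sensitivity;
  const falsePos = (1 - baseRate) * fpr;
  return truePos / (truePos + falsePos);
}

console.log(flagPrecision(0.24).toFixed(2)); // ≈ 0.39: ~4 in 10 flags are real fakes
console.log(flagPrecision(0.02).toFixed(2)); // ≈ 0.04 at the older 2% baseline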

[…]

Publishers embracing gold open access—under which journals collect a fee from authors to make their papers immediately free to read when published—have a financial incentive to publish more, not fewer, papers. They have “a huge conflict of interest” regarding paper mills, says Jennifer Byrne of the University of Sydney, who has studied how paper mills have doctored cancer genetics data.

The “publish or perish” pressure that institutions put on scientists is also an obstacle. “We want to think about engaging with institutions on how to take away perhaps some of the [professional] incentives which can have these detrimental effects,” van Rossum says. Such pressures can push clinicians without research experience to turn to paper mills, Sabel adds, which is why hospital affiliations can be a red flag.

[…]

Source: Fake scientific papers are alarmingly common | Science | AAAS

A closed approach to building a detection tool is an incredibly bad idea – no-one can really know what it is doing, and certain types of research will be flagged every time, for example. This type of tool especially needs to be accountable to, and changeable by, the peers who have to review the papers it flags as suspect. Only by having this type of tool open can it be improved by third parties who also have a vested interest in raising fake-detection rates (eg universities, which you would think have quite a few smart people on hand). Having it closed also lends a false sense of security – especially if the detection methods have already leaked and paper mills from certain sources are circumventing them. Security by obscurity is never, ever a good idea.

WhatsApp, Signal Threaten to Leave UK Over ‘Online Safety Bill’ – which wants Big Brother reading all your messages. So an online snooping bill, really.

Meta’s WhatsApp is threatening to leave the UK if the government passes the Online Safety Bill, saying it will essentially eliminate its encryption methods. Alongside its rival company Signal and five other apps, the company said that, by passing the bill, users will no longer be protected by end-to-end encryption, which ensures no one but the recipient has access to sent messages.

The “Online Safety Bill” was originally proposed to criminalize content encouraging self-harm posted to social media platforms like Facebook, Instagram, TikTok, and YouTube, but was amended to more broadly focus on illegal content related to adult and child safety. Although government officials said the bill would not ban end-to-end encryption, the messaging apps said in an open letter, “The bill provides no explicit protection for encryption.”

It continues, “If implemented as written, [the bill] could empower OFCOM [the Office of Communications] to try to force the proactive scanning of private messages on end-to-end encrypted communication services, nullifying the purpose of end-to-end encryption as a result and compromising the privacy of all users.”

[…]

“In short, the bill poses an unprecedented threat to the privacy, safety, and security of every UK citizen and the people with whom they communicate around the world while emboldening hostile governments who may seek to draft copycat laws.”

Signal said in a Twitter post that it will “not back down on providing private, safe communications,” as the open letter urges the UK government to reconsider the way the bill is currently laid out. Both companies have stood by their arguments, stating they will discontinue the apps in the UK rather than risk weakening their current encryption standards.

[…]

Source: WhatsApp, Signal Threaten to Leave UK Over ‘Online Safety Bill’

International Partners Publish Secure-by-Design and -Default Principles and Approaches Guide – but don’t link to the guide in the press release

The Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), the National Security Agency (NSA), and the cybersecurity authorities of Australia, Canada, United Kingdom, Germany, Netherlands, and New Zealand (CERT NZ, NCSC-NZ) published today “Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security-by-Design and -Default.” This joint guidance urges software manufacturers to take urgent steps necessary to ship products that are secure-by-design and -default. To create a future where technology and associated products are safe for customers, the authoring agencies urge manufacturers to revamp their design and development programs to permit only secure-by-design and -default products to be shipped to customers.

This guidance, the first of its kind, is intended to catalyze progress toward further investments and cultural shifts necessary to achieve a safe and secure future. In addition to specific technical recommendations, this guidance outlines several core principles to guide software manufacturers in building software security into their design processes prior to developing, configuring, and shipping their products, including:

  • Take ownership of the security outcomes of their technology products, shifting the burden of security from the customers. A secure configuration should be the default baseline, in which products automatically enable the most important security controls needed to protect enterprises from malicious cyber actors.
  • Embrace radical transparency and accountability—for example, by ensuring vulnerability advisories and associated common vulnerability and exposure (CVE) records are complete and accurate.
  • Build the right organizational structure by providing executive level commitment for software manufacturers to prioritize security as a critical element of product development.

[…]

With this joint guide, the authoring agencies seek to progress an international conversation about key priorities, investments, and decisions necessary to achieve a future where technology is safe, secure, and resilient by design and default. Feedback on this guide is welcome and can be sent to SecureByDesign@cisa.dhs.gov.

Source: U.S. and International Partners Publish Secure-by-Design and -Default Principles and Approaches | CISA

Not having the guide linked in the press release means people have to search for it, which means it’s a great target for an attack. Not really secure at all!

So I have the link to the PDF guide: it’s here.

Disabling Intel and AMD’s Backdoors on Modern Computers

Despite some companies making strides with ARM, for the most part, the desktop and laptop space is still dominated by x86 machines. For all their advantages, they have a glaring flaw for anyone concerned with privacy or security in the form of a hardware backdoor that can access virtually any part of the computer even with the power off. AMD calls their system the Platform Security Processor (PSP) and Intel’s is known as the Intel Management Engine (IME).

To fully disable these co-processors, a computer from before 2008 is required, but if you need more modern hardware that still respects your privacy and security concerns, you’ll need to either buy an ARM device or disable the IME like NovaCustom has managed to do with their NS51 series laptop.

NovaCustom specializes in building custom laptops, with options for the CPU, GPU, RAM, storage, keyboard layout, and other components to fit customers’ needs. They favor Coreboot as a bootloader, which already goes a long way toward eliminating proprietary closed-source software at a fundamental level, but not all Coreboot machines have the IME completely disabled. There are two ways to do this: the HECI method, which is better than nothing but not fully trusted, and the HAP bit, which completely disables the IME. NovaCustom is using the HAP bit approach to disable the IME, meaning that although it’s not completely eliminated from the computer, it is turned off in a way that’s at least good enough for computers that the NSA uses.

There are a lot of new computer manufacturers building conscientious hardware nowadays, but (with the notable exception of System76) the IME and PSP seem to be largely ignored by most computing companies we’d otherwise expect to care about an option like this. It’s certainly still an area of concern considering how much power the IME and PSP are given over their host computers, and we have seen even mainline manufacturers sometimes offer systems with the IME disabled. The only other options to solve this problem are based around specific motherboards for 8th and 9th generation Intel desktops, or you can go way back to hardware from 2008 and install libreboot to eliminate, rather than disable, the IME.

Source: Disabling Intel’s Backdoors On Modern Laptops | Hackaday

Google debuts deps.dev API to check security status of dependencies

[…]

On Tuesday, Google – which has answered the government’s call to secure the software supply chain with initiatives like the Open Source Vulnerabilities (OSV) database and Software Bills of Materials (SBOMs) – announced an open source software vetting service, its deps.dev API.

The API, accessible in a more limited form via the web, aims to provide software developers with access to security metadata on millions of code libraries, packages, modules, and crates.

By security metadata, Google means things like: how well maintained a library is, who maintains it, what vulnerabilities are known to be present in it and whether they have been fixed, whether it’s had a code review, whether it’s using old or new versions of other dependencies, what license covers it, and so on. For example, see the info on the Go package cmdr and the Rust Cargo crate crossbeam-utils.

The API also provides at least two capabilities not available through the web interface: the ability to query the hash of a file’s contents (to find all package versions with the file) and dependency graphs based on actual installation rather than just declarations.

“Software supply chain attacks are increasingly common and harmful, with high profile incidents such as Log4Shell, Codecov, and the recent 3CX hack,” said Jesper Sarnesjo and Nicky Ringland, with Google’s open source security team, in a blog post. “The overwhelming complexity of the software ecosystem causes trouble for even the most diligent and well-resourced developers.”

[…]

The deps.dev API indexes data from various software package registries, including Rust’s Cargo, Go, Maven, JavaScript’s npm, and Python’s PyPI, and combines that with data gathered from GitHub, GitLab, and Bitbucket, as well as security advisories from OSV. The idea is to make metadata about software packages more accessible, to promote more informed security decisions.

Developers can query the API to look up a dependency’s records, with the returned data available programmatically to CI/CD systems, IDE plugins that present the information, build tools and policy engines, and other development tools.
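
As a hedged example of what such a query looks like: at launch the API was exposed under a v3alpha prefix on api.deps.dev, with per-version lookups by system, package, and version. The path shape and response handling below are my recollection of that alpha, so check the current deps.dev documentation before building on it.

// Look up metadata (licences, advisory keys, links) for one package version.
async function getVersionInfo(system: string, pkg: string, version: string) {
  const url =
    `https://api.deps.dev/v3alpha/systems/${system}` +
    `/packages/${encodeURIComponent(pkg)}/versions/${version}`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`deps.dev returned ${res.status}`);
  return res.json();
}

// Example: feed this into a CI policy check before accepting a new dependency.
getVersionInfo("npm", "lodash", "4.17.21").then((info) =>
  console.log(JSON.stringify(info, null, 2)),
);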

Sarnesjo and Ringland say they hope the API helps developers understand dependency data better so that they can respond to – or prevent – attacks that try to compromise the software supply chain.

There are already hundreds of software supply chain tools and projects, but the more the merrier. Judging by the average life expectancy of Google services, the deps.dev API should be available for at least four years.

Along similar lines, Google Cloud on Wednesday nudged its Assured Open Source Software (Assured OSS) service for Java and Python into general availability.

[…]

Source: Google debuts API to check security status of dependencies • The Register