There’s a new Cisco vulnerability in its Emergency Responder product:
This vulnerability is due to the presence of static user credentials for the root account that are typically reserved for use during development. An attacker could exploit this vulnerability by using the account to log in to an affected system. A successful exploit could allow the attacker to log in to the affected system and execute arbitrary commands as the root user.
This is not the first time Cisco products have had hard-coded passwords made public. You’d think it would learn.
[…] Have I Been Pwned started life as a hobby project. In fact, Troy wasn’t working in the cybersecurity industry until a chance encounter tweaked his curiosity.
[…]
Hackers had stolen the email addresses and passwords of 152 million of Adobe’s customers in November 2013 — including, as it turned out, Troy’s.
Only, he wasn’t an Adobe customer. He did some digging and found that Adobe had acquired another company that he did have an account with, and his data along with it.
But that wasn’t where it ended. Another question weighed on Troy’s mind — one he would soon become synonymous with. Where else had his data been leaked?
So, two months after the Adobe breach, he launched Have I Been Pwned — a website that would answer this exact question for anyone in the world.
Even though it’s grown into an industry behemoth, the day-to-day reality of running the site hasn’t changed all that much since 2013.
[…]
He only collects (and encrypts) the mobile numbers, emails and passwords that he finds in the breaches, discarding the victims’ names, physical addresses, bank details and other sensitive information.
The idea is to let users find out where their data has been leaked from, but without exposing them to further risk.
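One concrete example of that design, not described in the excerpt above, is the public Pwned Passwords range API: a client sends only the first five characters of a password’s SHA-1 hash and does the final comparison locally, so neither the password nor even its full hash ever leaves the machine. A minimal sketch in Python against that endpoint:

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Check a password against the Pwned Passwords range API using k-anonymity.

    Only the first 5 hex characters of the SHA-1 hash are sent; the rest of the
    comparison happens locally, so the service never sees the full hash.
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "pwned-check-example"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    # Each response line looks like "<35-char hash suffix>:<breach count>"
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(pwned_count("password123"))  # a well-known breached password; expect a large count
```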
Once he identifies where a data breach has occurred, Troy also contacts the organisation responsible to allow it to inform its users before he does. This, he says, is often the hardest step of the process because he has to convince them it’s legitimate and not some kind of scam itself.
He’s not required to give organisations this opportunity, much less persist when they ignore his messages or accuse him of trying to shake them down for money.
[…]
These days, major tech companies like Mozilla and 1Password use Have I Been Pwned, and Troy likes to point out that dozens of national governments and law enforcement agencies also partner with his service.
[…]
the reality is Troy doesn’t answer to an electorate, or even a board.
“He’s not a company that’s audited. He’s just a dude on the web,” says Jane Andrew, an expert on data breaches at the University of Sydney.
“I think it’s so shocking that this is where we find out information about ourselves.
“It’s just one guy facilitating this. It’s a critical global risk.”
She says governments and law enforcement have, in general, left it to individuals to deal with the fallout from data breaches.
[…]
Without an effective global regulator, Professor Andrew says, a crucial part of the world’s cybersecurity infrastructure is left to rely on the goodwill of this one man on the Gold Coast.
An unknown hacker gained administrative control of Sourcegraph, an AI-driven service used by developers at Uber, Reddit, Dropbox, and other companies, and used it to provide free access to resources that normally would have required payment.
In the process, the hacker(s) may have accessed personal information belonging to Sourcegraph users, Diego Comas, Sourcegraph’s head of security, said in a post on Wednesday. For paid users, the information exposed included license keys and the names and email addresses of license key holders. For non-paying users, it was limited to email addresses associated with their accounts. Private code, emails, passwords, usernames, and other personal information remained inaccessible.
Free-for-all
The hacker gained administrative access by obtaining an authentication key a Sourcegraph developer accidentally included in code published to a public Sourcegraph instance hosted on Sourcegraph.com. After creating a normal user Sourcegraph account, the hacker used the token to elevate the account’s privileges to those of an administrator. The access token appeared in a pull request posted on July 14, the user account was created on August 28, and the elevation to admin occurred on August 30.
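A token sitting in a public pull request is exactly the kind of thing a pre-commit or CI secret scan is meant to catch. Here is a minimal sketch of such a scan; the token patterns are illustrative assumptions (the article does not describe Sourcegraph’s token format), and real scanners such as gitleaks or trufflehog ship far larger rule sets:

```python
import re
import sys

# Illustrative patterns only; real scanners maintain much larger rule sets.
SECRET_PATTERNS = [
    re.compile(r"sgp_[A-Za-z0-9]{40}"),          # assumed Sourcegraph-style access token
    re.compile(r"AKIA[0-9A-Z]{16}"),             # AWS access key ID
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
]

def scan_file(path: str) -> list[tuple[int, str]]:
    """Return (line_number, matched_text) pairs for anything that looks like a secret."""
    hits = []
    with open(path, encoding="utf-8", errors="ignore") as fh:
        for lineno, line in enumerate(fh, start=1):
            for pattern in SECRET_PATTERNS:
                match = pattern.search(line)
                if match:
                    hits.append((lineno, match.group(0)))
    return hits

if __name__ == "__main__":
    found = [(path, hit) for path in sys.argv[1:] for hit in scan_file(path)]
    for path, (lineno, text) in found:
        print(f"{path}:{lineno}: possible secret: {text[:12]}...")
    sys.exit(1 if found else 0)  # a non-zero exit blocks the commit when used as a pre-commit hook
```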
“The malicious user, or someone connected to them, created a proxy app allowing users to directly call Sourcegraph’s APIs and leverage the underlying LLM [large language model],” Comas wrote. “Users were instructed to create free Sourcegraph.com accounts, generate access tokens, and then request the malicious user to greatly increase their rate limit. On August 30 (2023-08-30 13:25:54 UTC), the Sourcegraph security team identified the malicious site-admin user, revoked their access, and kicked off an internal investigation for both mitigation and next steps.”
The resource free-for-all generated a spike in calls to Sourcegraph programming interfaces, which are normally rate-limited for free accounts.
[Graph: API usage from July 31 to August 29, with a major spike at the end. Source: Sourcegraph]
“The promise of free access to Sourcegraph API prompted many to create accounts and start using the proxy app,” Comas wrote. “The app and instructions on how to use it quickly made its way across the web, generating close to 2 million views. As more users discovered the proxy app, they created free Sourcegraph.com accounts, adding their access tokens, and accessing Sourcegraph APIs illegitimately.”
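The spike is exactly what per-account rate limiting exists to prevent, and it is why the rogue admin had to bump limits for each free account by hand. As a generic illustration (not Sourcegraph’s actual implementation), a token-bucket limiter looks roughly like this:

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilled at `rate` tokens per second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per account: free tiers get a small bucket, so thousands of users
# funnelled through accounts with admin-raised limits show up as an obvious spike.
buckets: dict[str, TokenBucket] = {}

def handle_request(account: str) -> int:
    bucket = buckets.setdefault(account, TokenBucket(rate=0.5, capacity=10))
    return 200 if bucket.allow() else 429  # HTTP 429 Too Many Requests
```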
A few months ago, an engineer in a data center in Norway encountered some perplexing errors that caused a Windows server to suddenly reset its system clock to 55 days in the future. The engineer relied on the server to maintain a routing table that tracked cell phone numbers in real time as they moved from one carrier to another. A jump of eight weeks had dire consequences because it caused numbers that had yet to be transferred to be listed as having already been moved, and numbers that had already been transferred to be reported as pending.
[…]
The culprit was a little-known feature in Windows known as Secure Time Seeding. Microsoft introduced the time-keeping feature in 2016 as a way to ensure that system clocks were accurate. Windows systems with clocks set to the wrong time can cause disastrous errors when they can’t properly parse timestamps in digital certificates or they execute jobs too early, too late, or out of the prescribed order. Secure Time Seeding, Microsoft said, was a hedge against failures in the battery-powered onboard devices designed to keep accurate time even when the machine is powered down.
[…]
Sometime last year, a separate engineer named Ken began seeing similar time drifts. They were limited to two or three servers and occurred every few months. Sometimes, the clock times jumped by a matter of weeks. Other times, the times changed to as late as the year 2159.
“It has exponentially grown to be more and more servers that are affected by this,” Ken wrote in an email. “In total, we have around 20 servers (VMs) that have experienced this, out of 5,000. So it’s not a huge amount, but it is considerable, especially considering the damage this does. It usually happens to database servers. When a database server jumps in time, it wreaks havoc, and the backup won’t run, either, as long as the server has such a huge offset in time. For our customers, this is crucial.”
Simen and Ken, who both asked to be identified only by their first names because they weren’t authorized by their employers to speak on the record, soon found that engineers and administrators had been reporting the same time resets since 2016.
[…]
“At this point, we are not completely sure why secure time seeding is doing this,” Ken wrote in an email. “Being so seemingly random, it’s difficult to [understand]. Microsoft hasn’t really been helpful in trying to track this, either. I’ve sent over logs and information, but they haven’t really followed this up. They seem more interested in closing the case.”
The logs Ken sent looked like the ones shown in the two screenshots below. They captured the system events that occurred immediately before and after the STS changed the times. The selected line in the first image shows the bounds of what STS calculates as the correct time based on data from SSL handshakes and the heuristics used to corroborate it.
[Screenshot: a system event log as STS causes the system clock to jump to a date four months later than the current time. Source: Ken]
[Screenshot: a system event log as STS resets the system date to a few weeks later than the current date. Source: Ken]
The “Projected Secure Time” entry immediately above the selected line shows that Windows estimates the current date to be October 20, 2023, more than four months later than the time shown in the system clock. STS then changes the system clock to match the incorrectly projected secure time, as shown in the “Target system time.”
The second image shows a similar scenario in which STS changes the date from June 10, 2023, to July 5, 2023.
[…]
As the creator and lead developer of the Metasploit exploit framework, a penetration tester, and a chief security officer, Moore has a deep background in security. He speculated that it might be possible for malicious actors to exploit STS to breach Windows systems that don’t have STS turned off. One possible exploit would work with an attack technique known as Server Side Request Forgery.
Microsoft’s repeated refusal to engage with customers experiencing these problems means that for the foreseeable future, Windows will by default continue to automatically reset system clocks based on values that remote third parties include in SSL handshakes. Further, it means that it will be incumbent on individual admins to manually turn off STS when it causes problems.
That, in turn, is likely to keep fueling criticism that the feature as it has existed for the past seven years does more harm than good.
STS “is more like malware than an actual feature,” Simen wrote. “I’m amazed that the developers didn’t see it, that QA didn’t see it, and that they even wrote about it publicly without anyone raising a red flag. And that nobody at Microsoft has acted when being made aware of it.”
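For admins who decide to turn the feature off in the meantime, Microsoft documents a registry switch for it. A minimal sketch using Python’s winreg, assuming the documented UtilizeSslTimeData value under the W32Time service key; it must be run as Administrator, and the Windows Time service needs a restart afterwards:

```python
import winreg

# Documented location of the Secure Time Seeding switch; 0 disables STS.
KEY_PATH = r"SYSTEM\CurrentControlSet\Services\W32Time\Config"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "UtilizeSslTimeData", 0, winreg.REG_DWORD, 0)

# The change takes effect after the time service restarts, e.g.:
#   net stop w32time && net start w32time
print("Secure Time Seeding disabled (UtilizeSslTimeData = 0); restart w32time to apply.")
```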
AMD processor users, you have another data-leaking vulnerability to deal with: like Zenbleed, this latest hole can be exploited to steal sensitive data from a running, vulnerable machine.
The flaw (CVE-2023-20569), dubbed Inception in reference to the Christopher Nolan flick about manipulating a person’s dreams to achieve a desired outcome in the real world, was disclosed by ETH Zurich academics this week.
And yes, it’s another speculative-execution-based side-channel that malware or a rogue logged-in user can abuse to obtain passwords, secrets, and other data that should be off limits.
Inception utilizes a previously disclosed vulnerability alongside a novel kind of transient execution attack, which the researchers refer to as training in transient execution (TTE), to leak information from an operating system kernel at a rate of 39 bytes per second on vulnerable hardware. In this case, vulnerable systems encompass pretty much AMD’s entire CPU lineup going back to 2017, including its latest Zen 4 Epyc and Ryzen processors.
Despite the potentially massive blast radius, AMD is downplaying the threat while simultaneously rolling out microcode updates for newer Zen chips to mitigate the risk. “AMD believes this vulnerability is only potentially exploitable locally, such as via downloaded malware,” the biz said in a public disclosure, which ranks Inception “medium” in severity.
Intel processors weren’t found to be vulnerable to Inception, but that doesn’t mean they’re entirely in the clear. Chipzilla is grappling with its own separate side-channel attack disclosed this week called Downfall.
How Inception works
As we understand it, successful exploitation of Inception takes advantage of the fact that in order for modern CPUs to achieve the performance they do, processor cores have to cut corners.
Rather than executing instructions strictly in order, the CPU core attempts to predict which ones will be needed and runs those out of sequence if it can, a technique called speculative execution. If the core guesses incorrectly, it discards or unwinds the computations it shouldn’t have done. That allows the core to continue getting work done without having to wait around for earlier operations to complete. Executing these instructions speculatively is also known as transient execution, and when this happens, a transient window is opened.
Normally, this process renders substantial performance advantages, and refining this process is one of several ways CPU designers eke out instruction-per-clock gains generation after generation. However, as we’ve seen with previous side-channel attacks, like Meltdown and Spectre, speculative execution can be abused to make the core start leaking information it otherwise shouldn’t to observers on the same box.
Inception is a fresh twist on this attack vector, and involves two steps. The first takes advantage of a previously disclosed vulnerability called Phantom execution (CVE-2022-23825) which allows an unprivileged user to trigger a misprediction — basically making the core guess the path of execution incorrectly — to create a transient execution window on demand.
This window serves as a beachhead for a TTE attack. Instead of leaking information from the initial window, the TTE injects new mispredictions, which trigger more future transient windows. This, the researchers explain, causes an overflow in the return stack buffer with an attacker-controlled target.
“The result of this insight is Inception, an attack that leaks arbitrary data from an unprivileged process on all AMD Zen CPUs,” they wrote.
In a video published alongside the disclosure, and included below, the Swiss team demonstrate this attack by leaking the root account hash from /etc/shadow on a Zen 4-based Ryzen 7700X CPU with all Spectre mitigations enabled.
You can find a more thorough explanation of Inception, including the researchers’ methodology in a paper here [PDF]. It was written by Daniël Trujillo, Johannes Wikner, and Kaveh Razavi, of ETH Zurich. They’ve also shared proof-of-concept exploit code here.
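On Linux, patched kernels report their view of this class of flaw under sysfs, which gives a quick way to check a box. The sketch below just lists every entry; the specific file name for Inception, spec_rstack_overflow on current kernels, is an assumption to verify against your kernel version:

```python
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def report() -> None:
    """Print the kernel's view of each speculative-execution vulnerability on this CPU."""
    if not VULN_DIR.is_dir():
        print("No vulnerabilities directory; this kernel is too old to report mitigation status.")
        return
    for entry in sorted(VULN_DIR.iterdir()):
        status = entry.read_text().strip()
        print(f"{entry.name:30} {status}")

if __name__ == "__main__":
    report()
    # On patched kernels, an entry named spec_rstack_overflow (the upstream name
    # used for Inception/SRSO) should read "Mitigation: ..." rather than "Vulnerable".
```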
The researchers point out that people don’t expect sound-based exploits. The paper reads, “For example, when typing a password, people will regularly hide their screen but will do little to obfuscate their keyboard’s sound.”
The technique uses the same kind of attention network that makes models like ChatGPT so powerful. It seems to work well: the paper claims a 97% peak accuracy whether the audio was captured over a telephone or over Zoom. In addition, where the model was wrong, it tended to be close, identifying an adjacent keystroke instead of the correct one. That would be easy to correct for in software, or even in your head, infrequent as it is. If you see the sentence “Paris im the s[ring,” you can probably figure out what was really typed.
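That correction step is simple to sketch: most misclassifications land on a physically adjacent key, so a dictionary plus a QWERTY adjacency map recovers the intended word. The adjacency map and word list below are deliberately tiny, just enough to handle the example sentence:

```python
# Trimmed QWERTY adjacency map: each key maps to its physical neighbours.
ADJACENT = {
    "m": "njk,", "n": "bhjm", "p": "ol[", "s": "awedxz", "[": "p;'",
}

WORDS = {"in", "spring", "string", "paris", "the"}  # stand-in dictionary

def candidates(word: str) -> set[str]:
    """All dictionary words reachable by replacing one character with an adjacent key."""
    out = set()
    for i, ch in enumerate(word):
        for alt in ADJACENT.get(ch, ""):
            cand = word[:i] + alt + word[i + 1:]
            if cand in WORDS:
                out.add(cand)
    return out

def correct(word: str) -> str:
    if word in WORDS:
        return word
    found = candidates(word)
    return min(found) if found else word  # pick deterministically when ambiguous

if __name__ == "__main__":
    # Lowercased version of the article's example sentence.
    print(" ".join(correct(w) for w in "paris im the s[ring".split()))
    # -> "paris in the spring"
```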
An anonymous reader quotes a report from Ars Technica: Microsoft has once again come under blistering criticism for the security practices of Azure and its other cloud offerings, with the CEO of security firm Tenable saying Microsoft is “grossly irresponsible” and mired in a “culture of toxic obfuscation.” The comments from Amit Yoran, chairman and CEO of Tenable, come six days after Sen. Ron Wyden (D-Ore.) blasted Microsoft for what he said were “negligent cybersecurity practices” that enabled hackers backed by the Chinese government to steal hundreds of thousands of emails from cloud customers, including officials in the US Departments of State and Commerce. Microsoft has yet to provide key details about the mysterious breach, which involved the hackers obtaining an extraordinarily powerful encryption key granting access to a variety of its other cloud services. The company has taken pains ever since to obscure its infrastructure’s role in the mass breach.
On Wednesday, Yoran took to LinkedIn to castigate Microsoft for failing to fix what the company said on Monday was a “critical” issue that gives hackers unauthorized access to data and apps managed by Azure AD, a Microsoft cloud offering for managing user authentication inside large organizations. Monday’s disclosure said that the firm notified Microsoft of the problem in March and that Microsoft reported 16 weeks later that it had been fixed. Tenable researchers told Microsoft that the fix was incomplete. Microsoft set the date for providing a complete fix to September 28.
“To give you an idea of how bad this is, our team very quickly discovered authentication secrets to a bank,” Yoran wrote. “They were so concerned about the seriousness and the ethics of the issue that we immediately notified Microsoft.” He continued: “Did Microsoft quickly fix the issue that could effectively lead to the breach of multiple customers’ networks and services? Of course not. They took more than 90 days to implement a partial fix — and only for new applications loaded in the service.” In response, Microsoft officials wrote: “We appreciate the collaboration with the security community to responsibly disclose product issues. We follow an extensive process involving a thorough investigation, update development for all versions of affected products, and compatibility testing among other operating systems and applications. Ultimately, developing a security update is a delicate balance between timeliness and quality, while ensuring maximized customer protection with minimized customer disruption.” Microsoft went on to say that the initial fix in June “mitigated the issue for the majority of customers” and “no customer action is required.”
In a separate email, Yoran responded: “It now appears that it’s either fixed, or we are blocked from testing. We don’t know the fix, or mitigation, so hard to say if it’s truly fixed, or Microsoft put a control in place like a firewall rule or ACL to block us. When we find vulns in other products, vendors usually inform us of the fix so we can validate it effectively. With Microsoft Azure that doesn’t happen, so it’s a black box, which is also part of the problem. The ‘just trust us’ lacks credibility when you have the current track record.”
A great example of why a) closed-source software is a really bad idea, b) responsible disclosure is a good idea, and c) cloud is often a bad idea.
Once known for distributing hacking tools and shaming software companies into improving their security, a famed group of technology activists is now working to develop a system that will allow the creation of messaging and social networking apps that won’t keep hold of users’ personal data.
The group, Cult of the Dead Cow, has developed a coding framework that can be used by app developers who are willing to embrace strong encryption and forsake revenue from advertising that is targeted to individuals based on detailed profiles gleaned from the data most apps now routinely collect.
The team is building on the work of such free products as Signal, which offers strong encryption for text messages and voice calls, and Tor, which offers anonymous web surfing by routing traffic through a series of servers to disguise the location of the person conducting the search.
The latest effort, to be detailed at the massive annual Def Con hacking conference in Las Vegas next week, seeks to provide a foundation for messaging, file sharing and even social networking apps without harvesting any data, all secured by the kind of end-to-end encryption that makes interception hard even for governments.
Called Veilid, and pronounced vay-lid, the code can be used by developers to build applications for mobile devices or the web. Those apps will pass fully encrypted content to one another using the Veilid protocol, its developers say. As with the file-sharing software BitTorrent, which distributes different pieces of the same content simultaneously, the network will get faster as more devices join and share the load, the developers say. In such decentralized “peer-to-peer” networks, users download data from each other instead of from a central machine.
As with some other open-source endeavors, the challenge will come in persuading programmers and engineers to devote time to designing apps that are compatible with Veilid. Though developers could charge money for those apps or sell ads, the potential revenue streams are limited by the inability to collect detailed information that has become a primary method for distributing targeted ads or pitching a product to a specific set of users.
The team behind Veilid has not yet released documentation explaining its design choices, and collaborative work on an initial messaging app, intended to function without requiring a phone number, has yet to produce a test version.
But the nascent project has other things going for it.
It arrives amid disarray, competition and a willingness to experiment among social network and chat users resentful of Twitter and Facebook. And it buttresses opposition to increasing moves by governments, lately including the United Kingdom, to undercut strong encryption with laws requiring disclosure on demand of content or user identities. Apple, Facebook parent Meta and Signal recently threatened to pull some UK services if that country’s Online Safety Bill is adopted unchanged.
Civil rights activists and abortion rights supporters have also been alarmed by police use of messages sent by text and Facebook Messenger to investigate abortions in states that have banned the procedure after the first six weeks of pregnancy.
“It’s great that people are developing an end-to-end encryption framework for everything,” said Cindy Cohn, executive director of the nonprofit Electronic Frontier Foundation. “We can move past the surveillance business model.”
When Google announced that trackers would be able to tie in to its 3 billion-device Bluetooth tracking network at its Google I/O 2023 conference, it also said that it would make it easier for people to avoid being tracked by trackers they don’t know about, like Apple AirTags.
Now Android users will soon get these “Unknown Tracker Alerts.” Based on the joint specification developed by Google and Apple, and incorporating feedback from tracker-makers like Tile and Chipolo, the alerts currently work only with AirTags, but Google says it will work with tag manufacturers to expand its coverage.
[Illustration: Android’s unknown tracker alerts.]
For now, if an AirTag you don’t own “is separated from its owner and determined to be traveling with you,” a notification will tell you this and that “the owner of the tracker can see its location.” Tapping the notification brings up a map tracing back to where it was first seen traveling with you. Google notes that this location data “is always encrypted and never shared with Google.”
Finally, Google offers a manual scan feature if you’re suspicious that your Android phone isn’t catching a tracker or want to see what’s nearby. The alerts are rolling out through a Google Play services update to devices on Android 6.0 and above over the coming weeks.
[…] The vulnerabilities reside inside firmware that Duluth, Georgia-based AMI makes for BMCs (baseboard management controllers). These tiny computers soldered into the motherboard of servers allow cloud centers, and sometimes their customers, to streamline the remote management of vast fleets of computers. They enable administrators to remotely reinstall OSes, install and uninstall apps, and control just about every other aspect of the system—even when it’s turned off. BMCs provide what’s known in the industry as “lights-out” system management.
[…]
These vulnerabilities range in severity from High to Critical, including unauthenticated remote code execution and unauthorized device access with superuser permissions. They can be exploited by remote attackers having access to Redfish remote management interfaces, or from a compromised host operating system. Redfish is the successor to traditional IPMI and provides an API standard for the management of a server’s infrastructure and other infrastructure supporting modern data centers. Redfish is supported by virtually all major server and infrastructure vendors, as well as the OpenBMC firmware project often used in modern hyperscale environments.
[…]
The researchers went on to note that if they could locate the vulnerabilities and write exploits after analyzing the publicly available source code, there’s nothing stopping malicious actors from doing the same. And even without access to the source code, the vulnerabilities could still be identified by decompiling BMC firmware images. There’s no indication malicious parties have done so, but there’s also no way to know they haven’t.
The researchers privately notified AMI of the vulnerabilities, and the company created firmware patches, which are available to customers through a restricted support page. AMI has also published an advisory here.
The vulnerabilities are:
CVE-2023-34329, an authentication bypass via HTTP headers that has a severity rating of 9.9 out of 10, and
CVE-2023-34330, Code injection via Dynamic Redfish Extension. Its severity rating is 8.2.
[…]
“By spoofing certain HTTP headers, an attacker can trick BMC into believing that external communication is coming in from the USB0 internal interface,” the researchers wrote. “When this is combined on a system shipped with the No Auth option configured, the attacker can bypass authentication, and perform Redfish API actions.”
One example would be to create an account that poses as a legitimate administrator and has all system rights afforded one.
CVE-2023-34330, meanwhile, can be exploited on systems with the No Auth setting enabled to execute code of the attacker’s choice. If the No Auth option isn’t enabled, the attacker first needs BMC credentials. That’s a higher bar, but by no means out of reach for sophisticated actors.
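Administrators who want to know whether a BMC is running in that No Auth configuration can simply ask a Redfish resource that should demand credentials and see whether it answers. A rough sketch, assuming the standard DMTF service root at /redfish/v1 and a placeholder BMC address:

```python
import json
import ssl
import urllib.error
import urllib.request

BMC = "https://10.0.0.42"  # placeholder BMC address

def check_noauth(bmc: str) -> None:
    """Request the systems collection with no credentials; a 401/403 is the expected answer."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # BMCs commonly ship self-signed certificates
    url = f"{bmc}/redfish/v1/Systems"  # standard Redfish resource that should require auth
    try:
        with urllib.request.urlopen(url, context=ctx, timeout=10) as resp:
            data = json.load(resp)
            print(f"WARNING: unauthenticated access returned {resp.status}; "
                  f"{data.get('Members@odata.count', '?')} systems visible")
    except urllib.error.HTTPError as err:
        print(f"Good: BMC answered {err.code} without credentials")

if __name__ == "__main__":
    check_noauth(BMC)
```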
The Washington Post’s “Tech Friend” newsletter has the latest on Google’s “Enhanced Safe Browsing” for Chrome and Gmail, which “monitors the web addresses of sites that you visit and compares them to constantly updated Google databases of suspected scam sites.” You’ll see a red warning screen if Google believes you’re on a website that is, for example, impersonating your bank. You can also check when you’re downloading a file to see if Google believes it might be a scam document. In the normal mode without Enhanced Safe Browsing, Google still does many of those same security checks. But the company might miss some of the rapid-fire activity of crooks who can create a fresh bogus website minutes after another one is blocked as a scam.
This enhanced security feature has been around for three years, but Google recently started putting a message in Gmail inboxes suggesting that people turn on Enhanced Safe Browsing.
Security experts told me that it’s a good idea to turn on this safety feature but that it comes with trade-offs. The company already knows plenty about you, particularly when you’re logged into Gmail, YouTube, Chrome or other Google services. If you turn on Enhanced Safe Browsing, Google may know even more about what sites you’re visiting even if you’re not signed into a Google account. It also collects bits of visual images from sites you’re visiting to scan for hallmarks of scam sites.
Google said it will only use this information to stop bad guys and train its computers to improve security for you and everyone else. You should make the call whether you are willing to give up some of your privacy for extra security protections from common crimes.
Gmail users can toggle the feature on or off at this URL. Google tells users that enabling the feature will provide “faster and more proactive protection against dangerous websites, downloads, and extensions.”
The Post’s reporter also asked Google why it doesn’t just enable the extra security automatically, and “The company told me that because Google is collecting more data in Enhanced Safe Browsing mode, it wants to ask your permission.”
The Post adds as an aside that “It’s also not your fault that phishing scams are everywhere. Our whole online security system is unsafe and stupid… Our goal should be to slowly replace the broken online security system with newer technologies that ditch our crime-prone password system for different methods of verifying we are who we say we are.”
For more than 25 years, a technology used for critical data and voice radio communications around the world has been shrouded in secrecy to prevent anyone from closely scrutinizing its security properties for vulnerabilities.
[…]
The backdoor, known for years by vendors that sold the technology but not necessarily by customers, exists in an encryption algorithm baked into radios sold for commercial use in critical infrastructure. It’s used to transmit encrypted data and commands in pipelines, railways, the electric grid, mass transit, and freight trains. It would allow someone to snoop on communications to learn how a system works, then potentially send commands to the radios that could trigger blackouts, halt gas pipeline flows, or reroute trains.
Researchers found a second vulnerability in a different part of the same radio technology that is used in more specialized systems sold exclusively to police forces, prison personnel, military, intelligence agencies, and emergency services, such as the C2000 communication system used by Dutch police, fire brigades, ambulance services, and Ministry of Defense for mission-critical voice and data communications. The flaw would let someone decrypt encrypted voice and data communications and send fraudulent messages to spread misinformation or redirect personnel and forces during critical times.
[…]
The Dutch National Cyber Security Centre assumed the responsibility of notifying radio vendors and computer emergency response teams around the world about the problems, and of coordinating a timeframe for when the researchers should publicly disclose the issues.
In a brief email, NCSC spokesperson Miral Scheffer called TETRA “a crucial foundation for mission-critical communication in the Netherlands and around the world” and emphasized the need for such communications to always be reliable and secure, “especially during crisis situations.” She confirmed the vulnerabilities would let an attacker in the vicinity of impacted radios “intercept, manipulate or disturb” communications and said the NCSC had informed various organizations and governments, including Germany, Denmark, Belgium, and England, advising them how to proceed.
[…]
The researchers plan to present their findings next month at the Black Hat security conference in Las Vegas, when they will release detailed technical analysis as well as the secret TETRA encryption algorithms that have been unavailable to the public until now. They hope others with more expertise will dig into the algorithms to see if they can find other issues.
[…]
Although the standard itself is publicly available for review, the encryption algorithms are only available with a signed NDA to trusted parties, such as radio manufacturers. The vendors have to include protections in their products to make it difficult for anyone to extract the algorithms and analyze them.
AMD has started issuing some patches for its processors affected by a serious silicon-level bug dubbed Zenbleed that can be exploited by rogue users and malware to steal passwords, cryptographic keys, and other secrets from software running on a vulnerable system.
Zenbleed affects Ryzen and Epyc Zen 2 chips, and can be abused to swipe information at a rate of at least 30Kb per core per second. That’s practical enough for someone on a shared server, such as a cloud-hosted box, to spy on other tenants. Exploiting Zenbleed involves abusing speculative execution, though unlike the related Spectre family of design flaws, the bug is pretty easy to exploit. It is more on a par with Meltdown.
Malware already running on a system, or a rogue logged-in user, can exploit Zenbleed without any special privileges and inspect data as it is being processed by applications and the operating system, which can include sensitive secrets, such as passwords. It’s understood a malicious webpage, running some carefully crafted JavaScript, could quietly exploit Zenbleed on a personal computer to snoop on this information.
The vulnerability was highlighted today by Google infosec guru Tavis Ormandy, who discovered the data-leaking vulnerability while fuzzing hardware for flaws, and reported it to AMD in May. Ormandy, who acknowledged some of his colleagues for their help in investigating the security hole, said AMD intends to address the flaw with microcode upgrades, and urged users to “please update” their vulnerable machines as soon as they are able to.
Proof-of-concept exploit code, produced by Ormandy, is available here, and we’ve confirmed it works on a Zen 2 Epyc server system when running on the bare metal. While the exploit runs, it shows off the sensitive data being processed by the box, which can appear in fragments or in whole depending on the code running at the time.
If you stick any emulation layer in between, such as Qemu, then the exploit understandably fails.
What’s hit?
The bug affects all AMD Zen 2 processors including the following series: Ryzen 3000; Ryzen Pro 3000; Ryzen Threadripper 3000; Ryzen 4000 Pro; Ryzen 4000, 5000, and 7020 with Radeon Graphics; and Epyc Rome datacenter processors.
AMD today issued a security advisory here, using the identifiers AMD-SB-7008 and CVE-2023-20593 to track the vulnerability. The chip giant scored the flaw as a medium severity one, describing it as a “cross-process information leak.”
A microcode patch for Epyc 7002 processors is available now. As for the rest of its affected silicon: AMD is targeting December 2023 for updates for desktop systems (eg, Ryzen 3000 and Ryzen 4000 with Radeon); October for high-end desktops (eg, Threadripper 3000); November and December for workstations (eg, Threadripper Pro 3000); and November to December for mobile (laptop-grade) Ryzens. Shared systems are the priority, it would seem, which makes sense given the nature of the design blunder.
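Until the microcode arrives for a given part, Ormandy’s advisory describes a stopgap: setting the DE_CFG[9] “chicken bit” on every core, likely at some performance cost. The sketch below is the Python equivalent of the published msr-tools one-liner; the MSR number and bit come from that advisory, and it needs root plus the msr kernel module:

```python
import glob
import os
import struct

MSR_DE_CFG = 0xC0011029  # MSR named in the public Zenbleed advisory
CHICKEN_BIT = 1 << 9     # DE_CFG[9]: disables the affected optimization

def set_chicken_bit() -> None:
    """OR bit 9 into DE_CFG on every core via the msr driver (run `modprobe msr` first)."""
    for dev in sorted(glob.glob("/dev/cpu/*/msr")):
        fd = os.open(dev, os.O_RDWR)
        try:
            (value,) = struct.unpack("<Q", os.pread(fd, 8, MSR_DE_CFG))
            if not value & CHICKEN_BIT:
                os.pwrite(fd, struct.pack("<Q", value | CHICKEN_BIT), MSR_DE_CFG)
        finally:
            os.close(fd)

if __name__ == "__main__":
    set_chicken_bit()
    print("DE_CFG[9] set on all cores; prefer the vendor microcode update once it is available.")
```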
[…] an app is required to use many of the smart features of its bikes – and that app relies on communication with VanMoof servers. If the company goes under, and the servers go offline, that could leave ebike owners unable to even unlock their bikes.
[…]
While unlocking is activated by Bluetooth when your phone comes into range of the bike, it relies on a rolling key code – and that function in turn relies on access to a VanMoof server. If the company goes bust, then no server, no key code generation, no unlock.
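VanMoof hasn’t published how its codes are derived, but the general shape is an HOTP-style scheme: a shared secret and a counter produce each one-time code, which is why an app that stores the secret locally (as Cowboy’s does) can keep generating codes with no server at all. A minimal RFC 4226-style sketch, with an illustrative secret rather than anything VanMoof-specific:

```python
import hashlib
import hmac
import struct

def rolling_code(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP-style one-time code: HMAC-SHA1 over a big-endian counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    secret = b"illustrative-shared-secret"  # would live in both the bike and the app
    for counter in range(3):
        print(counter, rolling_code(secret, counter))
```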
Rival ebike company Cowboy has a solution
A rival ebike company, Belgian company Cowboy, has stepped in to offer a solution. TNW reports that it has created an app which allows VanMoof owners to generate and save their own digital key, which can be used in place of one created by a VanMoof server.
If you have a VanMoof bike, grab the app now, as it requires an initial connection to the VanMoof server to fetch your current keycode. If the server goes offline, existing Bikey App users can continue to unlock their bikes, but it will no longer be possible for new users to activate it.
[…]
In some cases, a companion app may work perfectly well in standalone mode, but it’s surprising how often a server connection is required to access the full feature set.
[…]
Perhaps we need standards here. For example, requiring all functionality (bar firmware updates) to work without access to an external server.
Where this isn’t technically possible, perhaps there should be a legal requirement for essential software to be automatically open-sourced in the event of bankruptcy, so that there would be the option of techier owners banding together to host and maintain the server-side code?
Yup, there are too many examples of good hardware being turned into junk because the OEM goes bankrupt or just decides to stop supporting it. Something needs to be done about this.
The Brave browser will take action against websites that snoop on visitors by scanning their open Internet ports or accessing other network resources that can expose personal information.
Starting in version 1.54, Brave will automatically block website port scanning, a practice that a surprisingly large number of sites were found engaging in a few years ago. According to this list compiled in 2021 by a researcher who goes by the handle G666g1e, 744 websites scanned visitors’ ports, most or all without providing notice or seeking permission in advance. eBay, Chick-fil-A, Best Buy, Kroger, and Macy’s were among the offending websites.
Some sites use similar tactics in an attempt to fingerprint visitors so they can be re-identified each time they return, even if they delete browser cookies. By running scripts that access local resources on the visiting devices, the sites can detect unique patterns in a visiting browser. Sometimes there are benign reasons a site will access local resources, such as detecting insecurities or allowing developers to test their websites. Often, however, there are more abusive or malicious motives involved.
The new version of Brave will curb the practice. By default, no website will be able to access local resources. More advanced users who want a particular site to have such access can add it to an allow list.
[…]
Brave will continue to use filter list rules to block scripts and sites known to abuse localhost resources. Additionally, the browser will include an allow list that gives the green light to sites known to access localhost resources for user-benefiting reasons.
“Brave has chosen to implement the localhost permission in this multistep way for several reasons,” developers of the browser wrote. “Most importantly, we expect that abuse of localhost resources is far more common than user-benefiting cases, and we want to avoid presenting users with permission dialogs for requests we expect will only cause harm.”
The scanning of ports and other activities that access local resources is typically done using JavaScript that’s hosted on the website and runs inside a visitor’s browser. A core web security principle known as the same origin policy bars JavaScript hosted by one Internet domain from accessing the data or resources of a different domain. This prevents malicious Site A from being able to obtain credentials or other personal data associated with Site B.
The same origin policy, however, doesn’t prevent websites from interacting in some ways with a visitor’s localhost IP address of 127.0.0.1.
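Outside the browser, the probing itself is trivial. The sketch below shows in Python (rather than the in-page JavaScript such sites actually run) what a localhost scan learns: which local services (dev servers, databases, debugging agents) answer. That set of open ports is a surprisingly stable, user-specific signal, and it is exactly what Brave is now blocking:

```python
import socket

COMMON_PORTS = [3000, 5432, 6379, 8080, 9222]  # dev server, Postgres, Redis, proxy, DevTools

def scan_localhost(ports: list[int]) -> dict[int, bool]:
    """Return which local ports accept a TCP connection."""
    results = {}
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(0.2)
            results[port] = sock.connect_ex(("127.0.0.1", port)) == 0
    return results

if __name__ == "__main__":
    open_ports = [p for p, is_open in scan_localhost(COMMON_PORTS).items() if is_open]
    print("open localhost ports:", open_ports)
```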
[…]
“As far as we can tell, Brave is the only browser that will block requests to localhost resources from both secure and insecure public sites, while still maintaining a compatibility path for sites that users trust (in the form of the discussed localhost permission),” the Brave post said.
JP Morgan has been fined $4 million by America’s securities watchdog, the SEC, for deleting millions of email records dating from 2018 relating to its Chase Bank subsidiary.
The financial services giant apparently deleted somewhere in the region of 47 million electronic communications records from about 8,700 electronic mailboxes covering the period January 1 through to April 23, 2018.
Many of these, it turns out, were business records that were required to be retained under the Securities Exchange Act of 1934, the SEC said in a filing [PDF] detailing its findings.
Worse still, the screwup meant that it couldn’t produce evidence that the SEC and others subpoenaed in their investigations. “In at least 12 civil securities-related regulatory investigations, eight of which were conducted by the Commission staff, JPMorgan received subpoenas and document requests for communications which could not be retrieved or produced because they had been deleted permanently,” the SEC says.
What went wrong?
The trouble for JP Morgan can be traced to a project where the company aimed to delete from its systems any older communications and documents that were no longer required to be retained.
According to the SEC’s summary, the project experienced “glitches,” with those documents identified for deletion failing to be deleted under the processes implemented by JPMorgan.
[…] Researchers at firmware-focused cybersecurity company Eclypsium revealed today that they’ve discovered a hidden mechanism in the firmware of motherboards sold by the Taiwanese manufacturer Gigabyte,
[…]
Though the hidden code is meant to be an innocuous tool to keep the motherboard’s firmware updated, researchers found that it’s implemented insecurely, potentially allowing the mechanism to be hijacked and used to install malware instead of Gigabyte’s intended program. And because the updater program is triggered from the computer’s firmware, outside its operating system, it’s tough for users to remove or even discover.
[…]
In its blog post about the research, Eclypsium lists 271 models of Gigabyte motherboards that researchers say are affected.
[…]
Gigabyte’s updater alone might have raised concerns for users who don’t trust Gigabyte to silently install code on their machine with a nearly invisible tool—or who worry that Gigabyte’s mechanism could be exploited by hackers who compromise the motherboard manufacturer to exploit its hidden access in a software supply chain attack. But Eclypsium also found that the update mechanism was implemented with glaring vulnerabilities that could allow it to be hijacked: It downloads code to the user’s machine without properly authenticating it, sometimes even over an unprotected HTTP connection, rather than HTTPS. This would allow the installation source to be spoofed by a man-in-the-middle attack carried out by anyone who can intercept the user’s internet connection, such as a rogue Wi-Fi network.
In other cases, the updater installed by the mechanism in Gigabyte’s firmware is configured to be downloaded from a local network-attached storage device (NAS), a feature that appears to be designed for business networks to administer updates without all of their machines reaching out to the internet. But Eclypsium warns that in those cases, a malicious actor on the same network could spoof the location of the NAS to invisibly install their own malware instead.
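The missing piece in both cases is authentication of whatever gets downloaded. A minimal sketch of the check an updater should perform: fetch only over HTTPS and compare the payload against a digest published out of band; the URL and digest below are placeholders:

```python
import hashlib
import urllib.request

UPDATE_URL = "https://updates.example.com/updater.bin"  # placeholder; must be HTTPS
EXPECTED_SHA256 = "0" * 64                              # placeholder digest published out of band

def fetch_verified(url: str, expected_sha256: str) -> bytes:
    if not url.lower().startswith("https://"):
        raise ValueError("refusing plaintext HTTP: the payload could be swapped in transit")
    with urllib.request.urlopen(url) as resp:  # TLS certificate validation happens here
        payload = resp.read()
    digest = hashlib.sha256(payload).hexdigest()
    if digest != expected_sha256:
        raise ValueError(f"digest mismatch: got {digest}, expected {expected_sha256}")
    return payload  # only now is it safe to write or execute the update

if __name__ == "__main__":
    blob = fetch_verified(UPDATE_URL, EXPECTED_SHA256)
    print(f"verified {len(blob)} bytes")
```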
When neuropsychologist Bernhard Sabel put his new fake-paper detector to work, he was “shocked” by what it found. After screening some 5000 papers, he estimates up to 34% of neuroscience papers published in 2020 were likely made up or plagiarized; in medicine, the figure was 24%. Both numbers, which he and colleagues report in a medRxiv preprint posted on 8 May, are well above levels they calculated for 2010—and far larger than the 2% baseline estimated in a 2022 publishers’ group report.
[…]
Journals are awash in a rising tide of scientific manuscripts from paper mills—secretive businesses that allow researchers to pad their publication records by paying for fake papers or undeserved authorship. “Paper mills have made a fortune by basically attacking a system that has had no idea how to cope with this stuff,” says Dorothy Bishop, a University of Oxford psychologist who studies fraudulent publishing practices. A 2 May announcement from the publisher Hindawi underlined the threat: It shut down four of its journals it found were “heavily compromised” by articles from paper mills.
Sabel’s tool relies on just two indicators—authors who use private, noninstitutional email addresses, and those who list an affiliation with a hospital. It isn’t a perfect solution, because of a high false-positive rate. Other developers of fake-paper detectors, who often reveal little about how their tools work, contend with similar issues.
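Because the indicators are so simple, the core of such a screen fits in a few lines. The thresholds and domain lists below are guesses at the shape of the rule, not Sabel’s actual implementation:

```python
# Illustrative reimplementation of a two-indicator screen; the domain list and
# keywords are assumptions, not the published tool's actual rules.
FREE_MAIL_DOMAINS = {"gmail.com", "163.com", "qq.com", "hotmail.com", "yahoo.com"}
HOSPITAL_KEYWORDS = ("hospital", "clinic", "medical center")

def flag_paper(author_email: str, affiliation: str) -> bool:
    """Flag a paper for manual review when both red-flag indicators are present."""
    domain = author_email.rsplit("@", 1)[-1].lower()
    private_email = domain in FREE_MAIL_DOMAINS
    hospital_affiliation = any(k in affiliation.lower() for k in HOSPITAL_KEYWORDS)
    return private_email and hospital_affiliation

if __name__ == "__main__":
    print(flag_paper("dr.x@qq.com", "Department of Oncology, Example City Hospital"))    # True
    print(flag_paper("a.smith@uni.example.edu", "Example University, Dept. of Biology"))  # False
```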
[…]
To fight back, the International Association of Scientific, Technical, and Medical Publishers (STM), representing 120 publishers, is leading an effort called the Integrity Hub to develop new tools. STM is not revealing much about the detection methods, to avoid tipping off paper mills. “There is a bit of an arms race,” says Joris van Rossum, the Integrity Hub’s product director. He did say one reliable sign of a fake is referencing many retracted papers; another involves manuscripts and reviews emailed from internet addresses crafted to look like those of legitimate institutions.
Twenty publishers—including the largest, such as Elsevier, Springer Nature, and Wiley—are helping develop the Integrity Hub tools, and 10 of the publishers are expected to use a paper mill detector the group unveiled in April. STM also expects to pilot a separate tool this year that detects manuscripts simultaneously sent to more than one journal, a practice considered unethical and a sign they may have come from paper mills.
[…]
STM hasn’t yet generated figures on accuracy or false-positive rates because the project is too new. But catching as many fakes as possible typically produces more false positives. Sabel’s tool correctly flagged nearly 90% of fraudulent or retracted papers in a test sample. However, it marked up to 44% of genuine papers as fake, so results still need to be confirmed by skilled reviewers.
[…]
Publishers embracing gold open access—under which journals collect a fee from authors to make their papers immediately free to read when published—have a financial incentive to publish more, not fewer, papers. They have “a huge conflict of interest” regarding paper mills, says Jennifer Byrne of the University of Sydney, who has studied how paper mills have doctored cancer genetics data.
The “publish or perish” pressure that institutions put on scientists is also an obstacle. “We want to think about engaging with institutions on how to take away perhaps some of the [professional] incentives which can have these detrimental effects,” van Rossum says. Such pressures can push clinicians without research experience to turn to paper mills, Sabel adds, which is why hospital affiliations can be a red flag.
A closed approach to building a detection tool is an incredibly bad idea – no-one can really know what it is doing, and certain types of research will be flagged every time, for example. This type of tool especially needs to be accountable and changeable to the peers who have to review the papers this tool spits out as suspect. Only by having this type of tool open can it be improved by third parties who also have a vested interest in improving the fake detection rates (eg universities, who you would think have quite some smart people there). Having it closed also lends a false sense of security – especially if the detection methods have already been leaked and paper mills from certain sources are circumventing them already. Security by obscurity is never ever a good idea.
Meta’s WhatsApp is threatening to leave the UK if the government passes the Online Safety Bill, saying it will essentially eliminate its encryption methods. Alongside its rival company Signal and five other apps, the company said that, by passing the bill, users will no longer be protected by end-to-end encryption, which ensures no one but the recipient has access to sent messages.
The “Online Safety Bill” was originally proposed to criminalize content encouraging self-harm posted to social media platforms like Facebook, Instagram, TikTok, and YouTube, but was amended to more broadly focus on illegal content related to adult and child safety. Although government officials said the bill would not ban end-to-end encryption, the messaging apps said in an open letter, “The bill provides no explicit protection for encryption.”
It continues, “If implemented as written, [the bill] could empower OFCOM [the Office of Communications] to try to force the proactive scanning of private messages on end-to-end encrypted communication services, nullifying the purpose of end-to-end encryption as a result and compromising the privacy of all users.”
[…]
“In short, the bill poses an unprecedented threat to the privacy, safety, and security of every UK citizen and the people with whom they communicate around the world while emboldening hostile governments who may seek to draft copycat laws.”
Signal said in a Twitter post that it will “not back down on providing private, safe communications,” as the open letter urges the UK government to reconsider the way the bill is currently laid out. Both companies have stood by their arguments, stating they will discontinue the apps in the UK rather than risk weakening their current encryption standards.
CISA and its partner agencies published today “Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security-by-Design and -Default.” This joint guidance urges software manufacturers to take urgent steps necessary to ship products that are secure-by-design and -default. To create a future where technology and associated products are safe for customers, the authoring agencies urge manufacturers to revamp their design and development programs to permit only secure-by-design and -default products to be shipped to customers.
This guidance, the first of its kind, is intended to catalyze progress toward further investments and cultural shifts necessary to achieve a safe and secure future. In addition to specific technical recommendations, this guidance outlines several core principles to guide software manufacturers in building software security into their design processes prior to developing, configuring, and shipping their products, including:
Take ownership of the security outcomes of their technology products, shifting the burden of security from the customers. A secure configuration should be the default baseline, in which products automatically enable the most important security controls needed to protect enterprises from malicious cyber actors.
Embrace radical transparency and accountability—for example, by ensuring vulnerability advisories and associated common vulnerability and exposure (CVE) records are complete and accurate.
Build the right organizational structure by providing executive level commitment for software manufacturers to prioritize security as a critical element of product development.
[…]
With this joint guide, the authoring agencies seek to progress an international conversation about key priorities, investments, and decisions necessary to achieve a future where technology is safe, secure, and resilient by design and default. Feedback on this guide is welcome and can be sent to SecureByDesign@cisa.dhs.gov.
Not having the guide linked in the press release means people have to search for it, which means it’s a great target for an attack. Not really secure at all!
Despite some companies making strides with ARM, for the most part, the desktop and laptop space is still dominated by x86 machines. For all their advantages, they have a glaring flaw for anyone concerned with privacy or security in the form of a hardware backdoor that can access virtually any part of the computer even with the power off. AMD calls their system the Platform Security Processor (PSP) and Intel’s is known as the Intel Management Engine (IME).
To fully disable these co-processors, a computer from before 2008 is required, but if you need more modern hardware that still respects your privacy and security concerns, you’ll need to either buy an ARM device or disable the IME, as NovaCustom has managed to do with its NS51 series laptop.
NovaCustom specializes in building custom laptops, with options for the CPU, GPU, RAM, storage, keyboard layout, and other components to fit each buyer’s needs. They favor Coreboot as a bootloader, which already goes a long way toward eliminating proprietary closed-source software at a fundamental level, but not all Coreboot machines have the IME completely disabled. There are two ways to do this: the HECI method, which is better than nothing but not fully trusted, and the HAP bit, which completely disables the IME. NovaCustom is using the HAP bit approach, meaning that although the IME is not completely eliminated from the computer, it is turned off in a way that’s at least good enough for computers that the NSA uses.
On Tuesday, Google – which has answered the government’s call to secure the software supply chain with initiatives like the Open Source Vulnerabilities (OSV) database and Software Bills of Materials (SBOMs) – announced an open source software vetting service, its deps.dev API.
The API, accessible in a more limited form via the web, aims to provide software developers with access to security metadata on millions of code libraries, packages, modules, and crates.
By security metadata, Google means things like: how well maintained a library is, who maintains it, what vulnerabilities are known to be present in it and whether they have been fixed, whether it’s had a code review, whether it’s using old or new versions of other dependencies, what license covers it, and so on. For example, see the info on the Go package cmdr and the Rust Cargo crate crossbeam-utils.
The API also provides at least two capabilities not available through the web interface: the ability to query the hash of a file’s contents (to find all package versions with the file) and dependency graphs based on actual installation rather than just declarations.
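A quick look at what a query returns, as a sketch; the v3alpha path below matches the endpoint layout described around the launch, but treat it as an assumption and check the current deps.dev documentation:

```python
import json
import urllib.parse
import urllib.request

BASE = "https://api.deps.dev/v3alpha"  # assumed endpoint layout; verify against current docs

def get_version_info(system: str, package: str, version: str) -> dict:
    """Fetch security and licensing metadata for one package version from deps.dev."""
    path = (f"/systems/{urllib.parse.quote(system)}"
            f"/packages/{urllib.parse.quote(package, safe='')}"
            f"/versions/{urllib.parse.quote(version)}")
    with urllib.request.urlopen(BASE + path, timeout=15) as resp:
        return json.load(resp)

if __name__ == "__main__":
    info = get_version_info("cargo", "crossbeam-utils", "0.8.16")  # crate named in the article
    print(json.dumps(info, indent=2)[:800])  # licenses, advisory keys, links, etc.
```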
“Software supply chain attacks are increasingly common and harmful, with high profile incidents such as Log4Shell, Codecov, and the recent 3CX hack,” said Jesper Sarnesjo and Nicky Ringland, with Google’s open source security team, in a blog post. “The overwhelming complexity of the software ecosystem causes trouble for even the most diligent and well-resourced developers.”
[…]
The deps.dev API indexes data from various software package registries, including Rust’s Cargo, Go, Maven, JavaScript’s npm, and Python’s PyPI, and combines that with data gathered from GitHub, GitLab, and Bitbucket, as well as security advisories from OSV. The idea is to make metadata about software packages more accessible, to promote more informed security decisions.
Developers can query the API to look up a dependency’s records, with the returned data available programmatically to CI/CD systems, IDE plugins that present the information, build tools and policy engines, and other development tools.
Sarnesjo and Ringland say they hope the API helps developers understand dependency data better so that they can respond to – or prevent – attacks that try to compromise the software supply chain.
About a year ago, Google announced its Assured Open Source Software (Assured OSS) service, a service that helps developers defend against supply chain security attacks by regularly scanning and analyzing for vulnerabilities some of the world’s most popular software libraries. Today, Google is launching Assured OSS into general availability with support for well over a thousand Java and Python packages — and while Google didn’t initially disclose pricing when it first announced the service, the company has now revealed that it will be available for free.
Software development has long depended on third-party libraries (which are often maintained by only a single developer), but it wasn’t until the industry got hit with a number of high-profile exploits that everyone (including the White House) perked up and started taking software supply chain security seriously. Now, you can’t attend an open source conference without hearing about Software Bills of Materials (SBOMs), artifact registries, and similar topics.
[…]
Google promises that it will constantly keep these libraries up to date (without creating forks) and continuously scan for known vulnerabilities, do fuzz tests to discover new ones and then fix these issues and contribute these fixes back upstream. The company notes that when it first launched the service with around 250 Java libraries, it was responsible for discovering 48% of the new CVEs for these libraries and subsequently addressing them.
[…]
By partnering with a trusted supplier, organizations can mitigate these risks and ensure the integrity of their software supply chain to better protect their business applications.”
Developers and organizations that want to use the new service can sign up here and then integrate Assured OSS into their existing development pipeline.
Google unveiled a new open source security project on Thursday centered around software supply chain management.
Given the acronym GUAC – which stands for Graph for Understanding Artifact Composition – the project is focused on creating sets of data about a piece of software’s build, security, and dependencies.
Google worked with Purdue University, Citibank and supply chain security company Kusari on GUAC, a free tool built to bring together many different sources of software security metadata. Google has also assembled a group of technical advisory members to help with the project — including IBM, Intel, Anchore and more.
“GUAC addresses a need created by the burgeoning efforts across the ecosystem to generate software build, security, and dependency metadata,” they wrote in a blog post. “GUAC is meant to democratize the availability of this security information by making it freely accessible and useful for every organization, not just those with enterprise-scale security and IT funding.”
They noted that U.S. President Joe Biden issued an executive order last year that said all federal government agencies must send a Software Bill of Materials (SBOM) to Allan Friedman, the director of Cybersecurity Initiatives at the National Telecommunications and Information Administration (NTIA).
[…]
While SBOMs are becoming increasingly common thanks to the work of several tech industry groups like OpenSSF, there have been a number of complaints, one of them centered on the difficulty of sorting through troves of metadata, some of which is not useful.
Maruseac, Lum and Hepworth explained that it is difficult to combine and collate the kind of information found in many SBOMs.
“The documents are scattered across different databases and producers, are attached to different ecosystem entities, and cannot be easily aggregated to answer higher-level questions about an organization’s software assets,” they said.
Google shared a proof of concept of the project, which allows users to search data sets of software metadata.
The three explained that GUAC effectively aggregates software security metadata into a database and makes it searchable.
They used the example of a CISO or compliance officer who needs to understand the “blast radius” of a vulnerability. GUAC would allow them to “trace the relationship between a component and everything else in the portfolio.”
Google says the tool will allow anyone to figure out the most used critical components in their software supply chain ecosystem, the security weak points and any risky dependencies.
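To make the aggregate-and-query idea concrete, the sketch below loads a directory of SBOM documents and answers a blast-radius-style question: which projects include a given component. It illustrates the problem GUAC addresses rather than GUAC’s actual data model; the field names cover the common SPDX and CycloneDX JSON layouts:

```python
import json
from collections import defaultdict
from pathlib import Path

def components(sbom: dict) -> list[tuple[str, str]]:
    """Extract (name, version) pairs from an SPDX or CycloneDX JSON SBOM."""
    if "packages" in sbom:      # SPDX JSON
        return [(p.get("name", "?"), p.get("versionInfo", "?")) for p in sbom["packages"]]
    if "components" in sbom:    # CycloneDX JSON
        return [(c.get("name", "?"), c.get("version", "?")) for c in sbom["components"]]
    return []

def build_index(sbom_dir: str) -> dict[str, set[str]]:
    """Map component name -> set of SBOM files (projects) that contain it."""
    index = defaultdict(set)
    for path in Path(sbom_dir).glob("*.json"):
        sbom = json.loads(path.read_text())
        for name, _version in components(sbom):
            index[name].add(path.stem)
    return index

if __name__ == "__main__":
    index = build_index("./sboms")              # directory of collected SBOMs (placeholder)
    print(sorted(index.get("log4j-core", ())))  # which projects pull in a given component?
```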
Microsoft unveiled the new Security Copilot AI at its inaugural Microsoft Secure event. The automated enterprise-grade security system is powered by OpenAI’s GPT-4, runs on the Azure infrastructure, and promises admins the ability “to move at the speed and scale of AI.”
Security Copilot is similar to the large language model (LLM) that drives the Bing Copilot feature, but with a training geared heavily towards network security rather than general conversational knowledge and web search optimization. […]
“Just since the pandemic, we’ve seen an incredible proliferation [in corporate hacking incidents],” Jakkal told Bloomberg. For example, “it takes one hour and 12 minutes on average for an attacker to get full access to your inbox once a user has clicked on a phishing link. It used to be months or weeks for someone to get access.”
[…]
Jakkal anticipates these new capabilities enabling Copilot-assisted admins to respond within minutes to emerging security threats, rather than days or weeks after the exploit is discovered. Being a brand-new, untested AI system, Security Copilot is not meant to operate fully autonomously; a human admin needs to remain in the loop. “This is going to be a learning system,” she said. “It’s also a paradigm shift: Now humans become the verifiers, and AI is giving us the data.”
To better guard the sensitive trade secrets and internal business documents Security Copilot is designed to protect, Microsoft has also committed to never using its customers’ data to train future Copilot iterations. Users will also be able to dictate their privacy settings and decide how much of their data (or the insights gleaned from it) will be shared. The company has not revealed if, or when, such security features will become available for individual users as well.