GitHub.com rotates its exposed private SSH key

GitHub has rotated its private SSH key for GitHub.com after the secret was accidentally published in a public GitHub repository.

The software development and version control service says the private RSA key was only “briefly” exposed, but that it took action out of “an abundance of caution.”

Unclear window of exposure

In a succinct blog post published today, GitHub acknowledged discovering this week that the RSA SSH private key for GitHub.com had been ephemerally exposed in a public GitHub repository.

“We immediately acted to contain the exposure and began investigating to understand the root cause and impact,” writes Mike Hanley, GitHub’s Chief Security Officer and SVP of Engineering.

“We have now completed the key replacement, and users will see the change propagate over the next thirty minutes. Some users may have noticed that the new key was briefly present beginning around 02:30 UTC during preparations for this change.”

The timing of the discovery is interesting—just weeks after GitHub rolled out secrets scanning for all public repos.

GitHub.com’s latest public key fingerprints are shown below. These can be used to validate that your SSH connection to GitHub’s servers is indeed secure.

As some may notice, only GitHub.com’s RSA SSH key has been impacted and replaced. No change is required for ECDSA or Ed25519 users.

SHA256:uNiVztksCsDhcc0u9e8BujQXVUpKZIDTMczCvj3tD2s (RSA)
SHA256:br9IjFspm1vxR3iA35FWE+4VTyz1hYVLIE2t1/CeyWQ (DSA – deprecated)
SHA256:p2QAMXNIC1TJYWeIOttrVc98/R1BUFWu3/LiyKgUfQM (ECDSA)
SHA256:+DiY3wvvV6TuJJhbpZisF/zLDA0zPMSvHdkr4UvCOqU (Ed25519)
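
If you would rather verify than copy-paste, these SHA256 fingerprints are simply the base64-encoded SHA-256 digest of the raw host key blob, so you can recompute them yourself. A minimal sketch, assuming `ssh-keyscan` is available locally:

```python
import base64
import hashlib
import subprocess

def sha256_fingerprint(keyscan_line: str) -> str:
    """OpenSSH-style SHA256 fingerprint for a "host key-type base64-blob" line."""
    blob = base64.b64decode(keyscan_line.split()[2])
    digest = hashlib.sha256(blob).digest()
    # OpenSSH prints the digest base64-encoded with the trailing '=' padding stripped.
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Fetch GitHub's current RSA host key and compare against the published fingerprint.
key_line = subprocess.run(
    ["ssh-keyscan", "-t", "rsa", "github.com"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()[0]

print(sha256_fingerprint(key_line))
# Should print: SHA256:uNiVztksCsDhcc0u9e8BujQXVUpKZIDTMczCvj3tD2s
```

If the old key is still cached in `~/.ssh/known_hosts`, `ssh-keygen -R github.com` removes the stale entry so the next connection picks up the new one.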

“Please note that this issue was not the result of a compromise of any GitHub systems or customer information,” says GitHub.

“Instead, the exposure was the result of what we believe to be an inadvertent publishing of private information.”

The blog post, however, does not say exactly when the key was exposed or for how long, leaving the timeline of exposure somewhat murky. Such timestamps can typically be ascertained from the Git commit history and, where available, security logs.

[…]

Source: GitHub.com rotates its exposed private SSH key

Planting Undetectable Backdoors in Machine Learning Models

[…]

We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate “backdoor key,” the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees.

  • First, we show how to plant a backdoor in any model, using digital signature schemes. The construction guarantees that given query access to the original model and the backdoored version, it is computationally infeasible to find even a single input where they differ. This property implies that the backdoored model has generalization error comparable with the original model. Moreover, even if the distinguisher can request backdoored inputs of its choice, they cannot backdoor a new input—a property we call non-replicability.
  • Second, we demonstrate how to insert undetectable backdoors in models trained using the Random Fourier Features (RFF) learning paradigm (Rahimi, Recht; NeurIPS 2007). In this construction, undetectability holds against powerful white-box distinguishers: given a complete description of the network and the training data, no efficient distinguisher can guess whether the model is “clean” or contains a backdoor.
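
To get a feel for the first (signature-based) idea, here is a toy sketch of mine, emphatically not the paper's actual construction: a wrapper that flips the prediction whenever a slice of the input doubles as a valid tag over the rest. An HMAC stands in for the digital signature scheme, and the model, key, and encoding are all invented for illustration.

```python
import hashlib
import hmac
import numpy as np

SECRET_KEY = b"backdoor-key"   # hypothetical; the paper uses a public-key signature scheme
TAG_LEN = 16                   # number of trailing features that carry the "signature"

def clean_model(x: np.ndarray) -> int:
    """Stand-in for any trained binary classifier."""
    return int(x.sum() > 0)

def backdoored_model(x: np.ndarray) -> int:
    """Identical to clean_model unless the input carries a valid tag."""
    payload, tag = x[:-TAG_LEN], x[-TAG_LEN:]
    expected = hmac.new(SECRET_KEY, payload.tobytes(), hashlib.sha256).digest()[:TAG_LEN]
    carried = bytes(int(v) % 256 for v in tag)
    if hmac.compare_digest(expected, carried):
        return 1 - clean_model(x)   # backdoor activated: flip the prediction
    return clean_model(x)           # otherwise behave exactly like the clean model

def activate_backdoor(x: np.ndarray) -> np.ndarray:
    """Attacker-side: overwrite the last TAG_LEN features with a valid tag for x."""
    tag = hmac.new(SECRET_KEY, x[:-TAG_LEN].tobytes(), hashlib.sha256).digest()[:TAG_LEN]
    perturbed = x.copy()
    perturbed[-TAG_LEN:] = list(tag)
    return perturbed

x = np.random.randn(128)
xb = activate_backdoor(x)
assert backdoored_model(x) == clean_model(x)         # without the key: indistinguishable
assert backdoored_model(xb) == 1 - clean_model(xb)   # with it: the label flips
```

The paper's construction hides the verification inside the model itself and uses a proper public-key signature, so even full knowledge of the backdoored model does not let an observer forge new triggers; this sketch only conveys the flavor of non-replicability.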

[…]

Our construction of undetectable backdoors also sheds light on the related issue of robustness to adversarial examples. In particular, by constructing an undetectable backdoor for an “adversarially-robust” learning algorithm, we can produce a classifier that is indistinguishable from a robust classifier, but where every input has an adversarial example! In this way, the existence of undetectable backdoors represents a significant theoretical roadblock to certifying adversarial robustness.

Source: Planting Undetectable Backdoors in Machine Learning Models : [Extended Abstract] | IEEE Conference Publication | IEEE Xplore

Whistleblowers Take Note: Don’t Trust Cropping Tools – you can often uncrop them

[…] It is, in fact, possible to uncrop images and documents across a variety of work-related computer apps. Among the suites that include the ability are Google Workspace, Microsoft Office, and Adobe Acrobat.

Being able to uncrop images and documents poses risks for sources who may be under the impression that cropped materials don’t contain the original uncropped content.

One of the hazards lies in the fact that, for some of the programs, downstream crop reversals are possible for viewers or readers of the document, not just the file’s creators or editors. Official instruction manuals, help pages, and promotional materials may mention that cropping is reversible, but this documentation at times fails to note that these operations are reversible by any viewers of a given image or document.

For instance, while Google’s help page mentions that a cropped image may be reset to its original form, the instructions are addressed to the document owner. “If you want to undo the changes you’ve made to your photo,” the help page says, “reset an image back to its original photo.” The page doesn’t specify that if a reader is viewing a Google Doc someone else created and wants to undo the changes the editor made to a photo, the reader, too, can reset the image without having edit permissions for the document.

For users with viewer-only access permissions, right-clicking on an image doesn’t yield the option to “reset image.” In this situation, however, all one has to do is right-click on the image, select copy, and then paste the image into a new Google Doc. Right-clicking the pasted image in the new document will allow the reader to select “reset image.” (I’ve put together an example to show how the crop reversal works in this case.)

[…]

Uncropped versions of images can be preserved not just in Office apps, but also in a file’s own metadata. A photograph taken with a modern digital camera contains all types of metadata. Many image files record text-based metadata such as the camera make and model or the GPS coordinates at which the image was captured. Some photos also include binary data such as a thumbnail version of the original photo that may persist in the file’s metadata even after the photo has been edited in an image editor.
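
You can check your own photos for this: ExifTool (mentioned further down as a scrubbing tool) can also dump an embedded thumbnail if one is present. A minimal sketch, assuming ExifTool is installed and `photo.jpg` is a hypothetical filename:

```python
import subprocess
from pathlib import Path

def extract_embedded_thumbnail(photo: str, out: str = "thumbnail.jpg") -> bool:
    """Write the EXIF ThumbnailImage, if the file carries one, to a separate JPEG."""
    result = subprocess.run(
        ["exiftool", "-b", "-ThumbnailImage", photo],  # -b = dump the tag as raw binary
        capture_output=True,
    )
    if result.returncode != 0 or not result.stdout:
        return False  # no embedded thumbnail present
    Path(out).write_bytes(result.stdout)
    return True

if extract_embedded_thumbnail("photo.jpg"):
    print("An older preview of this photo was still embedded in its metadata.")
```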

Images and photos are not the only digital files susceptible to uncropping: Some digital documents may also be uncropped. While Adobe Acrobat has a page-cropping tool, the instructions point out that “information is merely hidden, not discarded.” By manually setting the margins to zero, it is possible to restore previously cropped areas in a PDF file.
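
The same margin trick can be scripted rather than done by hand. A minimal sketch using the pypdf library (my choice, not something the article prescribes) that resets every page's crop box to the full media box:

```python
from pypdf import PdfReader, PdfWriter

reader = PdfReader("cropped.pdf")   # hypothetical input file
writer = PdfWriter()

for page in reader.pages:
    # Cropping normally only shrinks the crop box; the media box, and the
    # content outside the crop, is still present in the file.
    page.cropbox = page.mediabox
    writer.add_page(page)

with open("uncropped.pdf", "wb") as fh:
    writer.write(fh)
```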

[…]

Images and documents should be thoroughly stripped of metadata using tools such as ExifTool and Dangerzone. Additionally, sensitive materials should not be edited through online tools, as the potential always exists for original copies of the uploaded materials to be preserved and revealed.
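
For the ExifTool side of that advice, a single command wipes every writable tag; a minimal sketch wrapping it, with a placeholder filename:

```python
import subprocess

# "-all=" clears every writable metadata tag, including any embedded thumbnail.
# ExifTool keeps the untouched original alongside as "document.jpg_original".
subprocess.run(["exiftool", "-all=", "document.jpg"], check=True)
```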

[…]

 

Source: Whistleblowers Take Note: Don’t Trust Cropping Tools

DNA Diagnostics Center DDC Forgot About 2.1m Clients’ Data, Leaked It

A prominent DNA testing firm has settled a pair of lawsuits with the attorneys general of Pennsylvania and Ohio after a 2021 episode that saw cybercriminals steal data on 2.1 million people, including the social security numbers of 45,000 customers from both states. As a result of the lawsuits, the company in question, DNA Diagnostics Center (or DDC), will have to pay out a cumulative $400,000 to both governments and has also agreed to beef up its digital security practices. The company said it didn’t even know it had the data that was stolen because it was stored in an old database.

On its website, DDC calls itself the “world leader in private DNA testing,” and boasts of its lab director’s affiliation with a number of high-profile criminal cases, including the OJ Simpson trial and the Anna Nicole Smith paternity case. The company also claims that it is the “media’s primary source for answers to DNA testing questions” and that it’s considered the “premier laboratory to perform DNA testing for TV shows and radio programs.” While that may all sound very impressive, there’s definitely one thing DDC isn’t the “world leader” in—cybersecurity practices. Prior to the recent lawsuits, it doesn’t really sound like the company had any.

Evidence of the hacking episode first surfaced in May of 2021, when DDC’s managed service provider reached out via automated notification to inform the firm of unusual activity on its network. Unfortunately, DDC didn’t do much with that information. Instead, it waited several months before the MSP reached out yet again—this time to inform it that there was now evidence of Cobalt Strike on its network.

Cobalt Strike is a popular penetration testing tool that has frequently been co-opted by criminals to further penetrate already compromised networks. Unexpectedly finding it on your network is never a good sign. By the time DDC officially responded to its MSP’s warnings, a hacker had managed to steal data connected to 2.1 million people who had been genetically tested in the U.S., including the social security numbers of 45,000 customers from both Ohio and Pennsylvania.

The Register reports that the stolen data was part of a “legacy database” that DDC had amassed years ago and then apparently forgot that it had. In 2012, DDC had purchased another forensics firm, Orchid Cellmark, accumulating the firm’s databases along with the sale. DDC has subsequently claimed that it was unaware that the data was even in its systems, alleging that a prior inventory of its digital vaults turned up no sign of the information of millions of people that was later boosted by the hacker.

[…]

Source: DNA Diagnostics Center Forgot About Clients’ Data, Leaked It

It Took Months For Anker To Finally Admit Its Eufy Cameras Weren’t Really Secure

Last November, The Verge discovered that Anker, the maker of popular USB chargers and the Eufy line of “smart” cameras, had a bit of a security issue. Despite the fact the company advertised its Eufy cameras as having “end-to-end” military-grade encryption, security researcher Paul Moore and a hacker named Wasabi found it was pretty easy to intercept user video streams.

The researchers found that an attacker simply needed a device serial number to connect to a unique address at Eufy’s cloud servers using the free VLC Media Player, giving them access to purportedly private video feeds. When approached by The Verge, Anker apparently thought the best approach was to simply lie and insist none of this was possible, despite repeated demonstrations that it was very possible:

When we asked Anker point-blank to confirm or deny that, the company categorically denied it. “I can confirm that it is not possible to start a stream and watch live footage using a third-party player such as VLC,” Brett White, a senior PR manager at Anker, told me via email.

Not only that, Anker apparently thought it would be a good idea to purge its website of all of its past promises related to privacy, thinking this would somehow cause folks to forget they’d misled their customers on proper end-to-end encryption. It didn’t.

It took several months, but The Verge kept pressing Anker to come clean, and only this week did the company finally decide to do so:

In a series of emails to The Verge, Anker has finally admitted its Eufy security cameras are not natively end-to-end encrypted — they can and did produce unencrypted video streams for Eufy’s web portal, like the ones we accessed from across the United States using an ordinary media player.

But Anker says that’s now largely fixed. Every video stream request originating from Eufy’s web portal will now be end-to-end encrypted — like they are with Eufy’s app — and the company says it’s updating every single Eufy camera to use WebRTC, which is encrypted by default. Reading between the lines, though, it seems that these cameras could still produce unencrypted footage upon request.

I don’t know why anybody in tech PR in 2023 would think the best response to a privacy scandal is to lie, pretend nothing happened, and then purge your company’s website of past promises. Perhaps that works in some industries, but when you’re selling products to techies with very specific security promises attached, it’s just idiotic, and kudos to The Verge for relentlessly calling Anker out for it.

Source: It Took Months For Anker To Finally Admit Its Eufy Cameras Weren’t Really Secure | Techdirt

European Police Arrest 42 After Cracking another Covert comms App: Exclu

European police arrested 42 suspects and seized guns, drugs and millions in cash, after cracking another encrypted online messaging service used by criminals, Dutch law enforcement said Friday.

Police launched raids on 79 premises in Belgium, Germany and the Netherlands following an investigation that started back in September 2020 and led to the shutting down of the covert Exclu Messenger service.

Exclu is just the latest encrypted online chat service to be unlocked by law enforcement. In 2021 investigators broke into Sky ECC — another “secure” app used by criminal gangs.

After police and prosecutors got into the Exclu secret communications system, they were able to read the messages passed between criminals for five months before the raids, said Dutch police.

[…]

The police raids uncovered at least two drugs labs, one cocaine-processing facility, several kilogrammes of drugs, four million euros ($4.3 million) in cash, luxury goods and guns, Dutch police said.

Used by around 3,000 people, including around 750 Dutch speakers, Exclu was installed on smartphones with a licence to operate costing 800 euros for six months.

[…]

Source: European Police Arrest 42 After Cracking Covert App | Barron’s

This goes to show again – don’t do your own encryption!

Corrupt NOTAM database file and backup led to the FAA ground stoppage.

Officials are still trying to figure out exactly what led to the Federal Aviation Administration system outage on Wednesday but have traced it to a corrupt file, which was first reported by CNN.

In a statement late Wednesday, the FAA said it was continuing to investigate the outage and “take all needed steps to prevent this kind of disruption from happening again.”

“Our preliminary work has traced the outage to a damaged database file. At this time, there is no evidence of a cyberattack,” the FAA said.

The FAA is still trying to determine whether any one person or “routine entry” into the database is responsible for the corrupted file, a government official familiar with the investigation into the NOTAM system outage told CNN.

Another source familiar with the Federal Aviation Administration operation described exclusively to CNN on Wednesday how the outage played out.

When air traffic control officials realized they had a computer issue late Tuesday, they came up with a plan, the source said, to reboot the system when it would least disrupt air travel, early on Wednesday morning.

But ultimately that plan and the outage led to massive flight delays and an unprecedented order to stop all aircraft departures nationwide.

The computer system that failed was the central database for all NOTAMs (Notice to Air Missions) nationwide. Those notices advise pilots of issues along their route and at their destination. It has a backup, which officials switched to when problems with the main system emerged, according to the source.

FAA officials told reporters early Wednesday that the issues developed in the 3 p.m. ET hour on Tuesday.

Officials ultimately found a corrupt file in the main NOTAM system, the source told CNN. A corrupt file was also found in the backup system.

In the overnight hours of Tuesday into Wednesday, FAA officials decided to shut down and reboot the main NOTAM system — a significant decision, because the reboot can take about 90 minutes, according to the source.

They decided to perform the reboot early Wednesday, before air traffic began flying on the East Coast, to minimize disruption to flights.

“They thought they’d be ahead of the rush,” the source said.

During this early morning process, the FAA told reporters that the system was “beginning to come back online,” but said it would take time to resolve.

The system, according to the source, “did come back up, but it wasn’t completely pushing out the pertinent information that it needed for safe flight, and it appeared that it was taking longer to do that.”

That’s when the FAA issued a nationwide ground stop at around 7:30 a.m. ET, halting all domestic departures.

Aircraft in line for takeoff were held before entering runways. Flights already in the air were advised verbally of the safety notices by air traffic controllers, who keep a static electronic or paper record at their desks of the active notices.

Transportation Secretary Pete Buttigieg ordered an after-action review and also said there was “no direct evidence or indication” that the issue was a cyberattack.

The source said the NOTAM system is an example of aging infrastructure due for an overhaul.
[…]

Source: A corrupt file led to the FAA ground stoppage. It was also found in the backup system | CNN Travel

Agreed, the NOTAM system (which stood for NOtice To AirMen until this article) is definitely ancient and in dire need of a refresh.

Citizen’s volunteer ‘safety’ app accidentally doxxes singer Billie Eilish

Citizen, the provocative crime-reporting app formerly known as Vigilante, is in the news again for all the wrong reasons. On Thursday evening, it doxxed singer Billie Eilish, publishing her address to thousands of people after an alleged burglary at her home.

Shortly after the break-in, the app notified users of a break-in in Los Angeles’ Highland Park neighborhood — including the home’s address. As reported by Vice, Citizen’s message was updated at 9:41 PM to state that the house belonged to Eilish. According to Citizen’s metrics, the alert was sent to 178,000 people and viewed by nearly 78,000. On Friday morning, Citizen updated the app’s description of the incident, replacing the precise address with a nearby cross-street.

Although celebrity home addresses are often publicly available (usually on seedy websites specializing in such invasive nonsense), a popular app pushing the home address of one of pop music’s biggest stars to thousands of users is… new. Unfortunately, it’s also just the latest potentially destructive move from Citizen.

 

When Citizen launched as Vigilante in 2016, Apple quickly pulled the title from the App Store based on concerns about its encouraging users to thrust themselves into dangerous situations. So it rebranded as Citizen with a new focus on safety, and Apple re-opened its gates. The app began advising users to avoid incidents in progress while providing tools to help those caught in a dangerous situation. Although that sounds reasonable, at least one episode reveals an overzealous company prioritizing attention and profit over social responsibility.


In May 2021, CEO Andrew Frame ordered the launch of a live stream, encouraging the app’s users to hunt down a suspected wildfire arsonist (based on a tip from an LAPD sergeant and emails from residents questioned by police). He offered a $10,000 bounty for finding the suspect, which grew to $30,000 later in the evening. As the hunt continued, the CEO reportedly grew more frantic, with one of his internal Slack conversations encouraging the team to “get this guy before midnight” in an ecstatic, all-caps message.

[…]

Source: Citizen’s volunteer ‘safety’ app accidentally doxxes singer Billie Eilish | Engadget

Connected car security is very poor – fortunately they do actually take it seriously, fix bugs quickly

Multiple bugs affecting millions of vehicles from almost all major car brands could allow miscreants to perform any manner of mischief — in some cases including full takeovers —  by exploiting vulnerabilities in the vehicles’ telematic systems, automotive APIs and supporting infrastructure, according to security researchers.

Specifically, the vulnerabilities affect Mercedes-Benz, BMW, Rolls Royce, Ferrari, Ford, Porsche, Toyota, Jaguar and Land Rover, plus fleet management company Spireon and digital license plate company Reviver.

The research builds on Yuga Labs’ Sam Curry’s earlier car hacking expeditions that uncovered flaws affecting Hyundai and Genesis vehicles, as well as Hondas, Nissans, Infinitis and Acuras via an authorization flaw in Sirius XM’s Connected Vehicle Services.

All of the bugs have since been fixed.

“The affected companies all fixed the issues within one or two days of reporting,” Curry told The Register. “We worked with all of them to validate them and make sure there weren’t any bypasses.”

[…]

Curry and the team discovered multiple SQL injection and authorization bypass vulnerabilities that allowed them to perform remote code execution across all of Spireon and fully take over any fleet vehicle.

“This would’ve allowed us to track and shut off starters for police, ambulances, and law enforcement vehicles for a number of different large cities and dispatch commands to those vehicles,” the researchers wrote.

The bugs also gave them full administrator access to Spireon and a company-wide administration panel from which an attacker could send arbitrary commands to all 15 million vehicles, thus remotely unlocking doors, honking horns, starting engines […]

[…]

With Ferrari, the researchers found overly permissive access controls that allowed them to access JavaScript code for several internal applications. The code contained API keys and credentials that could have allowed attackers to access customer records and take over (or delete) customer accounts.

[…]

a misconfigured single sign-on (SSO) portal for all employees and contractors of BMW, which owns Rolls-Royce, would have allowed access to any application behind the portal.

[…]

misconfigured SSO for Mercedes-Benz allowed the researchers to create a user account on a website intended for vehicle repair shops to request specific tools. They then used this account to sign in to the Mercedes-Benz Github, which held internal documentation and source code for various Mercedes-Benz projects including its Me Connect app used by customers to remotely connect to their vehicles.

The researchers reported this vulnerability to the automaker, and they noted that Mercedes-Benz “seemed to misunderstand the impact” and wanted further details about why this was a problem.

So the team used their newly created account credentials to login to several applications containing sensitive data. Then they “achieved remote code execution via exposed actuators, spring boot consoles, and dozens of sensitive internal applications used by Mercedes-Benz employees.”

One of these was the carmaker’s version of Slack. “We had permission to join any channel, including security channels, and could pose as a Mercedes-Benz employee who could ask whatever questions necessary for an actual attacker to elevate their privileges across the Benz infrastructure,” the researchers explained.

A Mercedes-Benz spokesperson confirmed that Curry contacted the company about the vulnerability and that it had been fixed.

[…]

vulnerabilities affecting Porsche’s telematics service that allowed them to remotely retrieve vehicle location and send vehicle commands.

Plus, they found an access-control vulnerability on the Toyota Financial app that disclosed the name, phone number, email address, and loan status of any customer. Toyota Motor Credit told The Register that it fixed the issue.

[…]

Source: Here’s how to remotely takeover a Ferrari…account, that is • The Register

LastPass is being sued following major cyberattack

[…]

According to the class action complaint filed in a Massachusetts court, names, usernames, billing addresses, email addresses, telephone numbers, and even the IP addresses used to access the service were all made available to wrongdoers.

The final straw could have been the leak of customers’ unencrypted vault data, which includes all manner of information ranging from website usernames and passwords to other secure notes and form data.

According to the lawsuit, “LastPass understood and appreciated the value of this Information yet chose to ignore it by failing to invest in adequate data security measures”.

The case’s plaintiff claims to have invested $53,000 in Bitcoin since July 2022, which was “stolen” several months later, leading to police and FBI reports.

[…]

Source: LastPass is being sued following major cyberattack

There are more articles about LastPass on this blog. It seems they did not take their security quite as seriously as they led us to believe.

FBI warns of fake shopping sites – recommends using an ad blocker

The FBI is warning the public that cyber criminals are using search engine advertisement services to impersonate brands and direct users to malicious sites that host ransomware and steal login credentials and other financial information.

[…]

Cyber criminals purchase advertisements that appear within internet search results using a domain that is similar to an actual business or service. When a user searches for that business or service, these advertisements appear at the very top of search results with minimum distinction between an advertisement and an actual search result. These advertisements link to a webpage that looks identical to the impersonated business’s official webpage.
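
That “minimum distinction” is why the FBI’s first tip below is to scrutinize the URL itself. A crude way to automate that check is to measure how close a domain is to the brands you actually use; a minimal sketch using only the standard library (the brand list and threshold are made up for illustration):

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

KNOWN_BRANDS = ["paypal.com", "amazon.com", "github.com"]  # illustrative list

def looks_like_typosquat(url: str, threshold: float = 0.8) -> str | None:
    """Return the brand a domain closely imitates without actually being it."""
    domain = (urlparse(url).hostname or "").lower()
    for brand in KNOWN_BRANDS:
        close = SequenceMatcher(None, domain, brand).ratio() >= threshold
        if close and domain != brand and not domain.endswith("." + brand):
            return brand
    return None

print(looks_like_typosquat("https://paypa1.com/login"))  # -> paypal.com
```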

[…]

The FBI recommends individuals take the following precautions:

  • Before clicking on an advertisement, check the URL to make sure the site is authentic. A malicious domain name may be similar to the intended URL but with typos or a misplaced letter.
  • Rather than search for a business or financial institution, type the business’s URL into an internet browser’s address bar to access the official website directly.
  • Use an ad blocking extension when performing internet searches. Most internet browsers allow a user to add extensions, including extensions that block advertisements. These ad blockers can be turned on and off within a browser to permit advertisements on certain websites while blocking advertisements on others.

The FBI recommends businesses take the following precautions:

  • Use domain protection services to notify businesses when similar domains are registered to prevent domain spoofing.
  • Educate users about spoofed websites and the importance of confirming destination URLs are correct.
  • Educate users about where to find legitimate downloads for programs provided by the business.

Source: Internet Crime Complaint Center (IC3) | Cyber Criminals Impersonating Brands Using Search Engine Advertisement Services to Defraud Users

For Firefox you have uBlock Origin or NoScript / Disconnect / Facebook Container / Privacy Badger / Ghostery / Super Agent / LocalCDN – you can run them all at once, but you will sometimes have to whitelist certain sites just to get them to work. It’s a bit of trouble, but the internet looks much better when it’s mainly ad-free.

LastPass breached again

In keeping with our commitment to transparency, I wanted to inform you of a security incident that our team is currently investigating. 

We recently detected unusual activity within a third-party cloud storage service, which is currently shared by both LastPass and its affiliate, GoTo. We immediately launched an investigation, engaged Mandiant, a leading security firm, and alerted law enforcement. 

We have determined that an unauthorized party, using information obtained in the August 2022 incident, was able to gain access to certain elements of our customers’ information. Our customers’ passwords remain safely encrypted due to LastPass’s Zero Knowledge architecture. 

We are working diligently to understand the scope of the incident and identify what specific information has been accessed. In the meantime, we can confirm that LastPass products and services remain fully functional.

[…]

Source: Notice of Recent Security Incident – The LastPass Blog

Token tactics: How to prevent, detect, and respond to cloud token theft

[…] Recently, the Microsoft Detection and Response Team (DART) has seen an increase in attackers utilizing token theft for this purpose. By compromising and replaying a token issued to an identity that has already completed multifactor authentication, the threat actor satisfies the validation of MFA and access is granted to organizational resources accordingly. This is a concerning tactic for defenders because the expertise needed to compromise a token is very low, token theft is hard to detect, and few organizations have token theft mitigations in their incident response plan.

[…]

Tokens are at the center of OAuth 2.0 identity platforms, such as Azure Active Directory (Azure AD). To access a resource (for example, a web application protected by Azure AD), a user must present a valid token. To obtain that token, the user must sign into Azure AD using their credentials. At that point, depending on policy, they may be required to complete MFA. The user then presents that token to the web application, which validates the token and allows the user access.

Figure 1. OAuth Token flow chart (Azure Active Directory issuing tokens)

When Azure AD issues a token, it contains information (claims) such as the username, source IP address, MFA, and more. It also includes any privilege a user has in Azure AD. If you sign in as a Global Administrator to your Azure AD tenant, then the token will reflect that. Two of the most common token theft techniques DART has observed have been through adversary-in-the-middle (AitM) frameworks or the utilization of commodity malware (which enables a ‘pass-the-cookie’ scenario).
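
Those claims are easy to look at yourself: Azure AD access tokens are JWTs, so the payload segment is just base64url-encoded JSON. A minimal sketch that decodes it without verifying the signature, purely for inspection (the claim names shown are typical but treat them as assumptions for your tenant):

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode a JWT's payload (middle segment) without signature verification."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore the stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Example (paste a real token in place of the placeholder):
# claims = jwt_claims("<header>.<payload>.<signature>")
# print({k: claims.get(k) for k in ("upn", "ipaddr", "amr", "roles")})
# An "amr" of ["mfa"] is exactly what a replayed token keeps asserting.
```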

[…]

When the user is phished, the malicious infrastructure captures both the credentials of the user, and the token.

Figure 3. Adversary-in-the-middle (AitM) attack flowchart

If a regular user is phished and their token stolen, the attacker may attempt business email compromise (BEC) for financial gain.

[…]

A “pass-the-cookie” attack is a type of attack where an attacker can bypass authentication controls by compromising browser cookies.

[…]

Commodity credential theft malware like Emotet, Redline, IcedID, and more all have built-in functionality to extract and exfiltrate browser cookies. Additionally, the attacker does not have to know the compromised account password or even the email address for this to work; those details are held within the cookie.
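
The mechanics are almost embarrassingly simple: whoever presents the session cookie is the user, with MFA already done. A generic illustration with the requests library, where the portal URL and cookie name are placeholders rather than any vendor's real values:

```python
import requests

# A session cookie lifted from a compromised browser profile replaces the whole
# sign-in ceremony: no password, no MFA prompt, no username needed.
stolen_cookies = {"SESSION": "<value exfiltrated by the infostealer>"}  # placeholder

resp = requests.get("https://portal.example.com/mail", cookies=stolen_cookies)
print(resp.status_code)  # an authenticated 200 for as long as the session lives
```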

[…]

Recommendations

Protect

Organizations can take a significant step toward reducing the risk of token theft by ensuring that they have full visibility of where and how their users are authenticating. To access critical applications like Exchange Online or SharePoint, the device used should be known by the organization. Utilizing compliance tools like Intune in combination with device based conditional access policies can help to keep devices up to date with patches, antivirus definitions, and EDR solutions. Allowing only known devices that adhere to Microsoft’s recommended security baselines helps mitigate the risk of commodity credential theft malware being able to compromise end user devices.

For those devices that remain unmanaged, consider utilizing session conditional access policies and other compensating controls to reduce the impact of token theft:

Protect your users by blocking initial access:

  • Plan and implement phishing resistant MFA solutions such as FIDO2 security keys, Windows Hello for Business, or certificate-based authentication for users.
    • While this may not be practical for all users, it should be considered for users of significant privilege like Global Admins or users of high-risk applications.
  • Users that hold a high level of privilege in the tenant should have a segregated cloud-only identity for all administrative activities, to reduce the attack surface from on-premises to cloud in the event of on-premises domain compromise and abuse of privilege. These identities should also not have a mailbox attached to them to prevent the likelihood of privileged account compromise via phishing techniques.

[…]

In instances of token theft, adversaries insert themselves in the middle of the trust chain and often subsequently circumvent security controls. Having visibility, alerting, insights, and a full understanding of where security controls are enforced is key. Treating both identity providers that generate access tokens and their associated privileged identities as critical assets is strongly encouraged.

[…]

Source: Token tactics: How to prevent, detect, and respond to cloud token theft – Microsoft Security Blog

Fix the Android Security Flaw That Lets Anyone Unlock Your Phone

[…] If an attacker inserts their own SIM into a target’s Android, then enters the wrong SIM PIN three times, they can enter their SIM’s PUK to be able to create a new SIM PIN. Once they do, they bypass the lock screen entirely and access the phone. You can watch the hypothetical attack play out in the video below:

Pixel 6 Full Lockscreen Bypass POC

Schütz brought this flaw to Google’s attention back in June of this year, but it took the company five months to finally push a patch.[…]

Source: Fix the Android Security Flaw That Lets Anyone Unlock Your Phone

Introducing Shufflecake: plausible deniability for multiple hidden filesystems on Linux

Today we are excited to release Shufflecake, a tool aimed at helping people whose freedom of expression is threatened by repressive authorities or dangerous criminal organizations, in particular: whistleblowers, investigative journalists, and activists for human rights in oppressive regimes. Shufflecake is FLOSS (Free/Libre, Open Source Software). Source code in C is available and released under the GNU General Public License v3.0 or later.

[…]

Shufflecake is a tool for Linux that allows creation of multiple hidden volumes on a storage device in such a way that it is very difficult, even under forensic inspection, to prove the existence of such volumes. Each volume is encrypted with a different secret key, scrambled across the empty space of an underlying existing storage medium, and indistinguishable from random noise when not decrypted. Even if the presence of the Shufflecake software itself cannot be hidden – and hence the presence of secret volumes is suspected – the number of volumes is also hidden. This allows a user to create a hierarchy of plausible deniability, where “most hidden” secret volumes are buried under “less hidden” decoy volumes, whose passwords can be surrendered under pressure. In other words, a user can plausibly “lie” to a coercive adversary about the existence of hidden data, by providing a password that unlocks “decoy” data. Every volume can be managed independently as a virtual block device, i.e. partitioned, formatted with any filesystem of choice, and mounted and dismounted like a normal disc. The whole system is very fast, with only a minor slowdown in I/O throughput compared to a bare LUKS-encrypted disk, and with negligible waste of memory and disc space.

You can consider Shufflecake a “spiritual successor” of tools such as Truecrypt and Veracrypt, but vastly improved. First of all, it works natively on Linux, it supports any filesystem of choice, and can manage up to 15 nested volumes per device, so as to make deniability of the existence of these partitions really plausible.

[…]

Source: Introducing Shufflecake: plausible deniability for multiple hidden filesystems on Linux – Kudelski Security Research

Lenovo driver goof poses security risk for users of 25 notebook models

More than two dozen Lenovo notebook models are vulnerable to malicious hacks that disable the UEFI secure-boot process and then run unsigned UEFI apps or load bootloaders that permanently backdoor a device, researchers warned on Wednesday.

At the same time that researchers from security firm ESET disclosed the vulnerabilities, the notebook maker released security updates for 25 models, including ThinkPads, Yoga Slims, and IdeaPads. Vulnerabilities that undermine the UEFI secure boot can be serious because they make it possible for attackers to install malicious firmware that survives multiple operating system reinstallations.

[…]

Short for Unified Extensible Firmware Interface, UEFI is the software that bridges a computer’s device firmware with its operating system. As the first piece of code to run when virtually any modern machine is turned on, it’s the first link in the security chain. Because the UEFI resides in a flash chip on the motherboard, infections are difficult to detect and remove. Typical measures such as wiping the hard drive and reinstalling the OS have no meaningful impact because the UEFI infection will simply reinfect the computer afterward.

[…]

Disabling the UEFI Secure Boot frees attackers to execute malicious UEFI apps, something that’s normally not possible because secure boot requires UEFI apps to be cryptographically signed. Restoring the factory-default DBX, meanwhile, allows attackers to load vulnerable bootloaders. In August, researchers from security firm Eclypsium identified three prominent software drivers that could be used to bypass secure boot when an attacker has elevated privileges, meaning administrator on Windows or root on Linux.

The vulnerabilities can be exploited by tampering with variables in NVRAM, the non-volatile RAM that stores various boot options. The vulnerabilities are the result of Lenovo mistakenly shipping Notebooks with drivers that had been intended for use only during the manufacturing process. The vulnerabilities are:

  • CVE-2022-3430: A potential vulnerability in the WMI Setup driver on some consumer Lenovo Notebook devices may allow an attacker with elevated privileges to modify secure boot settings by changing an NVRAM variable.
  • CVE-2022-3431: A potential vulnerability in a driver used during the manufacturing process on some consumer Lenovo Notebook devices that was mistakenly not deactivated may allow an attacker with elevated privileges to modify the secure boot setting by altering an NVRAM variable.
  • CVE-2022-3432: A potential vulnerability in a driver used during the manufacturing process on the Ideapad Y700-14ISK that was mistakenly not deactivated may allow an attacker with elevated privileges to modify the secure boot setting by adjusting an NVRAM variable.

Lenovo is patching only the first two. CVE-2022-3432 will not be patched because the company no longer supports the Ideapad Y700-14ISK, the end-of-life notebook model that’s affected. People using any of the other vulnerable models should install patches as soon as practical.
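
After patching, it is worth confirming Secure Boot is actually enforced. On Linux the firmware exposes this as an EFI variable whose last byte is 1 when enabled; a minimal sketch reading it via efivarfs (the GUID is the standard EFI global-variable GUID, but treat the path as an assumption for your distribution):

```python
from pathlib import Path

SECUREBOOT_VAR = Path(
    "/sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
)

def secure_boot_enabled() -> bool | None:
    """True/False from the EFI SecureBoot variable, or None if it can't be read."""
    try:
        data = SECUREBOOT_VAR.read_bytes()
    except (FileNotFoundError, PermissionError):
        return None
    # efivarfs prefixes the value with a 4-byte attribute field; the flag is the last byte.
    return data[-1] == 1

print(secure_boot_enabled())  # `mokutil --sb-state` reports the same thing
```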

Source: Lenovo driver goof poses security risk for users of 25 notebook models | Ars Technica

Egypt’s COP27 summit app can read your emails and encrypted messages, scan your device, send your location

Western security advisers are warning delegates at the COP27 climate summit not to download the host Egyptian government’s official smartphone app, amid fears it could be used to hack their private emails, texts and even voice conversations.

[…]

The potential vulnerability from the Android app, which has been downloaded thousands of times and provides a gateway for participants at COP27, was confirmed separately by four cybersecurity experts who reviewed the digital application for POLITICO.

The app is being promoted as a tool to help attendees navigate the event. But it risks giving the Egyptian government permission to read users’ emails and messages. Even messages shared via encrypted services like WhatsApp are vulnerable, according to POLITICO’s technical review of the application, and two of the outside experts.

The app also provides Egypt’s Ministry of Communications and Information Technology, which created it, with other so-called backdoor privileges, or the ability to scan people’s devices.

On smartphones running Google’s Android software, it has permission to potentially listen into users’ conversations via the app, even when the device is in sleep mode, according to the three experts and POLITICO’s separate analysis. It can also track people’s locations via smartphone’s built-in GPS and Wi-Fi technologies, according to two of the analysts.

The app is nothing short of “a surveillance tool that could be weaponized by the Egyptian authorities to track activists, government delegates and anyone attending COP27,” said Marwa Fatafta, digital rights lead for the Middle East and North Africa for Access Now, a nonprofit digital rights organization.

[…]

Both Google and Apple approved the app to appear in their separate app stores. All of the analysts only reviewed the Android version of the app, and not the separate app created for Apple’s devices. Apple declined to comment on the separate app created for its App Store.

[…]

As part of the smartphone app’s privacy notice, the Egyptian government says it has the right to use information provided by those who have downloaded the app, including GPS locations, camera access, photos and Wi-Fi details.

“Our application reserves the right to access customer accounts for technical and administrative purposes and for security reasons,” the privacy statement said.

Yet the technical review of the COP27 smartphone application, both by POLITICO and the outside experts, discovered further permissions that people had unwittingly granted to the Egyptian government and that were not made public via its public statements.

These included the application having the right to track what attendees did on other apps on their phone; connecting users’ smartphones via Bluetooth to other hardware in ways that could lead to data being offloaded onto government-owned devices; and independently linking individuals’ phones to Wi-Fi networks, or making calls on their behalf without them knowing.
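
Checking claims like these does not require a security lab: the Android SDK's `aapt` tool dumps exactly what an APK's manifest requests. A minimal sketch around it (the APK filename is a placeholder):

```python
import subprocess

def requested_permissions(apk_path: str) -> list[str]:
    """Permissions declared in an APK's manifest, per `aapt dump permissions`."""
    out = subprocess.run(
        ["aapt", "dump", "permissions", apk_path],
        capture_output=True, text=True, check=True,
    ).stdout
    perms = []
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("uses-permission:"):
            # Newer aapt: uses-permission: name='android.permission.RECORD_AUDIO'
            # Older aapt: uses-permission: android.permission.RECORD_AUDIO
            value = line.split(":", 1)[1].strip()
            perms.append(value.split("'")[1] if "'" in value else value)
    return perms

for permission in requested_permissions("cop27.apk"):  # placeholder filename
    print(permission)
```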

[…]

Source: Egypt’s COP27 summit app is a cyber weapon, experts warn – POLITICO

AstraZeneca puts username and password on Github, exposes patient data in test environment for a year

Pharmaceutical giant AstraZeneca has blamed “user error” for leaving a list of credentials online for more than a year that exposed access to sensitive patient data.

Mossab Hussein, chief security officer at cybersecurity startup SpiderSilk, told TechCrunch that a developer left the credentials for an AstraZeneca internal server on code sharing site GitHub in 2021. The credentials allowed access to a test Salesforce cloud environment, often used by businesses to manage their customers, but the test environment contained some patient data, Hussein said.
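
Catching that kind of slip before it lands on GitHub does not take much; dedicated tools (gitleaks, trufflehog, GitHub's own secret scanning) do it properly, but even a few regexes over a working tree show the idea. A minimal sketch with illustrative patterns:

```python
import re
from pathlib import Path

# Illustrative patterns only; real scanners ship hundreds of provider-specific rules.
PATTERNS = {
    "hard-coded password": re.compile(r"(?i)password\s*[:=]\s*['\"][^'\"]{6,}['\"]"),
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_tree(root: str = ".") -> None:
    for path in Path(root).rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                line_no = text.count("\n", 0, match.start()) + 1
                print(f"{path}:{line_no}: possible {name}")

scan_tree()
```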

[…]

Due to an [sic] user error, some data records were temporarily available on a developer platform. We stopped access to this data immediately after we have been [sic] informed. We are investigating the root cause as well as assessing our regulatory obligations.”

Barth declined to say for what reason patient data was stored on a test environment, and if AstraZeneca has the technical means, such as logs, to determine if anyone accessed the data and what, if any, data was exfiltrated.

[…]

Source: AstraZeneca password lapse exposed patient data | TechCrunch

Wi-Peep drone locates all your wifi devices and maps them in your home, can tell if your watch is moving around

We present Wi-Peep – a new location-revealing privacy attack on non-cooperative Wi-Fi devices. Wi-Peep exploits loopholes in the 802.11 protocol to elicit responses from Wi-Fi devices on a network that we do not have access to. It then uses a novel time-of-flight measurement scheme to locate these devices. Wi-Peep works without any hardware or software modifications on target devices and without requiring access to the physical space that they are deployed in. Therefore, a pedestrian or a drone that carries a Wi-Peep device can estimate the location of every Wi-Fi device in a building. Our Wi-Peep design costs $20 and weighs less than 10 g. We deploy it on a lightweight drone and show that a drone flying over a house can estimate the location of Wi-Fi devices across multiple floors to meter-level accuracy. Finally, we investigate different mitigation techniques to secure future Wi-Fi devices against such attacks.
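
The time-of-flight measurement boils down to simple arithmetic: the frame and its acknowledgement travel at the speed of light, so the round-trip time minus the fixed turnaround delay inside the target's Wi-Fi chip gives twice the distance. A minimal sketch of that calculation (the delay value is illustrative, not the paper's calibrated constant):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def estimate_distance(rtt_s: float, chip_turnaround_s: float) -> float:
    """Distance implied by the round trip of an 802.11 frame and its ACK."""
    time_in_flight = rtt_s - chip_turnaround_s
    return SPEED_OF_LIGHT * time_in_flight / 2.0

# e.g. a 170 ns round trip, of which roughly 150 ns is turnaround in the responding chip
print(f"{estimate_distance(170e-9, 150e-9):.1f} m")  # ~3.0 m
```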

Source: Non-cooperative wi-fi localization & its privacy implications | Proceedings of the 28th Annual International Conference on Mobile Computing And Networking

British govt is scanning all Internet devices hosted in UK

The United Kingdom’s National Cyber Security Centre (NCSC), the government agency that leads the country’s cyber security mission, is now scanning all Internet-exposed devices hosted in the UK for vulnerabilities.

The goal is to assess UK’s vulnerability to cyber-attacks and to help the owners of Internet-connected systems understand their security posture.

“These activities cover any internet-accessible system that is hosted within the UK and vulnerabilities that are common or particularly important due to their high impact,” the agency said.

“The NCSC uses the data we have collected to create an overview of the UK’s exposure to vulnerabilities following their disclosure, and track their remediation over time.”

NCSC’s scans are performed using tools hosted in a dedicated cloud-hosted environment from scanner.scanning.service.ncsc.gov.uk and two IP addresses (18.171.7.246 and 35.177.10.231).
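
Because the scanner announces itself with a fixed hostname and two fixed IP addresses, tagging its probes in your own logs is trivial. A minimal sketch that filters a combined-format web access log on those two published addresses (the log path is an assumption):

```python
NCSC_SCANNER_IPS = {"18.171.7.246", "35.177.10.231"}  # published by the NCSC

def ncsc_probe_lines(log_path: str = "/var/log/nginx/access.log"):
    """Yield access-log lines whose source address is one of the NCSC scanners."""
    with open(log_path) as log:
        for line in log:
            source_ip = line.split(" ", 1)[0]  # first field in combined log format
            if source_ip in NCSC_SCANNER_IPS:
                yield line.rstrip()

for hit in ncsc_probe_lines():
    print(hit)
```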

The agency says that all vulnerability probes are tested within its own environment to detect any issues before scanning the UK Internet.

“We’re not trying to find vulnerabilities in the UK for some other, nefarious purpose,” NCSC technical director Ian Levy explained.

“We’re beginning with simple scans, and will slowly increase the complexity of the scans, explaining what we’re doing (and why we’re doing it).”

How to opt out of vulnerability probes

Data collected from these scans includes any data sent back when connecting to services and web servers, such as the full HTTP responses (including headers).

Requests are designed to harvest the minimum amount of info required to check if the scanned asset is affected by a vulnerability.

If any sensitive or personal data is inadvertently collected, the NCSC says it will “take steps to remove the data and prevent it from being captured again in the future.”

British organizations can also opt out of having their servers scanned by the government by emailing a list of IP addresses they want excluded to scanning@ncsc.gov.uk.

In January, the cybersecurity agency also started releasing Nmap Scripting Engine scripts to help defenders scan for and remediate vulnerable systems on their networks.

The NCSC plans to release new Nmap scripts only for critical security vulnerabilities it believes to be at the top of threat actors’ targeting lists.

Source: British govt is scanning all Internet devices hosted in UK

Multi-factor authentication bombing fatigue can blow open security

The September cyberattack on ride-hailing service Uber began when a criminal bought the stolen credentials of a company contractor on the dark web.

The miscreant then repeatedly tried to log into the contractor’s Uber account, triggering the two-factor login approval request that the contractor initially denied, blocking access. However, eventually the contractor accepted one of many push notifications, enabling the attacker to log into the account and get access to Uber’s corporate network, systems, and data.

[…]

Microsoft and Cisco Systems were also victims of MFA fatigue – also known as MFA spamming or MFA bombing – this year, and such attacks are rising rapidly. According to Microsoft, between December 2021 and August, the number of MFA attacks spiked. There were 22,859 Azure Active Directory Protection sessions with multiple failed MFA attempts last December. In August, there were 40,942.

[…]

In an MFA fatigue situation, the attacker uses the stolen credentials to try to sign into a protected account over and over, overwhelming the user with push notifications. The user may initially tap on the prompt saying it isn’t them trying to sign in, but eventually they wear down from the spamming and accept it just to stop their phone going off. They may assume it’s a temporary glitch or an automated system causing the surge in requests.

[…]

sometimes the attacker will pose as part of the organization’s IT staff, messaging the employee to accept the access attempt.

[…]

Ensuring authentication apps can’t be fat-fingered and requests wrongly accepted before they can be fully evaluated, for instance, would be handy. Adding intelligent handling of logins, so that there’s a cooling off period after a bout of MFA spam, is, again, useful, too.
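
That cooling-off idea is simple enough to sketch: after a handful of denied prompts in a short window, stop sending pushes for a while and require a stronger factor instead. A minimal in-memory sketch (the thresholds and function names are invented for illustration):

```python
import time
from collections import defaultdict, deque

DENIAL_LIMIT = 3          # denied prompts tolerated...
WINDOW_SECONDS = 300      # ...within this window...
COOLDOWN_SECONDS = 1800   # ...before push MFA is suppressed for this long

_recent_denials = defaultdict(deque)  # user -> timestamps of recent denied prompts
_cooldown_until = {}                  # user -> epoch time when pushes may resume

def record_denial(user: str, now: float | None = None) -> None:
    now = now or time.time()
    denials = _recent_denials[user]
    denials.append(now)
    while denials and now - denials[0] > WINDOW_SECONDS:
        denials.popleft()
    if len(denials) >= DENIAL_LIMIT:
        _cooldown_until[user] = now + COOLDOWN_SECONDS  # start the cooling-off period

def push_allowed(user: str, now: float | None = None) -> bool:
    """False while the user is cooling off after a burst of denied prompts."""
    now = now or time.time()
    return now >= _cooldown_until.get(user, 0.0)
```

During the cooldown the login flow would fall back to a phishing-resistant factor, or simply fail, rather than keep ringing the user's phone.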

And on top of this, some forms of MFA, such as one-time authentication tokens, can be phished along with usernames and passwords to allow a miscreant to log in as their victim. Finding and implementing a phish-resistant MFA approach is something worth thinking about.

[…]

Some companies are on the ball. Microsoft, for instance, is making number matching a default feature in its Authenticator app. This requires a user who responds to an MFA push notification using the tool to type in a number that appears on their device’s screen to approve a login. The number will only be sent to users who have been enabled for number matching, according to Microsoft.

They’re also adding other features to Authenticator, including showing users what application they’re signing into and the location of the device, based on its IP address, that is being used for signing in. If the user is in California but the device is in Europe, that should raise a big red flag. That also ought to be automatically caught by authentication systems, too.

[…]

As to limiting the number of unsuccessful MFA authentication requests: Okta limits that number to five; Microsoft and Duo offer organizations the ability to implement it in their settings and adjust the number of failed attempts before the user’s account is automatically locked. With Microsoft Authenticator, enterprises also can set the number of minutes before an account lockout counter is reset.

[…]

Source: Multi-factor authentication fatigue can blow open security • The Register

Whoops! Amazon Left Prime Video DB with viewing habits (Named ‘Sauron’) Unprotected – yup Elasticsearch

Amazon didn’t protect one of its internal servers, allowing anyone to view a database named “Sauron” which was full of Prime Video viewing habits.

As TechCrunch reports(Opens in a new window), the unprotected Elasticsearch database was discovered by security researcher Anurag Sen(Opens in a new window). Contained within the database, which anyone who knew the IP address could access using a web browser, were roughly 215 million records of Prime Video viewing habit information. The data included show/movie name, streaming device used, network quality, subscription details, and Prime customer status.
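
“Access using a web browser” is meant literally: an unauthenticated Elasticsearch node answers plain HTTP, typically on port 9200. A minimal sketch of what a researcher or a scraper bot does against such an instance (the address and index name are placeholders):

```python
import requests

BASE = "http://203.0.113.10:9200"  # placeholder address of an exposed node

# Enumerate every index and its document count; no credentials are needed when
# the cluster has been left open.
print(requests.get(f"{BASE}/_cat/indices?v", timeout=10).text)

# Pull a sample of documents from one index.
sample = requests.get(f"{BASE}/viewing-history/_search?size=3", timeout=10).json()
for hit in sample["hits"]["hits"]:
    print(hit["_source"])
```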

[…]

Source: Whoops! Amazon Left a Prime Video Database Named ‘Sauron’ Unprotected | PCMag

Thomson Reuters leaked at least 3TB of sensitive data – yes, open elasticsearch instances

The Cybernews research team found that Thomson Reuters left at least three of its databases accessible for anyone to look at. One of the open instances, the 3TB public-facing ElasticSearch database, contains a trove of sensitive, up-to-date information from across the company’s platforms. The company recognized the issue and fixed it immediately.

Thomson Reuters provides customers with products such as the business-to-business media tool Reuters Connect, legal research service and database Westlaw, the tax automation system ONESOURCE, online research suite of editorial and source materials Checkpoint, and other tools.

The size of the open database the team discovered corresponds with the company using ElasticSearch, a data store favored by enterprises dealing with extensive, constantly updated volumes of data.

  • Media giant with $6.35 billion in revenue left at least three of its databases open
  • At least 3TB of sensitive data exposed including Thomson Reuters plaintext passwords to third-party servers
  • The data company collects is a treasure trove for threat actors, likely worth millions of dollars on underground criminal forums
  • The company has immediately fixed the issue, and started notifying their customers
  • Thomson Reuters downplayed the issue, saying it affects only a “small subset of Thomson Reuters Global Trade customers”
  • The dataset was open for several days – malicious bots are capable of discovering instances within mere hours
  • Threat actors could use the leak for attacks, from social engineering attacks to ransomware

The naming of ElasticSearch indices inside the Thomson Reuters server suggests that the open instance was used as a logging server to collect vast amounts of data gathered through user-client interaction. In other words, the company collected and exposed thousands of gigabytes of data that Cybernews researchers believe would be worth millions of dollars on underground criminal forums because of the potential access it could give to other systems.

Meanwhile, Thomson Reuters claims that out of three misconfigured servers the team informed the company about, two were designed to be publicly accessible. The third server was a non-production server meant for “application logs from the pre-production/implementation environment.”

[…]

For example, the open dataset held access credentials to third-party servers. The details were held in plaintext format, visible to anyone crawling through the open instance.

[…]

The team also found the open instance to contain login and password reset logs. While these don’t expose either old or new passwords, the logs show the account holder’s email address and the exact time the password change query was sent.

Another piece of sensitive information includes SQL (structured query language) logs that show what information Thomson Reuters clients were looking for. The records also include what information the query brought back.

That includes documents with corporate and legal information about specific businesses or individuals. For instance, an employee of a company based in the US was looking for information about an organization in Russia using Thomson Reuters services, only to find out that its board members were under US sanctions over their role in the invasion of Ukraine.

The team has also discovered that the open database included an internal screening of other platforms such as YouTube, Thomson Reuters clients’ access logs, and connection strings to other databases. The exposure of connection strings is particularly dangerous because the company’s internal network elements are exposed, enabling threat actors’ lateral movement and pivoting through Thomson Reuters’ internal systems.

[…]

The team contacted Thomson Reuters upon discovering the leaking database, and the company took down the open instance immediately.

“Upon notification we immediately investigated the findings provided by Cybernews regarding the three potentially misconfigured servers,” a Thomson Reuters representative told Cybernews.

[…]

Source: Thomson Reuters leaked at least 3TB of sensitive data | Cybernews

Advocate Aurora Health leaks 3 million patients’ data to big tech through webtracker installation

A hospital network in Wisconsin and Illinois fears visitor tracking code on its websites may have transmitted personal information on as many as 3 million patients to Meta, Google, and other third parties.

Advocate Aurora Health (AAH) reported the potential breach to the US government’s Health and Human Services. As well as millions of patients, AAH has 27 hospitals and 32,000 doctors and nurses on its books.

[…]

Essentially, AAH is saying that it placed analytics code on its online portals to get an idea of how many people visit and login to their accounts, what they use, and so on. It’s now determined that code – known also as trackers or pixels because they may be loaded onto pages as invisible single pixels – may have sent personal info from the pages patients had open to those providing the trackers, such as Facebook or Google.

You might imagine these trackers simply transmit a unique identifier and IP address for the visitor and some details about their actions on the site for subsequent analysis and record keeping. But it turns out these pixels can send back all sorts of things like search terms, your doctor’s name, and the illnesses you’re suffering from.
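
Mechanically, a “pixel” is nothing more than a tiny GET request whose query string carries whatever the page script chose to attach. A generic sketch of the kind of request these trackers fire (the endpoint and parameter names are illustrative, not Meta's or Google's actual schema):

```python
from urllib.parse import urlencode

# What an embedded tracking script effectively does on every page view or click:
params = {
    "id": "123456789",                # the site's tracker ID
    "ev": "PageView",                 # event name
    "dl": "https://portal.example-health.org/find-a-doctor?specialty=oncology",
    "rl": "https://www.google.com/",  # referrer
}
pixel_url = "https://tracker.example.com/collect?" + urlencode(params)
print(pixel_url)
# Everything in the page URL, including the search term "oncology", travels to
# the tracking provider as a side effect of loading a 1x1 image.
```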

[…]

The data that may have been sent, though, is extensive: IP addresses, appointment information including scheduling and type, proximity to an AAH facility, provider information, digital messages, first and last name, insurance data, and MyChart account information may all have been exposed. AAH said financial and Social Security information was not compromised.

[…]

Earlier this year, it was shown that Meta’s pixels could collect a lot more than basic usage metrics, transmitting personal data to Zuckercorp even for people who didn’t have Facebook accounts. The same is true of other trackers, such as TikTok’s, which can gather personal data regardless of whether a website’s visitor has ever set a digital foot on the China-owned social network.

Generally speaking, site and app owners have control over how much or how little is collected by the trackers they place on their pages. You can configure which activities trigger a ping back to the pixel provider, such as Meta, which you can then review from a backend dashboard.

While the info exposed by AAH was not grabbed by hackers, it is now in the hands of Big Tech, which is a privacy concern no matter what those technology companies say.

AAH said it – like so many other organizations, government and private – was using the trackers to aggregate user data for analysis, and it only seems to have just occurred to the nonprofit that this data is private health information and shouldn’t really be fed into Meta or Google.

[…]

Source: Advocate Aurora Health in potential 3 million patient leak • The Register

iOS 16 VPN Tunnels Leak Data, Even When Lockdown Mode Is Enabled

AmiMoJo shares a report from MacRumors: iOS 16 continues to leak data outside an active VPN tunnel, even when Lockdown mode is enabled, security researchers have discovered. Speaking to MacRumors, security researchers Tommy Mysk and Talal Haj Bakry explained that iOS 16’s approach to VPN traffic is the same whether Lockdown mode is enabled or not. The news is significant since iOS has a persistent, unresolved issue with leaking data outside an active VPN tunnel.

According to a report from privacy company Proton, an iOS VPN bypass vulnerability had been identified in iOS 13.3.1, which persisted through three subsequent updates. Apple indicated it would add Kill Switch functionality in a future software update that would allow developers to block all existing connections if a VPN tunnel is lost, but this functionality does not appear to prevent data leaks as of iOS 15 and iOS 16. Mysk and Bakry have now discovered that iOS 16 communicates with select Apple services outside an active VPN tunnel and leaks DNS requests without the user’s knowledge.

Mysk and Bakry also investigated whether iOS 16’s Lockdown mode takes the necessary steps to fix this issue and funnel all traffic through a VPN when one is enabled, and it appears that the exact same issue persists whether Lockdown mode is enabled or not, particularly with push notifications. This means that the minority of users who are vulnerable to a cyberattack and need to enable Lockdown mode are equally at risk of data leaks outside their active VPN tunnel. […] Due to the fact that iOS 16 leaks data outside the VPN tunnel even where Lockdown mode is enabled, internet service providers, governments, and other organizations may be able to identify users who have a large amount of traffic, potentially highlighting influential individuals. It is possible that Apple does not want a potentially malicious VPN app to collect some kinds of traffic, but seeing as ISPs and governments are then able to do this, even if that is what the user is specifically trying to avoid, it seems likely that this is part of the same VPN problem that affects iOS 16 as a whole

Source: https://m.slashdot.org/story/405931