Microsoft researchers have found evidence that Russian and North Korean hackers have systematically attacked Covid-19 labs and vaccine makers in an effort to steal data and initiate ransomware attacks.
“Among the targets, the majority are vaccine makers that have Covid-19 vaccines in various stages of clinical trials, a clinical research organization involved in trials, and one that has developed a Covid-19 test,” said Tom Burt, a VP in Customer Security at Microsoft. “Multiple organizations targeted have contracts with or investments from government agencies from various democratic countries for Covid-19 related work.”
“The targets include leading pharmaceutical companies and vaccine researchers in Canada, France, India, South Korea, and the United States. The attacks came from Strontium, an actor originating from Russia, and two actors originating from North Korea that we call Zinc and Cerium,” wrote Burt.
The attacks seem to be brute force login attempts and spear-phishing meant to lure victims into giving up their security credentials. Microsoft, obviously, reports that its tools were able to catch and prevent most of the attacks. Sadly, the hackers are pretending to be World Health Organization reps in order to trick doctors into installing malware.
Zack Whittaker at TechCrunch noted that the Russian group, Strontium, is better known as APT28 or Fancy Bear, and the other groups are probably part of the North Korean Lazarus Group, the hackers responsible for the WannaCry ransomware attack in 2017 and the Sony Pictures hack in 2014.
In a blog post, Alex Weinert, director of identity security at Microsoft, says people should definitely use MFA. He claims that accounts using any type of MFA get compromised at a rate that’s less than 0.1 per cent of the general population.
At the same time, he argues people should avoid relying on SMS messages or voice calls to handle one-time passcodes (OTPs) because phone-based protocols are fundamentally insecure.
“These mechanisms are based on public switched telephone networks (PSTN), and I believe they’re the least secure of the MFA methods available today,” said Weinert. “That gap will only widen as MFA adoption increases attackers’ interest in breaking these methods and purpose-built authenticators extend their security and usability advantages.”
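For a sense of what a “purpose-built authenticator” actually does, here is a minimal sketch of RFC 6238 TOTP in Python – the same HMAC-based computation an authenticator app performs locally from a shared secret, with no SMS or voice channel anywhere in the loop (the secret below is a sample value, not a real one):

import base64, hmac, hashlib, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    # decode the shared secret (base32, as delivered in provisioning QR codes)
    key = base64.b32decode(secret_b32, casefold=True)
    # the moving factor is the current 30-second interval number
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    # dynamic truncation per RFC 4226
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # sample secret; prints a 6-digit code

Because the code is derived locally on both ends, there is nothing for a SIM swapper or an SS7 eavesdropper to intercept in transit.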
Hacking techniques like SIM swapping – where a miscreant calls a mobile carrier posing as a customer to request the customer’s number be ported to a different SIM card in the attacker’s possession – and more sophisticated network attacks like SS7 interception have demonstrated the security shortcomings of public phone networks and the companies running them.
Computer scientists from Princeton University examined SIM swapping in a research study [PDF] earlier this year and their results support Weinert’s claims. They tested AT&T, T-Mobile, Tracfone, US Mobile, and Verizon Wireless and found “all 5 carriers used insecure authentication challenges that could easily be subverted by attackers.”
They also looked at 140 online services that used phone-based authentication to see whether they resisted SIM swapping attacks. And they found 17 had authentication policies that allowed an attacker to hijack an account with a SIM swap.
In September, security firm Check Point Research published a report describing various espionage campaigns, including the discovery of malware that sets up an Android backdoor to steal two-factor authentication codes from SMS messages.
Weinert argues that SMS and voice protocols were not designed with encryption, are easy to attack using social engineering, rely on unreliable mobile carriers, and are subject to shifting regulation.
Swiss politicians only found out last year that cipher machine company Crypto AG was (quite literally) owned by the US and Germany during the Cold War, a striking report from the Swiss parliament has revealed.
The company, which supplied high-grade encryption machines to governments and corporations around the world, was in fact owned by the US civilian foreign intelligence service the CIA and Germany’s BND spy agency during the Cold War, as we reported earlier this year.
Although Swiss spies themselves knew that Crypto AG’s products were being intentionally weakened so the West could read messages passing over them, they didn’t tell governmental overseers until last year – barely one year after the operation ended.
So stated the Swiss federal parliament in a report published yesterday afternoon, which has raised fresh eyebrows over the scandal. While infosec greybeard Bruce Schneier told El Reg last year: “I thought we knew this for decades,” referring to age-old (but accurate, though officially denied) news reports of the compromise, this year’s revelations have been the first official admissions not only that this was going on, but that it was deliberately hidden from overseers.
[…]
The revelations that the Swiss state itself knew about Crypto AG’s operations may prove to be a diplomatic embarrassment; aside from secrecy and chocolate, Switzerland’s other big selling point on the international stage is that it is very publicly and deliberately neutral. Secretly cooperating with Western spies during the Cold War and beyond, and enabling spying on state-level customers, is likely to harm that reputation.
Professor Woodward concluded: “If nothing else this whole episode shows that it’s easier to interfere with equipment handling encryption than to try to tackle the encryption head on. But, it has a warning for those who would seek to give a golden key, weaken encryption or provide some other means for government agencies to read encrypted messages. Just like you can’t be a little bit pregnant, if the crypto is weakened then you have to assume your communications are no longer secure.”
In September, we noted that officials in the EU were continuing an effort to try to ban end-to-end encryption. Of course, that’s not how they put it. They say they just want “lawful access” to encrypted content, not recognizing that any such backdoor effectively obliterates the protections of end-to-end encryption. A new “Draft Council Resolution on Encryption” has come out as the EU Council of Ministers continues to drift dangerously towards this ridiculous position.
We’ve seen documents like this before. It starts out with a preamble insisting that they’re not really trying to undermine encryption, even though they absolutely are.
The European Union fully supports the development, implementation and use of strong encryption. Encryption is a necessary means of protecting fundamental rights and the digital security of governments, industry and society. At the same time, the European Union needs to ensure the ability of competent authorities in the area of security and criminal justice, e.g. law enforcement and judicial authorities, to exercise their lawful powers, both online and offline.
Uh huh. That’s basically: we fully support you having privacy in your own home, except when we need to spy on you at a moment’s notice. It’s not so comforting when put that way, but it’s what they’re saying.
[…]
This is the same old garbage we’ve seen before. Technologically illiterate bureaucrats who have no clue at all, insisting that if they just “work together” with the tech industry, some magic golden key will be found. This is not how any of this works. Introducing a backdoor into encryption is introducing a massive, dangerous vulnerability.
[…]
Attacking end-to-end encryption in order to deal with the minuscule number of situations where law enforcement is stymied by encryption would, in actuality, put everyone at massive risk of having their data accessed by malicious parties.
Introducing a backdoor is introducing a vulnerability – one that anyone can exploit: the good guys, the bad guys and the idiots. There is a long and varied history of exploited backdoors in all kinds of very important stuff (e.g. the Clipper chip, the encryption hardware sold to governments, mobile phone networks, even kids’ smartwatches and switches), and they’ve all been misused by malicious actors.
Website Planet reports that Prestige Software, the company behind hotel reservation platforms for Hotels.com, Booking.com and Expedia, left data exposed for “millions” of guests on an Amazon Web Services S3 bucket. The 10 million-plus log files dated as far back as 2013 and included names, credit card details, ID numbers and reservation details.
It’s not certain how long the data was left open, or if anyone took the data. Website Planet said the hole was closed a day after telling AWS about the exposure. Prestige confirmed that it owned the data.
The damage could be severe if crooks found the data. WP warned that it could lead to the all-too-common risks of hotel data exposures: credit card fraud, identity theft and phishing scams. Perpetrators could even hijack a reservation to steal someone else’s vacation.
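For admins wondering about their own exposure, a quick check like the following – a sketch using boto3, with a hypothetical bucket name – reports whether an S3 bucket has a public-access block configured at all:

import boto3                      # pip install boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "example-bucket"         # hypothetical name

try:
    cfg = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
    print(cfg)                    # BlockPublicAcls, RestrictPublicBuckets, etc.
except ClientError as err:
    if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
        print("no public-access block configured at all")
    else:
        raise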
A British software engineer came up with “a fun playful name” for his consulting business. He’d named it:
“”><SCRIPT SRC=HTTPS://MJT.XYZ></SCRIPT> LTD
Unfortunately, this did not amuse the official registrar of companies in the United Kingdom (known as Companies House). The Guardian reports that the U.K. agency “has forced the company to change its name after it belatedly realised it could pose a security risk.” Henceforward, the software engineer’s consulting business will instead be legally known as “THAT COMPANY WHOSE NAME USED TO CONTAIN HTML SCRIPT TAGS LTD.” He now says he didn’t realise that Companies House was actually vulnerable to the extremely simple technique he used, known as “cross-site scripting”, which allows an attacker to run code from one website on another.
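The underlying bug is as simple as the payload: any site that interpolates a stored name into HTML without escaping it will execute whatever the name contains. A minimal Python illustration (the payload URL here is hypothetical):

import html

name = '"><SCRIPT SRC=HTTPS://EXAMPLE.COM/X.JS></SCRIPT> LTD'   # hypothetical payload

print(f"<td>{name}</td>")               # unescaped: a browser would fetch and run the script
print(f"<td>{html.escape(name)}</td>")  # escaped: renders as inert text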
Engadget adds: Companies House, meanwhile, said it had “put measures in place” to prevent a repeat. You won’t be trying this yourself, at least not in the U.K.
It’s more than a little amusing to see a for-the-laughs code name stir up trouble, but this also illustrates just how fragile web security can be.
One of the world’s top certificate authorities warns that phones running versions of Android prior to 7.1.1 Nougat will be cut off from large portions of the secure web starting in 2021, Android Police reported Saturday.
The Mozilla-partnered nonprofit Let’s Encrypt said that its partnership with fellow certificate authority IdenTrust will expire on Sept. 1, 2021. Since it has no plans to renew its cross-signing agreement, Let’s Encrypt plans to stop default cross-signing for IdenTrust’s root certificate, DST Root CA X3, beginning on Jan. 11, 2021, as the organization switches over to solely using its own ISRG Root X1 root.
It’s a pretty significant shift considering that as much as one-third of all web domains rely on the organization’s certificates. But since older software won’t trust Let’s Encrypt’s root certificate, this could “introduce some compatibility woes,” lead developer Jacob Hoffman-Andrews said in a blog post Friday.
“Some software that hasn’t been updated since 2016 (approximately when our root was accepted to many root programs) still doesn’t trust our root certificate, ISRG Root X1,” he said. “Most notably, this includes versions of Android prior to 7.1.1. That means those older versions of Android will no longer trust certificates issued by Let’s Encrypt.”
The only workaround for these users would be to install Firefox since it relies on its own certificate store that includes Let’s Encrypt’s root, though that wouldn’t keep applications from breaking or ensure functionality beyond your browser.
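Whether a given site is affected comes down to who issued its certificate. A quick sketch (Python, hypothetical hostname) that fetches a site’s leaf certificate and prints its issuer – Let’s Encrypt certificates show an issuer along the lines of CN=R3, O=Let's Encrypt:

import ssl
from cryptography import x509   # pip install cryptography

host = "example.com"            # hypothetical target
pem = ssl.get_server_certificate((host, 443))
cert = x509.load_pem_x509_certificate(pem.encode())
print("subject:", cert.subject.rfc4514_string())
print("issuer: ", cert.issuer.rfc4514_string())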
Let’s Encrypt noted that roughly 34% of Android devices run a version older than 7.1, based on data from Google’s Android development suite. That translates to millions of users potentially being cut off from large portions of the secure web beginning in 2021.
In March 2020, KrebsOnSecurity alerted Swedish security giant Gunnebo Group that hackers had broken into its network and sold the access to a criminal group which specializes in deploying ransomware. In August, Gunnebo said it had successfully thwarted a ransomware attack, but this week it emerged that the intruders stole and published online tens of thousands of sensitive documents — including schematics of client bank vaults and surveillance systems.
The Gunnebo Group is a Swedish multinational company that provides physical security to a variety of customers globally, including banks, government agencies, airports, casinos, jewelry stores, tax agencies and even nuclear power plants. The company has operations in 25 countries, more than 4,000 employees, and billions in revenue annually.
Acting on a tip from Milwaukee, Wis.-based cyber intelligence firm Hold Security, KrebsOnSecurity in March told Gunnebo about a financial transaction between a malicious hacker and a cybercriminal group which specializes in deploying ransomware. That transaction included credentials to a Remote Desktop Protocol (RDP) account apparently set up by a Gunnebo Group employee who wished to access the company’s internal network remotely.
[…]
Larsson quotes Gunnebo CEO Stefan Syrén saying the company never considered paying the ransom the attackers demanded in exchange for not publishing its internal documents. What’s more, Syrén seemed to downplay the severity of the exposure.
“I understand that you can see drawings as sensitive, but we do not consider them as sensitive automatically,” the CEO reportedly said. “When it comes to cameras in a public environment, for example, half the point is that they should be visible, therefore a drawing with camera placements in itself is not very sensitive.”
It remains unclear whether the stolen RDP credentials were a factor in this incident. But the password to the Gunnebo RDP account — “password01” — suggests the security of its IT systems may have been lacking in other areas as well.
Researchers have extracted the secret key that encrypts updates to an assortment of Intel CPUs, a feat that could have wide-ranging consequences for the way the chips are used and, possibly, the way they’re secured.
The key makes it possible to decrypt the microcode updates Intel provides to fix security vulnerabilities and other types of bugs. Having a decrypted copy of an update may allow hackers to reverse engineer it and learn precisely how to exploit the hole it’s patching. The key may also allow parties other than Intel—say a malicious hacker or a hobbyist—to update chips with their own microcode, although that customized version wouldn’t survive a reboot.
“At the moment, it is quite difficult to assess the security impact,” independent researcher Maxim Goryachy said in a direct message. “But in any case, this is the first time in the history of Intel processors when you can execute your microcode inside and analyze the updates.” Goryachy and two other researchers—Dmitry Sklyarov and Mark Ermolov, both with security firm Positive Technologies—worked jointly on the project.
The key can be extracted for any chip—be it a Celeron, Pentium, or Atom—that’s based on Intel’s Goldmont architecture.
[…]
attackers can’t use Chip Red Pill and the decryption key it exposes to remotely hack vulnerable CPUs, at least not without chaining it to other vulnerabilities that are currently unknown. Similarly, attackers can’t use these techniques to infect the supply chain of Goldmont-based devices.
[…]
In theory, it might also be possible to use Chip Red Pill in an evil maid attack, in which someone with fleeting access to a device hacks it. But in either of these cases, the hack would be tethered, meaning it would last only as long as the device was turned on. Once restarted, the chip would return to its normal state. In some cases, the ability to execute arbitrary microcode inside the CPU may also be useful for attacks on cryptography keys, such as those used in trusted platform modules.
“For now, there’s only one but very important consequence: independent analysis of a microcode patch that was impossible until now,” Positive Technologies researcher Mark Ermolov said. “Now, researchers can see how Intel fixes one or another bug/vulnerability. And this is great. The encryption of microcode patches is a kind of security through obscurity.”
It’s said the NSA drew up a report on what it learned after a foreign government exploited a weak encryption scheme, championed by the US spying agency, in Juniper firewall software.
However, curiously enough, the NSA has been unable to find a copy of that report.
On Wednesday, Reuters reporter Joseph Menn published an account of US Senator Ron Wyden’s efforts to determine whether the NSA is still in the business of placing backdoors in US technology products.
Wyden (D-OR) opposes such efforts because, as the Juniper incident demonstrates, they can backfire, thereby harming national security, and because they diminish the appeal of American-made tech products.
But Wyden’s inquiries, as a member of the Senate Intelligence Committee, have been stymied by lack of cooperation from the spy agency and the private sector. In June, Wyden and various colleagues sent a letter to Juniper CEO Rami Rahim asking about “several likely backdoors in its NetScreen line of firewalls.”
Juniper acknowledged in 2015 that “unauthorized code” had been found in ScreenOS, which powers its NetScreen firewalls. It’s been suggested that the code was in place since around 2008.
The Reuters report, citing a previously undisclosed statement to Congress from Juniper, claims that the networking biz acknowledged that “an unnamed national government had converted the mechanism first created by the NSA.”
Wyden staffers in 2018 were told by the NSA that a “lessons learned” report about the incident had been written. But Wyden spokesperson Keith Chu told Reuters that the NSA now claims it can’t find the file. Wyden’s office did not immediately respond to a request for comment.
The reason this malicious code was able to decrypt ScreenOS VPN connections has been attributed to Juniper’s “decision to use the NSA-designed Dual EC Pseudorandom Number Generator.”
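To see why a “backdoored” random number generator is so valuable, consider how Dual EC works: the standard fixes two curve points, P and Q, and whoever chose them could know the secret scalar d with P = d*Q. Anyone holding d can take a single raw output, reconstruct the generator’s internal state, and predict everything that follows. Below is a toy demonstration on a tiny textbook curve – every parameter, including d, is invented for illustration; the real generator used NIST P-256 points and truncated its output, but the algebra of the trapdoor is the same:

# Toy Dual EC DRBG on the textbook curve y^2 = x^3 + 2x + 2 over GF(17)
# (base point order 19). Parameters and the trapdoor d are made up.

P_MOD, A, B = 17, 2, 2
O = None   # point at infinity

def add(p, q):
    if p is O: return q
    if q is O: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return O
    if p == q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x3 - x1) - y1) % P_MOD)

def mul(k, p):
    r = O
    while k:
        if k & 1: r = add(r, p)
        p = add(p, p)
        k >>= 1
    return r

def x_of(p):
    if p is O or p[0] == 0:
        raise ValueError("degenerate point (a non-issue on real-size curves)")
    return p[0]

Q = (5, 1)      # one "standard" point
d = 3           # the secret: whoever picked the constants knows d
P = mul(d, Q)   # the other "standard" point, P = d*Q

def step(s):    # one Dual EC round: returns (next_state, output)
    r = x_of(mul(s, P))
    return x_of(mul(r, P)), x_of(mul(r, Q))

for seed in range(2, 12):   # any seed that avoids toy-curve degeneracies
    try:
        s1, out1 = step(seed)
        _, out2 = step(s1)
    except ValueError:
        continue
    # The attacker sees only out1 but knows d: rebuild R = r*Q from its
    # x-coordinate, then d*R = r*(d*Q) = r*P, whose x-coordinate is the
    # next internal state -- enough to predict all future output.
    y_sq = (out1 ** 3 + A * out1 + B) % P_MOD
    y = next(v for v in range(P_MOD) if v * v % P_MOD == y_sq)
    recovered = x_of(mul(d, (out1, y)))
    print("attacker predicts:", step(recovered)[1], "| actual next output:", out2)
    break

In the real construction the output is truncated by 16 bits, so the attacker brute-forces a few tens of thousands of candidate points per output – a nuisance, not a barrier.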
[…]
After Snowden’s disclosures about the extent of US surveillance operations in 2013, the NSA is said to have revised its policies for compromising commercial products. Wyden and other lawmakers have tried to learn more about these policies but they’ve been stonewalled, according to Reuters.
And this is why you don’t put out insecure security products, which is exactly what products with a backdoor are. Here’s looking at you, UK and Australia and all the other countries trying to force insecure products on us.
In a world first, researchers from the University of Ottawa in collaboration with Israeli scientists have been able to create optical framed knots in the laboratory that could potentially be applied in modern technologies. Their work opens the door to new methods of distributing secret cryptographic keys—used to encrypt and decrypt data, ensure secure communication and protect private information. The group recently published their findings in Nature Communications.
“This is fundamentally important, in particular from a topology-focused perspective, since framed knots provide a platform for topological quantum computations,” explained senior author, Professor Ebrahim Karimi, Canada Research Chair in Structured Light at the University of Ottawa.
“In addition, we used these non-trivial optical structures as information carriers and developed a security protocol for classical communication where information is encoded within these framed knots.”
The concept
The researchers suggest a simple do-it-yourself lesson to help us better understand framed knots, those three-dimensional objects that can also be described as a surface.
“Take a narrow strip of a paper and try to make a knot,” said first author Hugo Larocque, uOttawa alumnus and current Ph.D. student at MIT.
“The resulting object is referred to as a framed knot and has very interesting and important mathematical features.”
The group tried to achieve the same result but within an optical beam, which presents a higher level of difficulty. After a few tries (and knots that looked more like knotted strings), the group came up with what they were looking for: a knotted ribbon structure that is quintessential to framed knots.
Encryption scheme of a framed braid within a framed knot. The knot along with a pair of numbers can be used to recover the encrypted braid by means of a procedure relying on prime factorization. Credit: University of Ottawa
“In order to add this ribbon, our group relied on beam-shaping techniques manipulating the vectorial nature of light,” explained Hugo Larocque. “By modifying the oscillation direction of the light field along an “unframed” optical knot, we were able to assign a frame to the latter by “gluing” together the lines traced out by these oscillating fields.”
According to the researchers, structured light beams are being widely exploited for encoding and distributing information.
“So far, these applications have been limited to physical quantities which can be recognized by observing the beam at a given position,” said uOttawa Postdoctoral Fellow and co-author of this study, Dr. Alessio D’Errico.
“Our work shows that the number of twists in the ribbon orientation in conjunction with prime number factorization can be used to extract a so-called “braid representation” of the knot.”
“The structural features of these objects can be used to specify quantum information processing programs,” added Hugo Larocque. “In a situation where this program would want to be kept secret while disseminating it between various parties, one would need a means of encrypting this “braid” and later deciphering it. Our work addresses this issue by proposing to use our optical framed knot as an encryption object for these programs which can later be recovered by the braid extraction method that we also introduced.”
“For the first time, these complicated 3-D structures have been exploited to develop new methods for the distribution of secret cryptographic keys. Moreover, there is a wide and strong interest in exploiting topological concepts in quantum computation, communication and dissipation-free electronics. Knots are described by specific topological properties too, which were not considered so far for cryptographic protocols.”
Rendition of the reconstructed structure of a framed trefoil knot generated within an optical beam. Credit: University of Ottawa
[…]
The paper “Optical framed knots as information carriers” was recently published in Nature Communications.
More information: Hugo Larocque et al, Optical framed knots as information carriers, Nature Communications (2020). DOI: 10.1038/s41467-020-18792-z
Owners of the brand-new Oculus Quest 2 – the first VR headset that requires a Facebook account to use – are finding themselves screwed out of their new purchases by Facebook’s account verification system.
As first reported by UploadVR this week, some Quest 2 owners are finding that Facebook’s reportedly AI-powered account verification system demands they upload a photo before they can log in. Others who had previously suspended their Facebook accounts are getting insta-banned upon reactivation, and reported they were subsequently unable to create a new account, or said they were locked out upon trying to merge their old Oculus usernames with their Facebook accounts. Facebook’s failure prompt gave users no way to appeal directly, essentially turning the $300 units into expensive bricks.
On the Oculus subreddit, one user reported that they had uploaded a photo ID to Facebook and received a response stating that “we have already reviewed this decision and it can’t be reversed.”
The Xplora 4 smartwatch, made by Chinese outfit Qihoo 360 Technology Co, and marketed to children under the Xplora brand in the US and Europe, can covertly take photos and record audio when activated by an encrypted SMS message, says Norwegian security firm Mnemonic.
This backdoor is not a bug, the finders insist, but a deliberate, hidden feature. Around 350,000 watches have been sold so far, Xplora says. Exploiting this security hole is non-trivial, we note, though it does reveal the kind of remotely accessible stuff left in the firmware of today’s gizmos.
“The backdoor itself is not a vulnerability,” said infosec pros Harrison Sand and Erlend Leiknes in a report on Monday. “It is a feature set developed with intent, with function names that include remote snapshot, send location, and wiretap. The backdoor is activated by sending SMS commands to the watch.”
The researchers suggest these smartwatches could be used to covertly capture photos from the built-in camera, track the wearer’s location, and conduct wiretapping via the built-in mic. They have not claimed any such surveillance has actually been done. The watches are marketed as a child’s first phone, we’re told, and thus contain a SIM card for connectivity (with an associated phone number). Parents can track the whereabouts of their offspring by using an app that finds the wearer of the watch.
Xplora contends the security issue is just unused code from a prototype and has now been patched. But the company’s smartwatches were among those cited by Mnemonic and Norwegian Consumer Council in 2017 for assorted security and privacy concerns.
Sand and Leiknes note in their report that while the Norwegian company Xplora Mobile AS distributes the Xplora watch line in Europe and, as of September, in the US, the hardware was made by Qihoo 360 and 19 of its 90 Android-based applications come from the Chinese company.
They also point out that in June, the US Department of Commerce placed the Chinese and UK business groups of Qihoo 360 on its Entities List, a designation that limits Qihoo 360’s ability to do business with US companies. US authorities claim, without offering any supporting evidence, that the company represents a potential threat to US national security.
In 2012, a report by a China-based civilian hacker group called Intelligent Defense Friends Laboratory accused Qihoo 360 of having a backdoor in its 360 secure browser [PDF].
In March, Qihoo 360 claimed that the US Central Intelligence Agency has been conducting hacking attacks on China for over a decade. Qihoo 360 did not immediately respond to a request for comment.
According to Mnemonic, the Xplora 4 contains a package called “Persistent Connection Service” that runs during the Android boot process and iterates through the installed apps to construct a list of “intents,” commands for invoking functionality in other apps.
With the appropriate Android intent, an incoming encrypted SMS message received by the Qihoo SMS app could be directed through the command dispatcher in the Persistent Connection Service to trigger an application command, like a remote memory snapshot.
Exploiting this backdoor requires knowing the phone number of the target device and its factory-set encryption key. This data is available to Qihoo and Xplora, according to the researchers, and can be pulled off the device physically using specialist tools. This basically means ordinary folks aren’t going to be hacked, either by the manufacturer under orders from Beijing or by opportunistic miscreants attacking gizmos in the wild, though it is an issue for persons of interest. It also highlights the kind of code left lingering in mass-market devices.
Apple’s T2 security chip is insecure and cannot be fixed, a group of security researchers report.
Over the past three years, a handful of hackers have delved into the inner workings of the custom silicon, fitted inside recent Macs, and found that they can use an exploit developed for iPhone jailbreaking, checkm8, in conjunction with a memory controller vulnerability known as blackbird, to compromise the T2 on macOS computers.
The primary researchers involved – @h0m3us3r, @mcmrarm, @aunali1 and Rick Mark (@su_rickmark) – expanded on the work @axi0mX did to create checkm8 and adapted it to target the T2, in conjunction with a group that built checkm8 into their checkra1n jailbreaking software. Mark on Wednesday published a timeline of relevant milestones.
The T2, which contains a so-called secure enclave processor (SEP) intended to safeguard Touch ID data, encrypted storage, and secure boot capabilities, was announced in 2017. Based on the Arm-compatible A10 processor used in the iPhone 7, the T2 first appeared in devices released in 2018, including MacBook Pro, MacBook Air, and Mac mini. It has also shown up in the iMac Pro and was added to the Mac Pro in 2019, and the iMac in 2020.
The checkm8 exploit, which targets a use-after-free vulnerability, allows an attacker to run unsigned code during recovery mode, or Device Firmware Update (DFU) mode. It has been modified to enable a tethered debug interface that can be used to subvert the T2 chip.
So with physical access to your T2-equipped macOS computer, and an appropriate USB-C cable and checkra1n 0.11, you – or a miscreant in your position – can obtain root access and kernel execution privileges on a T2-defended Mac. This allows you to alter macOS, load arbitrary kernel extensions, and expose sensitive data.
According to Belgian security biz ironPeak, it also means that firmware passwords and remote device locking capabilities, instituted via MDM or the FindMy app, can be undone.
Compromising the T2 doesn’t defeat macOS FileVault2 disk encryption, but it would allow someone to install a keylogger to obtain the encryption key, or to attempt to crack the key using a brute-force attack.
[…]
Unfortunately, it appears the T2 cannot be fixed. “Apple uses SecureROM in the early stages of boot,” explained Rick Mark in a blog post on Monday. “ROM cannot be altered after fabrication and is done so to prevent modifications. This usually prevents an attacker from placing malware at the beginning of the boot chain, but in this case also prevents Apple from fixing the SecureROM.”
Guardicore discovered a new attack vector on Comcast’s XR11 voice remote that would have allowed attackers to turn it into a listening device – potentially invading your privacy in your living room. Prior to its remediation by Comcast, the attack, dubbed WarezTheRemote, was a very real security threat: with more than 18 million units deployed across homes in the USA, the XR11 is one of the most widespread remote controls in existence.
WarezTheRemote used a man-in-the-middle attack to exploit the remote’s RF communication with the set-top box and its over-the-air firmware upgrades – by pushing a malicious firmware image back to the remote, attackers could have used the remote to continuously record audio without user interaction.
The attack did not require physical contact with the targeted remote or any interaction from the victim – any hacker with a cheap RF transceiver could have used it to take over an XR11 remote. Using a 16dBi antenna, we were able to listen to conversations happening in a house from about 65 feet away. We believe this could have been amplified easily using better equipment.
We worked with Comcast’s security team after finding the vulnerability and they have released fixes that remediate the issues that made the attack possible.
You can download our full research paper for the technical details of the WarezTheRemote project. You’ll find much more information on the reverse-engineering process inside, as well as a more bits-and-bytes perspective on the vulnerability and the exploit.
Smart Bluetooth male chastity lock, designed for user to give remote control to a trusted 3rd party using mobile app/API
Multiple API flaws meant anyone could remotely lock all devices and prevent users from releasing themselves
Removal then requires an angle grinder or similar, used in close proximity to delicate and sensitive areas
Precise user location data also leaked by API, including personal information and private chats
Vendor initially responsive, then missed three remediation deadlines they set themselves over a six-month period
Then finally refused to interact any further; the majority of issues were resolved in the migration to the v2 API, yet API v1 was inexcusably left available
We haven’t written about smart adult toys in a long time, but the Qiui Cellmate chastity cage was simply too interesting to pass by. We were tipped off about the adult chastity device, designed to lock up the wearer’s appendage.
There are other male chastity devices available but this is a Bluetooth (BLE) enabled lock and clamp type mechanism with a companion mobile app. The idea is that the wearer can give control of the lock to someone else.
We are not in the business of kink shaming. People should be able to use these devices safely and securely without the risk of sensitive personal data being leaked.
The security of the teledildonics field is interesting in its own right. It’s worth noting that sales of smart adult toys have risen significantly during the recent lockdown.
What is the risk to users?
We discovered that remote attackers could prevent the Bluetooth lock from being opened, permanently locking the user in the device. There is no physical unlock. The tube is locked onto a ring worn around the base of the genitals, making things inaccessible. An angle grinder or other suitable heavy tool would be required to cut the wearer free.
Location, plaintext password and other personal data were also leaked by the API, without any need for authentication.
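The class of test that uncovers this sort of hole is almost embarrassingly simple: call the API with no credentials at all and see what comes back. A sketch of the idea (endpoint and field names here are hypothetical, not Qiui’s actual API):

import requests   # pip install requests

# note: no Authorization header, no session cookie
r = requests.get("https://api.example.com/v1/member/12345", timeout=10)
print(r.status_code)
if r.ok:
    record = r.json()
    print(record.get("location"), record.get("password"))   # none of this should ever come back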
We had particular problems during the disclosure process, as we would usually ask the vendor to take down a leaky API whilst remediation was being implemented. However, anyone currently using the device when the API was taken offline would also be permanently locked in!
As you will see in the disclosure timeline at the bottom of this post, some issues were remediated but others were not, and the vendor simply stopped replying to us, journalists, and retailers. Given the trivial nature of finding some of these issues, and that the company is working on another device that poses even greater potential physical harm (an “internal” chastity device), we have felt compelled to publish these findings at this point.
Grindr, one of the world’s largest dating and social networking apps for gay, bi, trans, and queer people, has fixed a security vulnerability that allowed anyone to hijack and take control of any user’s account using only their email address.
Wassime Bouimadaghene, a French security researcher, found the vulnerability and reported the issue to Grindr. When he didn’t hear back, Bouimadaghene shared details of the vulnerability with security expert Troy Hunt to help.
The vulnerability was fixed a short time later.
Hunt tested and confirmed the vulnerability with help from a test account set up by Scott Helme, and shared his findings with TechCrunch.
Bouimadaghene found the vulnerability in how the app handles account password resets.
To reset a password, Grindr sends the user an email with a clickable link containing an account password reset token. Once clicked, the user can change their password and is allowed back into their account.
But Bouimadaghene found that Grindr’s password reset page was leaking password reset tokens to the browser. That meant anyone who knew a user’s registered email address could trigger a password reset and collect the reset token from the browser’s response, if they knew where to look.
Secret tokens used to reset Grindr account passwords, which are only supposed to be sent to a user’s inbox, were leaking to the browser. (Image: Troy Hunt/supplied)
The clickable link that Grindr generates for a password reset is formatted the same way, meaning a malicious user could easily craft their own clickable password reset link — the same link that was sent to the user’s inbox — using the leaked password reset token from the browser.
With that crafted link, the malicious user can reset the account owner’s password and gain access to their account and the personal data stored within, including account photos, messages, sexual orientation and HIV status and last test date.
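The defining property of a safe reset flow is that the token travels only over the out-of-band channel – the inbox – and never back to the browser that requested the reset. A minimal sketch of the pattern (Flask, with hypothetical field names):

import secrets
from flask import Flask, jsonify, request   # pip install flask

app = Flask(__name__)
tokens = {}   # in-memory stand-in for a real token store

def send_reset_email(email, token):
    print(f"(would email {email} a link containing {token})")

@app.route("/password/reset", methods=["POST"])
def reset():
    email = request.form["email"]
    token = secrets.token_urlsafe(32)
    tokens[email] = token
    send_reset_email(email, token)   # the token goes out-of-band only
    # The bug class: returning the token to the requesting browser, e.g.
    #   return jsonify({"ok": True, "resetToken": token})
    # lets anyone who knows an email address mint a working reset link.
    return jsonify({"ok": True})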
“This is one of the most basic account takeover techniques I’ve seen,” Hunt wrote.
A newly discovered technique by a researcher shows how Google’s App Engine domains can be abused to deliver phishing and malware while remaining undetected by leading enterprise security products.
Google App Engine is a cloud-based service platform for developing and hosting web apps on Google’s servers.
While reports of phishing campaigns leveraging enterprise cloud domains are nothing new, what makes Google App Engine infrastructure risky is how the subdomains get generated and paths are routed.
Practically unlimited subdomains for one app
Typically, scammers use cloud services to create a malicious app that gets assigned a subdomain. They then host phishing pages there, or they may use the app as a command-and-control (C2) server to deliver a malware payload.
But the URL structures are usually generated in a manner that makes them easy to monitor and block using enterprise security products, should there be a need.
For example, a malicious app hosted on Microsoft Azure services may have a URL structure like: https://example-subdomain.app123.web.core.windows.net/…
Therefore, a cybersecurity professional could block traffic to and from this particular app by simply blocking requests to and from this subdomain. This wouldn’t prevent communication with the rest of the Microsoft Azure apps that use other subdomains.
It gets a bit more complicated, however, in the case of Google App Engine.
Security researcher Marcel Afrahim demonstrated how an intended design feature of Google App Engine’s subdomain generator can be abused to use the app infrastructure for malicious purposes, all while remaining undetected.
Google’s appspot.com domain, which hosts apps, has the following URL structure:
https://VERSION-dot-SERVICE-dot-PROJECT_ID.REGION_ID.r.appspot.com
A subdomain, in this case, does not only represent an app; it encodes the app’s version, the service name, the project ID, and the region ID.
But the most important point to note here is that if any of those fields are incorrect, Google App Engine won’t show a 404 Not Found page; instead it shows the app’s “default” page (a concept referred to as soft routing).
“Requests are received by any version that is configured for traffic in the targeted service. If the service that you are targeting does not exist, the request gets Soft Routed,” states Afrahim, adding:
“If a request matches the PROJECT_ID.REGION_ID.r.appspot.com portion of the hostname, but includes a service, version, or instance name that does not exist, then the request is routed to the default service, which is essentially your default hostname of the app.”
Essentially, this means there are a lot of permutations of subdomains to get to the attacker’s malicious app. As long as every subdomain has a valid “project_ID” field, invalid variations of other fields can be used at the attacker’s discretion to generate a long list of subdomains, which all lead to the same app.
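A sketch of what that means in practice (Python, with a hypothetical project and region ID) – every one of these hostnames soft-routes to the same default service:

import itertools

project, region = "my-app-123456", "uc"   # hypothetical values
words = ["admin", "login", "mail", "secure", "v1", "v2"]

urls = [
    f"https://{ver}-dot-{svc}-dot-{project}.{region}.r.appspot.com/"
    for ver, svc in itertools.permutations(words, 2)
]
print(len(urls), "distinct-looking hostnames, all one app")
print(urls[:3])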
For example, as shown by Afrahim, both URLs below – which look drastically different – represent the same app hosted on Google App Engine.
“Verified by Google Trust Services” means trusted by everyone
The fact that a single malicious app is now represented by multiple permutations of its subdomains makes it hard for sysadmins and security professionals to block malicious activity.
But further, to a technologically unsavvy user, all of these subdomains would appear to be a “secure site.” After all, the appspot.com domain and all its subdomains come with the seal of “Google Trust Services” in their SSL certificates.
Google App Engine sites showing valid SSL certificate with “Verified by: Google Trust Services” text
Source: Afrahim
Even further, most enterprise security solutions such as Symantec WebPulse web filter automatically allow traffic to trusted category sites. And Google’s appspot.com domain, due to its reputation and legitimate corporate use cases, earns an “Office/Business Applications” tag, skipping the scrutiny of web proxies.
Automatically trusted by most enterprise security solutions
On top of that, the large number of subdomain variations renders blocking based on Indicators of Compromise (IOCs) useless.
A screenshot of a test app created by Afrahim along with a detailed “how-to” demonstrates this behavior in action.
In the past, Cloudflare’s domain generation had a similar design flaw that the Astaroth malware would exploit via the following command when fetching its stage 2 payload:
This would essentially launch a Windows command prompt and substitute a random number for %RANDOM%, making the payload URL truly dynamic.
“And now you have a script that downloads the payload from different URL hostnames each time is run and would render the network IOC of such hypothetical sample absolutely useless. The solutions that rely on single run on a sandbox to obtain automated IOC would therefore get a new Network IOC and potentially new file IOC if script is modified just a bit,” said the researcher.
Delivering malware via Google App Engine subdomain variations while bypassing IOC blocks
Actively exploited for phishing attacks
Security engineer and pentester Yusuke Osumi tweeted last week how a Microsoft phishing page hosted on an appspot.com subdomain was exploiting the design flaw Afrahim has detailed.
Osumi additionally compiled a list of over 2,000 subdomains generated dynamically by the phishing app—all of them leading to the same phishing page.
Active exploitation of Google App Engine subdomains in phishing attacks
Source: Twitter
This recent example has shifted the focus of discussion from how Google App Engine’s flaw can be potentially exploited to active phishing campaigns leveraging the design flaw in the wild.
“Use a Google Drive/Service phishing kit on Google’s App Engine and normal user would not just realize it is not Google which is asking for credentials,” concluded Afrahim in his blog post.
Twitter is notifying developers today about a possible security incident that may have impacted their accounts.
The incident was caused by incorrect instructions that the developer.twitter.com website sent to users’ browsers.
The developer.twitter.com website is the portal where developers manage their Twitter apps and attached API keys, but also the access token and secret key for their Twitter account.
In an email sent to developers today, Twitter said that its developer.twitter.com website told browsers to create and store copies of the API keys, account access token, and account secret inside their cache, a section of the browser where data is saved to speed up the process of loading the page when the user accessed the same site again.
This might not be a problem for developers using their own browsers, but Twitter is warning developers who may have used public or shared computers to access the developer.twitter.com website — in which case, their API keys are now most likely stored in those browsers.
“If someone who used the same computer after you in that temporary timeframe knew how to access a browser’s cache, and knew what to look for, it is possible they could have accessed the keys and tokens that you viewed,” Twitter said.
“Depending on what pages you visited and what information you looked at, this could have included your app’s consumer API keys, as well as the user access token and secret for your own Twitter account,” Twitter said.
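The standard fix for this class of bug is to mark any page that renders keys or tokens as uncacheable. A minimal sketch (Flask, hypothetical endpoint):

from flask import Flask, make_response   # pip install flask

app = Flask(__name__)

@app.route("/account/keys")
def show_keys():
    resp = make_response("consumer key: ...")   # sensitive content elided
    # tell browsers and shared caches never to store this response
    resp.headers["Cache-Control"] = "no-store, no-cache, must-revalidate"
    resp.headers["Pragma"] = "no-cache"         # for legacy HTTP/1.0 caches
    return resp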
Netgear has decided that users of some of its managed network switches don’t need access to the equipment’s full user interface – unless they register their details with Netgear first.
For instance, owners of its 64W Power-over-Ethernet eight-port managed gigabit switch GC108P, and its 126W variant GC108PP, need to hand over information about themselves to the Netgear Cloud to get full use out of the devices.
“Starting from firmware version 1.0.5.4, product registration is required to unlock full access to the local browser user interface,” said the manufacturer in a note on its website referencing a version released in April this year.
The latest build, 1.0.5.8, released last week, continues that registration requirement. These rules also appear to apply to a dozen or so models of Netgear’s kit, including its GS724TPP 24-port managed Ethernet switch.
“I recently bought a couple of Netgear Managed Switches for business, and in their datasheet they list local-only management as a feature. Only after they arrived we discovered that you only get limited functionality in the local-only management mode; you have to register the switches to your Netgear Cloud account to get access to the full functionality,” fumed one netizen on a Hacker News discussion thread. “I would not have bought the switches if I had known I needed to register them to Netgear Cloud to have access to the full functionality specified in the data sheet.”
It appears the Silicon Valley giant is aware that not everyone will rush to create a cloud account to manage their network hardware because it has published a list of functions that one can freely access without said registration – for now, anyway.
We’ve asked Netgear to explain the move. The manufacturer most recently made the headlines when, after being informed of a security flaw in a large number of product lines, it promptly abandoned half of them rather than issue a patch.
Professor Alan Woodward of the University of Surrey, England, opined: “It’s a conundrum because it is software and you do have only a licence to use it: you don’t own it so one might argue this helps protect intellectual property rights. However, that’s different for the hardware which is pretty useless without the software.”
Woodward pointed to Netgear’s online privacy policy, which, like every other company on the internet, states that data from customers and others can be hoovered up for marketing purposes, research and so on (see section 11).
Microsoft has released Sysmon 12, and it comes with a useful feature that logs and captures any data added to the Windows Clipboard.
This feature can help system administrators and incident responders track the activities of malicious actors who compromised a system.
For those not familiar with Sysmon, otherwise known as System Monitor: it is a Sysinternals tool that monitors Windows systems for malicious activity and logs it to the Windows event log.
Sysmon 12 adds clipboard capturing
With the release of Sysmon 12, users can now configure the utility to generate an event every time data is copied to the Clipboard. The Clipboard data is also saved to files that are only accessible to an administrator for later examination.
As most attackers will utilize the Clipboard when copying and pasting long commands, monitoring the data stored in the Clipboard can provide useful insight into how an attack was conducted.
Once downloaded, run Sysmon from an elevated command prompt, as it needs administrative privileges.
Simply running Sysmon.exe without any arguments will display a help screen, and for more detailed information, you can go to the Sysinternals’ Sysmon page.
Sysmon 12 help
Without any configuration, Sysmon will monitor basic events such as process creation and file time changes.
It is possible to configure it to log many other types of information by creating a Sysmon configuration file, which we will do to enable the new ‘CaptureClipboard’ directive.
For a very basic setup that will enable Clipboard logging and capturing, you can use the configuration file below:
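The following sketch is based on Sysmon’s documented schema (run sysmon -s to confirm the schema version your build expects):

<Sysmon schemaversion="4.40">
  <!-- save clipboard contents to the protected archive folder -->
  <CaptureClipboard/>
  <EventFiltering>
    <!-- an empty exclude rule logs every Clipboard Changed event (Event 24) -->
    <ClipboardChange onmatch="exclude"/>
  </EventFiltering>
</Sysmon>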
Configuration file enabling the CaptureClipboard feature
To start Sysmon and direct it to use the above configuration file, you would enter the following command from an elevated command prompt:
sysmon -i sysmon.cfg.xml
Once started, Sysmon will install its driver and begin collecting data quietly in the background.
All Sysmon events will be logged to ‘Applications and Services Logs/Microsoft/Windows/Sysmon/Operational‘ in the Event Viewer.
With the CaptureClipboard feature enabled, when data is copied into the Clipboard it will generate an ‘Event 24 – Clipboard Changed’ entry in Event Viewer, as shown below.
Event 24 – Clipboard Changed
The event log entry will display what process stored the data in the clipboard, the user who copied it, and when it was done. It will not, though, show the actual data that was copied.
The copied data is instead saved to the protected C:\Sysmon folder in files named CLIP-SHA1_HASH, where the hash is the one shown in the event above.
For example, the event displayed above would have the Clipboard contents stored in the C:\Sysmon\CLIP-CC849193D18FF95761CD8A702B66857F329BE85B file.
This C:\Sysmon folder is protected with a System ACL, and to access it, you need to download the psexec.exe program and launch a cmd prompt with System privileges using the following command:
psexec -sid cmd
After the new System command prompt is launched, you can go into the C:\Sysmon folder to access the saved Clipboard data.
Protected C:\Sysmon folder
When opening the CLIP-CC849193D18FF95761CD8A702B66857F329BE85B file, you can see that it contains a PowerShell command that I copied into the clipboard from Notepad.exe.
Capture Clipboard data
This PowerShell command is used to clear Shadow Volume Copies in Windows, which can be used by an attacker who wants to make it harder to restore deleted data.
Having this information illustrates how useful this feature can be when performing incident response.
Another useful feature added in Sysmon 11 will automatically create backups of deleted files, allowing administrators to recover files used in an attack.
Last month, Microsoft patched a very interesting vulnerability that would allow an attacker with a foothold on your internal network to essentially become Domain Admin with one click. All that is required is for a connection to the Domain Controller to be possible from the attacker’s viewpoint.
Secura’s security expert Tom Tervoort previously discovered a less severe Netlogon vulnerability last year that allowed workstations to be taken over, but the attacker required a Person-in-the-Middle (PitM) position for that to work. Now, he discovered this second, much more severe (CVSS score: 10.0) vulnerability in the protocol. By forging an authentication token for specific Netlogon functionality, he was able to call a function to set the computer password of the Domain Controller to a known value. After that, the attacker can use this new password to take control over the domain controller and steal credentials of a domain admin.
The vulnerability stems from a flaw in a cryptographic authentication scheme used by the Netlogon Remote Protocol, which among other things can be used to update computer passwords. This flaw allows attackers to impersonate any computer, including the domain controller itself, and execute remote procedure calls on their behalf.
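The cryptographic detail, per Secura’s whitepaper, is that Netlogon’s ComputeNetlogonCredential used AES-CFB8 with a fixed all-zero IV. In CFB8, if the first encrypted byte of the zero IV happens to be zero, every subsequent ciphertext byte is too, so roughly 1 key in 256 turns an all-zero plaintext into an all-zero ciphertext – which is what lets an attacker simply retry a few hundred times with zeroed inputs until authentication succeeds. A quick sketch that verifies the odds (Python with pycryptodome):

import os
from Crypto.Cipher import AES   # pip install pycryptodome

trials, hits = 100_000, 0
for _ in range(trials):
    key = os.urandom(16)
    cipher = AES.new(key, AES.MODE_CFB, iv=b"\x00" * 16, segment_size=8)
    if cipher.encrypt(b"\x00" * 8) == b"\x00" * 8:
        hits += 1
print(f"{hits}/{trials} all-zero ciphertexts (expect about 1 in 256)")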
Secura urges everybody to install the patch on all their domain controllers as fast as possible; please refer to Microsoft’s advisory. We published a test tool on GitHub (https://github.com/SecuraBV/CVE-2020-1472) that can tell you whether a domain controller is vulnerable or not.
If you are interested in the technical details behind this pretty unique vulnerability and how it was discovered, download the whitepaper here.
In August, security researcher Volodymyr Diachenko discovered a misconfigured Elasticsearch cluster, owned by gaming hardware vendor Razer, exposing customers’ PII (Personally Identifiable Information).
The cluster contained records of customer orders and included information such as item purchased, customer email, customer (physical) address, phone number, and so forth—basically, everything you’d expect to see from a credit card transaction, although not the credit card numbers themselves. The Elasticsearch cluster was not only exposed to the public, it was indexed by public search engines.
[…]
One of the things Razer is well-known for—aside from their hardware itself—is requiring a cloud login for just about anything related to that hardware. The company offers a unified configuration program, Synapse, which uses one interface to control all of a user’s Razer gear.
Until last year, Synapse would not function—and users could not configure their Razer gear, for example change mouse resolution or keyboard backlighting—without logging in to a cloud account. Current versions of Synapse allow locally stored profiles for off-Internet use and what the company refers to as “Guest mode” to bypass the cloud login.
Many gamers are annoyed by the insistence on a cloud account for hardware configuration that doesn’t seem to really be enhanced by its presence. Their pique is understandable, because the pervasive cloud functionality comes with cloud vulnerabilities. Over the last year, Razer awarded a single HackerOne user, s3cr3tsdn, 28 separate bounties.
We applaud Razer for offering and paying bug bounties, of course, but it’s difficult to forget that those vulnerabilities wouldn’t have been there (and globally exploitable), if Razer hadn’t tied their device functionality so thoroughly to the cloud in the first place.
The database built by Shenzhen Zhenhua from a variety of sources is technically complex, using very advanced language, targeting, and classification tools. Shenzhen Zhenhua claims to work with Chinese intelligence, military, and security agencies, which use the open information environment that we in open liberal democracies take for granted to target individuals and institutions. Our research broadly supports those claims.
The information specifically targets influential individuals and institutions across a variety of industries. From politics to organized crime, technology, and academia, to name a few, the database draws from sectors the Chinese state and linked enterprises are known to target.
The breadth of data is also staggering. It compiles information on everyone from key public individuals to low-level individuals in an institution, the better to monitor and understand how to exert influence when needed.
Compiling public and non-public personal and institutional data, Shenzhen Zhenhua has likely broken numerous laws in foreign jurisdictions. Claiming to partner with state intelligence and security services in China, Shenzhen Zhenhua operates collection centers in foreign countries that should be considered for investigation in those jurisdictions.
The personal details of millions of people around the world have been swept up in a database compiled by a Chinese tech company with reported links to the country’s military and intelligence networks, according to a trove of leaked data.
About 2.4 million people are included in the database, assembled mostly based on public open-source data such as social media profiles, analysts said. It was compiled by Zhenhua Data, based in the south-eastern Chinese city of Shenzhen.
Internet 2.0, a cybersecurity consultancy based in Canberra whose customers include the US and Australian governments, said it had been able to recover the records of about 250,000 people from the leaked dataset, including about 52,000 Americans, 35,000 Australians and nearly 10,000 Britons. They include politicians, such as prime ministers Boris Johnson and Scott Morrison and their relatives, the royal family, celebrities and military figures.
When contacted by the Guardian for comment, a representative of Zhenhua said: “The report is seriously untrue.”
“Our data are all public data on the internet. We do not collect data. This is just a data integration. Our business model and partners are our trade secrets. There is no database of 2 million people,” said the representative surnamed Sun, who identified herself as head of business.
“We are a private company,” she said, denying any links to the Chinese government or military. “Our customers are research organisations and business groups.”
Three “grumpy old hackers” in the Netherlands managed to access Donald Trump’s Twitter account in 2016 by extracting his password from the 2012 Linkedin hack.
The pseudonymous, middle-aged chaps, named only as Edwin, Mattijs and Victor, told reporters they had lifted Trump’s particulars from a database that was being passed around among hackers, and tried the password on his account.
To their considerable surprise, the password – but not the email address associated with @realdonaldtrump – worked the first time they tried it, with Twitter’s login process confirming the password was correct.
The explosive allegations were made by Vrij Nederland (VN), a Dutch magazine founded during WWII as part of the Dutch resistance to Nazi German occupation.
“A digital treasure chest with 120 million usernames and hashes of passwords. It was the spoil of a 2012 digital break-in,” wrote VN journalist Gerard Janssen, describing the LinkedIn database hack. After the networking website for suits was hacked in 2012 by a Russian miscreant, the database found its way onto the public internet in 2016 when researchers eagerly pored over the hashes. Critically, the leaked database included 6.5 million hashed but unsalted passwords.
Poring over the database, the trio found an entry for Trump as well as the hash of Trump’s password: 07b8938319c267dcdb501665220204bbde87bf1d. Using John the Ripper, a password-cracking tool, they were able to uncover one of the Orange One’s login credentials. Some considerable searching revealed the correct email address (twitter@donaldjtrump.com – a different one from the one Trump used on LinkedIn and which was revealed in the hack)… only for the middle-aged hackers to be defeated by Twitter detecting that the man who would become the 45th president of the United States had logged in earlier from New York.
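Unsalted SHA-1 is exactly why this worked: the same password always hashes to the same value, so a dictionary run is all it takes. A sketch of the approach (the wordlist file here is hypothetical):

import hashlib

leaked = "07b8938319c267dcdb501665220204bbde87bf1d"   # the hash from the dump
with open("wordlist.txt", encoding="utf-8") as wordlist:   # hypothetical dictionary file
    for candidate in wordlist:
        candidate = candidate.strip()
        if hashlib.sha1(candidate.encode()).hexdigest() == leaked:
            print("cracked:", candidate)
            break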
One open proxy server later, they were in.
VN published screenshots supplied by the three showing a browser seemingly logged into Trump’s Twitter account, displaying a tweet dating from 27 October 2016 referring to a speech Trump delivered in Charlotte, North Carolina, USA.
Despite trying to alert American authorities to just how insecure Trump’s account was (no multi-factor authentication, a password recycled from an earlier breach), the hackers’ efforts got nowhere – until, in desperation, they tried the Netherlands’ National Cyber Security Centrum, which acknowledged receipt of their prepared breach report. The increasingly concerned men had put that report together as soon as they realised their digital trail was not particularly well covered.
“In short, the grumpy old hackers must set a good example. And to do it properly with someone they ‘may not really like’ they think this is a good example of a responsible disclosure, the unsolicited reporting of a security risk,” concluded VN’s Janssen.
Professor Alan Woodward of the University of Surrey added: “It’s password hygiene 101: use a different password for each account. And, if you know a password has been compromised in a previous breach (I think LinkedIn is well known) then for goodness sake, don’t use that one. [This is] a textbook example of credential stuffing.”