Avast and AVG AntiTrack promised to protect your privacy. Instead, they opened you to miscreant-in-the-middle snooping

Web researcher David Eade found and reported CVE-2020-8987 to Avast: this is a trio of blunders that, when combined, can be exploited by a snooper to silently intercept and tamper with an AntiTrack user’s connections to even the most heavily secured websites.

This is because when using AntiTrack, your web connections are routed through its proxy software so that it can strip out tracking cookies and similar stuff, enhancing your privacy. However, when AntiTrack connects to websites on your behalf, it does not verify that it’s actually talking to the legit sites. Thus, a miscreant-in-the-middle, between AntiTrack and the website you wish to visit, can redirect your webpage requests to a malicious server that masquerades as the real deal, and harvest your logins or otherwise snoop on you, and you’d never know.
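
The core failure here is skipping TLS certificate validation when proxying. As a rough illustration (not AntiTrack’s actual code), the Python sketch below contrasts a request that verifies the upstream certificate with one that accepts anything, which is roughly the behaviour a broken proxy ends up with; it assumes the third-party requests package.

```python
# Minimal sketch (not AntiTrack's code): the difference between a client that
# validates the upstream TLS certificate and one that silently accepts anything.
import requests

def fetch_verified(url: str) -> int:
    # verify=True (the default) checks the server's certificate chain and
    # hostname, so a miscreant-in-the-middle presenting a forged cert fails.
    return requests.get(url, verify=True, timeout=10).status_code

def fetch_unverified(url: str) -> int:
    # verify=False is roughly what a broken proxy does: any certificate,
    # including an attacker's, is accepted without complaint.
    # (requests will emit an InsecureRequestWarning here, for good reason.)
    return requests.get(url, verify=False, timeout=10).status_code

if __name__ == "__main__":
    print(fetch_verified("https://example.com"))
```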

The flaws affect both the Avast and AVG versions of AntiTrack, and punters are advised to update their software as a fix for both tools has been released.

Eade has been tracking the bug since August last year.

“The consequences are hard to overstate. A remote attacker running a malicious proxy could capture their victim’s HTTPS traffic and record credentials for later re-use,” he said. “If a site needs two factor authentication (such as a one-time password), then the attacker can still hijack a live session by cloning session cookies after the victim logs in.”

Source: Avast’s AntiTrack promised to protect your privacy. Instead, it opened you to miscreant-in-the-middle snooping • The Register

FYI: When Virgin Media said it leaked ‘limited contact info’, it meant p0rno filter requests, IP addresses, IMEIs as well as names, addresses and more

In fact, the marketing database also contained some subscribers’ requests to block or unblock access to X-rated and gambling websites, unique ID numbers of stolen cellphones, and records of whichever site they were visiting before arriving at the Virgin Media website.

This is according to British infosec shop Turgensec, which discovered the poorly secured Virgin Media info silo and privately reported it to the broadband-and-TV-and-phone provider. The research team today said the data spill was more extensive, and more personal, than Virgin Media’s official disclosure seemed to suggest.

Here, in full, is what Turgensec said it found in the data cache that was exposed from mid-April to this month:

* Full names, addresses, date of birth, phone numbers, alternative contact phone numbers and IP addresses – corresponding to both customers and “friends” referred to the service by customers.

* Requests to block or unblock various pornographic, gore-related and gambling websites, corresponding to full names and addresses. IMEI numbers associated with stolen phones.

* Subscriptions to the different aspects of their services, including premium components.

* The device type owned by the user, where relevant.

* The “Referrer” header, seemingly taken from a user’s browser, containing what would appear to be the previous website that the user visited before accessing Virgin Media.

* Form submissions by users from their website.

Those website block and unblock requests were a result of Britain’s ruling class pressuring ISPs to implement filters to prevent kids viewing adult-only material via their parents’ home internet connections. The filters were also supposed to stop Brits from seeing any particularly nasty unlawful content.

Virgin Media today stressed the database held about a thousand subscribers’ filter request inquiries.

Source: FYI: When Virgin Media said it leaked ‘limited contact info’, it meant p0rno filter requests, IP addresses, IMEIs as well as names, addresses and more • The Register

Hackers Can Clone Millions of Toyota, Hyundai, and Kia Keys

Over the past few years, owners of cars with keyless start systems have learned to worry about so-called relay attacks, in which hackers exploit radio-enabled keys to steal vehicles without leaving a trace. Now it turns out that many millions of other cars that use chip-enabled mechanical keys are also vulnerable to high-tech theft. A few cryptographic flaws combined with a little old-fashioned hot-wiring—or even a well-placed screwdriver—let hackers clone those keys and drive away in seconds.

Researchers from KU Leuven in Belgium and the University of Birmingham in the UK earlier this week revealed new vulnerabilities they found in the encryption systems used by immobilizers, the radio-enabled devices inside of cars that communicate at close range with a key fob to unlock the car’s ignition and allow it to start. Specifically, they found problems in how Toyota, Hyundai, and Kia implement a Texas Instruments encryption system called DST80. A hacker who swipes a relatively inexpensive Proxmark RFID reader/transmitter device near the key fob of any car with DST80 inside can gain enough information to derive its secret cryptographic value. That, in turn, would allow the attacker to use the same Proxmark device to impersonate the key inside the car, disabling the immobilizer and letting them start the engine.
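
The practical problem is that the effective key space is reportedly far smaller than DST80’s 80-bit key length suggests, which makes exhaustive search feasible once a challenge-response pair has been captured. The toy sketch below is not DST80; the transponder response function and the 24-bit key size are illustrative stand-ins, used purely to show how quickly a small key space falls to brute force.

```python
# Illustration only: why a small effective key space dooms an immobilizer.
# Toy model, not DST80; the "transponder" response function is a stand-in.
import hashlib, os, time

KEY_BITS = 24  # illustrative: a key space of a few million values

def toy_response(key: int, challenge: bytes) -> bytes:
    # Stand-in for the transponder's keyed challenge-response function.
    return hashlib.sha256(key.to_bytes(10, "big") + challenge).digest()[:5]

def brute_force(challenge: bytes, observed: bytes) -> int:
    # Exhaust every possible key; 2**24 guesses take well under a minute
    # in plain Python, and are essentially instant on dedicated hardware.
    for guess in range(1 << KEY_BITS):
        if toy_response(guess, challenge) == observed:
            return guess
    raise ValueError("key not found")

if __name__ == "__main__":
    secret = int.from_bytes(os.urandom(3), "big")  # a random 24-bit secret
    chal = os.urandom(4)
    start = time.time()
    recovered = brute_force(chal, toy_response(secret, chal))
    print(f"recovered key {recovered:#08x} in {time.time() - start:.1f}s")
```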

The researchers say the affected car models include the Toyota Camry, Corolla, and RAV4; the Kia Optima, Soul, and Rio; and the Hyundai I10, I20, and I40. The full list of vehicles that the researchers found to have the cryptographic flaws in their immobilizers is below:

A list of the cars the researchers say are vulnerable to their immobilizer-disabling attack. Although the list includes the Tesla S, Tesla pushed out an update last year to address the vulnerability. Courtesy of University of Birmingham and KU Leuven

Though the list also includes the Tesla S, the researchers reported the DST80 vulnerability to Tesla last year, and the company pushed out a firmware update that blocked the attack.

Toyota has confirmed that the cryptographic vulnerabilities the researchers found are real. But their technique likely isn’t as easy to pull off as the “relay” attacks that thieves have repeatedly used to steal luxury cars and SUVs. Those generally require only a pair of radio devices to extend the range of a key fob to open and start a victim’s car. You can pull them off from a fair distance, even through the walls of a building.

Source: Hackers Can Clone Millions of Toyota, Hyundai, and Kia Keys | WIRED

More than one billion Android devices at risk of malware threats, no longer being updated

Based on Google data, two in five Android users worldwide may no longer be receiving updates, and while these devices won’t immediately have problems, without security support there is an increased risk to the user.

Our latest tests have shown how such phones and tablets, including handsets still available to buy from online marketplaces such as Amazon, could be affected by a range of malware and other threats. This could result in personal data being stolen, getting spammed by ads or even signed up to a premium rate phone service.

[…]

Generally speaking, the older the phone, the greater the risk. With the Android versions released in the past five years (Android 5.0 to 10.0), Google put more effort into enhancing security and privacy to give the user greater protection, transparency and control over their data. But smartphones can still be an attractive target, and it’s important to be aware of the threat.

Based on Google’s own data from May 2019, 42.1% of Android active users worldwide are on version 6.0 or earlier: Marshmallow (2015), Lollipop (2014), KitKat (2013), Jellybean (2012), Ice Cream Sandwich (2011) and Gingerbread (2010).

According to the Android Security Bulletin, there were no security patches issued for the Android system in 2019 that targeted Android versions below 7.0 Nougat.

That means more than one billion phones and tablets may be active around the world that are no longer receiving security updates.
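
If you want to check where a particular handset stands, the Android version and last security patch level are exposed as system properties. A minimal sketch, assuming the adb platform tool is installed and USB debugging is enabled on the device:

```python
# Quick check of a connected device: read the OS version and the last security
# patch level to see whether it falls into the no-longer-patched group.
# Assumes adb (Android platform tools) is installed and a device is attached.
import subprocess

def getprop(name: str) -> str:
    out = subprocess.run(["adb", "shell", "getprop", name],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

if __name__ == "__main__":
    release = getprop("ro.build.version.release")       # e.g. "6.0.1"
    patch = getprop("ro.build.version.security_patch")  # present on Android 6+
    major = int(release.split(".")[0])
    print(f"Android {release}, last security patch {patch or 'unknown'}")
    if major < 7:
        print("Below Android 7.0: no longer receiving OS security patches.")
```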

[…]

We tasked expert antivirus lab AV Comparatives with trying to infect the phones with malware, and it managed it on every one, including multiple infections on some.

All the Android phones we used in our test lacked the more modern security features Google introduced with Android 9.0 and 10.

Source: More than one billion Android devices at risk of malware threats – Which? News

Virgin broadband ISP spills 900,000 punters’ records into wrong hands from insecure database

Virgin Media, one of the UK’s biggest ISPs, on Thursday admitted it accidentally spilled 900,000 of its subscribers’ personal information onto the internet via a poorly secured database.

The cableco said it “incorrectly configured” a storage system so that at least one miscreant was able to access it and potentially siphon off customer records. The now-secured marketing database – containing names, home and email addresses, and phone numbers, and some dates of birth, plus other info – had been left open since mid-April 2019.

Crucially, the information “was accessed on at least one occasion but we do not know the extent of the access,” Virgin Media’s CEO Lutz Schüler said in a statement this evening. Said access, we speculate, could have been from an automated bot scanning the internet, or someone prowling around looking for open gear; at this stage, we don’t know.

In a separate email to subscribers, shared with El Reg by dozens of readers, the telco expanded: “The database was used to manage information about our existing and potential customers in relation to some of our marketing activities. This included: contact details (such as name, home and email address and phone numbers), technical and product information, including any requests you may have made to us using forms on our website. In a very small number of cases, it included date of birth.”

The storage box, we understand, not only contained Virgin Media broadband and fixed-line subscriber records – some 15 per cent of that total customer base – but also info on some cellular users. If a punter referred a friend to Virgin Media, that pal’s details may be in the silo, too.

Source: Like a Virgin, hacked for the very first time… UK broadband ISP spills 900,000 punters’ records into wrong hands from insecure database • The Register

Enable MFA: 1.2 million Azure Active Directory (Office 365) accounts compromised every month, reckons Microsoft

Microsoft reckons 0.5 per cent of Azure Active Directory accounts as used by Office 365 are compromised every month.

The Windows giant’s director of identity security, Alex Weinert, and IT identity and access program manager Lee Walker revealed the figures at the RSA conference last month in San Francisco.

“About a half of a per cent of the enterprise accounts on our system will be compromised every month, which is a really high number. If you have an organisation of 10,000 users, 50 will be compromised each month,” said Weinert.

It is an astonishing and disturbing figure. Account compromise means that a malicious actor or script has some access to internal resources, though the degree of compromise is not stated. The goal could be as simple as sending out spam or, more seriously, stealing secrets and trying to escalate access.

Password spray attacks account for 40% of compromised accounts

How do these attacks happen? About 40 per cent are what Microsoft calls password spray attacks. Attackers use a database of usernames and try logging in with statistically probable passwords, such as “123” or “p@ssw0rd”. Most fail but some succeed. A further 40 per cent are password replay attacks, where attackers mine data breaches on the assumption that many people reuse passwords, including their enterprise passwords, in non-enterprise environments. That leaves 20 per cent for other kinds of attacks like phishing.
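
From the defender’s side, a spray has a recognisable shape in sign-in logs: one source touching many distinct accounts with only a failure or two each, rather than hammering a single account. A minimal sketch of that heuristic follows; the log format and threshold are illustrative, not tied to Azure AD’s actual telemetry.

```python
# Defensive sketch: flag source IPs whose failed sign-ins touch many distinct
# accounts (the typical footprint of a password spray). Thresholds are examples.
from collections import defaultdict

def flag_spray_sources(failed_logins, min_accounts=20):
    """failed_logins: iterable of (source_ip, username) for failed sign-ins."""
    accounts_per_ip = defaultdict(set)
    for ip, user in failed_logins:
        accounts_per_ip[ip].add(user)
    return {ip: len(users) for ip, users in accounts_per_ip.items()
            if len(users) >= min_accounts}

if __name__ == "__main__":
    sample = [("203.0.113.9", f"user{i}") for i in range(50)] + \
             [("198.51.100.7", "alice")] * 3
    print(flag_spray_sources(sample))  # only 203.0.113.9 is flagged
```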

The key point, though, is that if an account is compromised, said Weinert, “there’s a 99.9 per cent chance that it did not have MFA [Multi Factor Authentication]”. MFA is where at least one additional identifier is required when logging in, such as a code on an authenticator application or a text message to a mobile phone. It is also possible (and preferable) to use FIDO2 security keys, a feature now in preview for Azure AD. Even just disabling legacy authentication helps, with a 67 per cent reduction in the likelihood of compromise.
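
For the “code from an authenticator app” flavour of MFA that Weinert describes, the underlying mechanism is usually TOTP (RFC 6238). The sketch below uses the third-party pyotp library to show enrolment and verification in the abstract; it is generic TOTP, not Azure AD’s implementation.

```python
# Generic TOTP enrolment and verification using the third-party pyotp library.
import pyotp

# Enrolment: generate a per-user secret and hand it to the authenticator app,
# usually as a QR code built from the provisioning URI.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:", totp.provisioning_uri(name="alice@example.com",
                                                 issuer_name="ExampleCorp"))

# Login: the user types the 6-digit code currently shown by their app;
# verify() allows a small clock-drift window via valid_window.
code = totp.now()  # stand-in for the code the user would enter
print("Accepted:", totp.verify(code, valid_window=1))
```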

Source: Enable that MF-ing MFA: 1.2 million Azure Active Directory accounts compromised every month, reckons Microsoft • The Register

Unfixable vulnerability in Intel CSME allows crypto key stealing and local access to files

An error in chipset read-only memory (ROM) could allow attackers to compromise platform encryption keys and steal sensitive information.

Intel has thanked Positive Technologies experts for their discovery of a vulnerability in Intel CSME. Most Intel chipsets released in the last five years contain the vulnerability in question.

By exploiting vulnerability CVE-2019-0090, a local attacker could extract the chipset key stored on the PCH microchip and obtain access to data encrypted with the key. Worse still, it is impossible to detect such a key breach. With the chipset key, attackers can decrypt data stored on a target computer and even forge its Enhanced Privacy ID (EPID) attestation, or in other words, pass off an attacker computer as the victim’s computer. EPID is used in DRM, financial transactions, and attestation of IoT devices.

One of the researchers, Mark Ermolov, Lead Specialist of OS and Hardware Security at Positive Technologies, explained: “The vulnerability resembles an error recently identified in the BootROM of Apple mobile platforms, but affects only Intel systems. Both vulnerabilities allow extracting users’ encrypted data. Here, attackers can obtain the key in many different ways. For example, they can extract it from a lost or stolen laptop in order to decrypt confidential data. Unscrupulous suppliers, contractors, or even employees with physical access to the computer can get hold of the key. In some cases, attackers can intercept the key remotely, provided they have gained local access to a target PC as part of a multistage attack, or if the manufacturer allows remote firmware updates of internal devices, such as Intel Integrated Sensor Hub.”

The vulnerability potentially allows compromising common data protection technologies that rely on hardware keys for encryption, such as DRM, firmware TPM, and Intel Identity Protection. For example, attackers can exploit the vulnerability on their own computers to bypass content DRM and make illegal copies. This vulnerability in ROM also allows arbitrary code execution at the zero privilege level of Intel CSME. No firmware updates can fix the vulnerability.

Intel recommends that users of Intel CSME, Intel SPS, Intel TXE, Intel DAL, and Intel AMT contact their device or motherboard manufacturer for microchip or BIOS updates to address the vulnerability. Check the Intel website for the latest recommendations on mitigation of vulnerability CVE-2019-0090.

Since it is impossible to fully fix the vulnerability by modifying the chipset ROM, Positive Technologies experts recommend disabling Intel CSME-based encryption of data storage devices or considering migration to tenth-generation or later Intel CPUs. In this context, retrospective detection of infrastructure compromise with the help of traffic analysis systems such as PT Network Attack Discovery becomes just as important.

Source: Positive Technologies: Unfixable vulnerability in Intel chipsets threatens users and content rightsholders

EU Commission to staff: Switch to Signal messaging app

The European Commission has told its staff to start using Signal, an end-to-end-encrypted messaging app, in a push to increase the security of its communications.

The instruction appeared on internal messaging boards in early February, notifying employees that “Signal has been selected as the recommended application for public instant messaging.”

The app is favored by privacy activists because of its end-to-end encryption and open-source technology.

“It’s like Facebook’s WhatsApp and Apple’s iMessage but it’s based on an encryption protocol that’s very innovative,” said Bart Preneel, cryptography expert at the University of Leuven. “Because it’s open-source, you can check what’s happening under the hood,” he added.

[…]

Privacy experts consider that Signal’s security is superior to other apps’. “We can’t read your messages or see your calls,” its website reads, “and no one else can either.”

[…]

The use of Signal was mainly recommended for communications between staff and people outside the institution. The move to use the application shows that the Commission is working on improving its security policies.

Promoting the app, however, could antagonize the law enforcement community.

Officials in Brussels, Washington and other capitals have been putting strong pressure on Facebook and Apple to give government agencies access to encrypted messages; if the companies refuse, legal requirements could be introduced to force them to do just that.

American, British and Australian officials published an open letter to Facebook CEO Mark Zuckerberg in October, asking that he call off plans to encrypt the company’s messaging service. Dutch Minister for Justice and Security Ferd Grapperhaus told POLITICO last April that the EU needs to look into legislation allowing governments to access encrypted data.

Cybersecurity officials have dismissed calls to weaken encryption for decades, arguing that it would put the confidentiality of communications at risk across the board.

Source: EU Commission to staff: Switch to Signal messaging app – POLITICO

Finally, an organisation showing some sense!

Wi-Fi of more than a billion PCs, phones, gadgets can be snooped on. But you’re using HTTPS, SSH, VPNs… right?

A billion-plus computers, phones, and other devices are said to suffer a chip-level security vulnerability that can be exploited by nearby miscreants to snoop on victims’ encrypted Wi-Fi traffic.

The flaw [PDF] was branded KrØØk by the bods at Euro infosec outfit ESET who discovered it. The design blunder is otherwise known as CVE-2019-15126, and is related to 2017’s KRACK technique for spying on Wi-Fi networks.

An eavesdropper doesn’t have to be logged into the target device’s wireless network to exploit KrØØk. If successful, the miscreant can take repeated snapshots of the device’s wireless traffic as if it were on an open and insecure Wi-Fi. These snapshots may contain things like URLs of requested websites, personal information in transit, and so on.
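
ESET’s write-up attributes the bug to affected chips continuing to transmit buffered frames encrypted under an all-zero session key after a disassociation. As a concept demo of why that is fatal, the sketch below encrypts and then trivially decrypts a payload with AES-CCM (the cipher behind WPA2’s CCMP) under a zeroed key; it is not an 802.11 frame parser, and it assumes the third-party pycryptodome package.

```python
# Rough illustration: an all-zero session key is as good as no encryption,
# because anyone who knows the key is zeroed can decrypt the traffic.
from Crypto.Cipher import AES  # pycryptodome

ZERO_TK = bytes(16)  # the all-zero temporal key
nonce = bytes.fromhex("00112233445566778899aabbcc")  # example 13-byte nonce

# "Victim" side: a buffered frame encrypted after the key was zeroed out.
enc = AES.new(ZERO_TK, AES.MODE_CCM, nonce=nonce, mac_len=8)
ciphertext, tag = enc.encrypt_and_digest(b"GET /login?user=alice HTTP/1.1")

# "Eavesdropper" side: no secret needed beyond knowing the key is all zeros.
dec = AES.new(ZERO_TK, AES.MODE_CCM, nonce=nonce, mac_len=8)
print(dec.decrypt_and_verify(ciphertext, tag))
```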

It’s not something to be totally freaking out over: someone exploiting this has to be physically near you, and you may notice your Wi-Fi being disrupted. But it’s worth knowing about.

Source: Wi-Fi of more than a billion PCs, phones, gadgets can be snooped on. But you’re using HTTPS, SSH, VPNs… right? • The Register

Your banks’ APIs are a major target for credential stuffing attacks

Automated connections from third-party providers make it easy for attackers to get at your financial data, because people re-use their logins and those logins have been leaked online over and over again.

New data from security and content delivery company Akamai shows that one in every five attempts to gain unauthorized access to user accounts is now done through application programming interfaces (APIs) instead of user-facing login pages. This trend is even more pronounced in the financial services industry where the use of APIs is widespread and in part fueled by regulatory requirements.

According to a report released today, between December 2017 and November 2019, Akamai observed 85.4 billion credential abuse attacks against companies worldwide that use its services. Of those attacks, around 16.5 billion, or nearly 20%, targeted hostnames that were clearly identified as API endpoints. However, in the financial industry, the percentage of attacks that targeted APIs rose sharply between May and September 2019, at times reaching 75%.

“API usage and widespread adoption have enabled criminals to automate their attacks,” the company said in its report. “This is why the volume of credential stuffing incidents has continued to grow year over year, and why such attacks remain a steady and constant risk across all market segments.”

The credential stuffing problem

Credential stuffing, a type of brute-force attack where criminals use lists of leaked username and password combinations to gain access to accounts, has become a major problem in recent years. This is a consequence of the large number of data breaches over the past decade that have resulted in billions of stolen credentials being released publicly on the internet or sold on underground markets as commodities.

Knowing that users reuse passwords across various websites, attackers have used the credentials exposed in data breaches to build so-called combo lists. These lists of username and password combinations are then loaded into botnets or automated tools and are used to flood websites with login requests in an attempt to gain access.
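
One practical countermeasure on the defender’s side is to refuse passwords that already appear in breach corpora. The sketch below, assuming the third-party requests package, uses the Pwned Passwords range API: only the first five hex characters of the password’s SHA-1 hash ever leave the machine (k-anonymity), and the response is a list of matching hash suffixes with breach counts.

```python
# Check whether a password appears in public breach data before accepting it.
# Only a 5-character SHA-1 prefix is sent to the API, never the password.
import hashlib
import requests

def breach_count(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}",
                        timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(breach_count("p@ssw0rd"))  # non-zero: seen in breaches, reject it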

However, once access is gained, extracting information from the affected services by crawling the customer pages requires some effort and customization, whereas requesting and extracting information through APIs is standardized and well suited for automation. After all, the very purpose of an API is to facilitate applications talking to each other and exchanging data automatically.

Source: APIs are becoming a major target for credential stuffing attacks

Clearview AI, Creepy Facial Recognition Company That Stole Your Pictures from Social Media, Says Entire Client List Was Stolen by Hackers

A facial-recognition company that contracts with powerful law-enforcement agencies just reported that an intruder stole its entire client list, according to a notification the company sent to its customers.

In the notification, which The Daily Beast reviewed, the startup Clearview AI disclosed to its customers that an intruder “gained unauthorized access” to its list of customers, to the number of user accounts those customers had set up, and to the number of searches its customers have conducted. The notification said the company’s servers were not breached and that there was “no compromise of Clearview’s systems or network.” The company also said it fixed the vulnerability and that the intruder did not obtain any law-enforcement agencies’ search histories.

Source: Clearview AI, Facial Recognition Company That Works With Law Enforcement, Says Entire Client List Was Stolen

All that Samsung users found on UK website after weird Find my Mobile push notification was… other people’s details

In the early hours of this morning, a very large number of Samsung devices around the world received a push notification from the vendor’s Find my Mobile app. That notification simply read “1/1”.

[…]

A handful of Reg staffers also received the notification, which caused surprise and concern at Vulture Central – not least because Find my Mobile is disabled on two of those devices.

A pre-installed Samsung OEM app regarded by some as bloatware, Find my Mobile cannot be fully uninstalled unless you wipe the entire phone and flash a third-party ROM – a profoundly technical process that, on modern Samsung devices, requires the user to unlock the bootloader.

[…]

Ominously, some Register readers who received the unwanted notification immediately assumed the worst and went into their accounts to change their Samsung passwords only to be confronted by other people’s personal data on the Samsung UK website.

One told us that after seeing other people’s names, addresses and phone numbers displayed in his Samsung Account after logging in using his own details, he phoned the Samsung helpdesk. Our reader said: “When I explained to [the call centre worker] what I saw, he said, ‘Yes, we’ve had a few reports of that this morning’.”

Mark showed us screenshots he had taken, showing himself logged in and someone else’s details being displayed as if they were associated with his account.

Source: All that Samsung users found on UK website after weird Find my Mobile push notification was… other people’s details • The Register

Shipping is so insecure we could have driven off in an oil rig, says Pen Test Partners

Penetration testers looking at commercial shipping and oil rigs discovered a litany of security blunders and vulnerabilities – including one set that would have let them take full control of a rig at sea.

Pen Test Partners (PTP), an infosec consulting outfit that specialises in doing what its name says, reckoned that on the whole, not many maritime companies understand the importance of good infosec practices at sea. The most eye-catching finding from PTP’s year of maritime pentesting was that its researchers could have gained a “full compromise” of a deep sea drilling rig, as used for oil exploration.

PTP’s Ken Munro explained, when The Register asked the obvious question, that this meant “stop engine, fire up thrusters (dynamic positioning system), change rudder position, mess around with navigation, brick systems, switch them off, you name it.”

The firm’s Nigel Hearne explained that many maritime tech vendors have a “variable” approach to security.

Making heavy use of the word “poor” to summarise what he had seen over the past year, Hearne wrote that he and his colleagues had examined everything from the aforementioned deep-water exploration and drilling rig to a brand new cruise ship and a Panamax container vessel, plus a few others in between.

Munro also published a related blog post this week.

Among other things the team found were clandestine Wi-Fi access points in non-Wi-Fi areas of ships (“they want to stream tunes/video in a work area that they can’t get crew Wi-Fi in,” said Munro), and crews bridging designed gaps between ships’ engineering control systems and human interface systems.

Why were seafarers doing something that seems so obviously silly to an infosec-minded person? Munro told us: “Someone needs to administrate or monitor systems from somewhere else in the vessel, saving a long walk. Ships are big!”

Another potential explanation proffered by Munro could apply to cruise ship crews, where Wi-Fi is generally a paid-for, metered commodity: “Their personal satellite data allowance has been used up, so they put a rogue Wi-Fi AP on to the ship’s business network where there are no limits.”

A Panamax vessel (the largest size of ship that can pass through the Panama Canal, the vital central American shipping artery between the Atlantic and Pacific) can be up to 294 metres (PDF, page 8 gives the measurements) from stem to stern. A crew member needing to move from, say, bow thruster to main machinery control room in the aft part of the ship and back again will spend significant amounts of time doing so. It’s far easier to jury-rig remote access than do all that walking.

PTP also found that old infosec chestnut, default and easy-to-guess passwords – along with a smattering of stickers on PCs with passwords in plaintext.

Default passwords aboard ships. Pic: Pen Test Partners

“One of the biggest surprises (not that I should have been at all surprised in hindsight) is the number of installations we still find running default credentials – think admin/admin or blank/blank – even on public facing systems,” sighed Hearne, detailing all the systems he found that were using default creds – including an onboard CCTV system.

The pentesters also found “hard-coded credentials” embedded in critical items including a ship’s satcom (satellite comms mast) unit, potentially allowing anyone aboard the ship to log in and piggyback off the owners’ paid-for internet connection – or to cut it off.

Source: Shipping is so insecure we could have driven off in an oil rig, says Pen Test Partners • The Register

Facebook was repeatedly warned of security flaw that led to biggest data breach in its history

Facebook knew about a huge security flaw that let hackers steal personal data from millions of its users almost one year before the crime, yet failed to fix it in time, the Telegraph can reveal.

Legal documents show that the company was repeatedly warned by its own employees as well as outsiders about a dangerous loophole that eventually led to the massive data breach in September 2018.

Despite this, the loophole remained open for nine months after it was first raised, leading employees to later speak of their “guilt” and “hurt” at knowing that the attack “could have been prevented”.

The breach, which involved stealing digital “access tokens” used by Facebook to verify users’ identity without needing their passwords, exposed the names, phone numbers and email addresses of 29 million people and a host of more intimate data for 14 million of them, putting users around the world at risk of identity theft….

Source: Facebook was repeatedly warned of security flaw that led to biggest data breach in its history

Plastic surgery images and invoices leak from unsecured database

Thousands of images, videos and records pertaining to plastic surgery patients were left on an unsecured database where they could be viewed by anyone with the right IP address, researchers said Friday. The data included about 900,000 records, which researchers say could belong to thousands of different patients.

The data was generated at clinics around the world using software made by French imaging company NextMotion. Images in the database included before-and-after photos of cosmetic procedures. Those photos often contained nudity, the researchers said. Other records included images of invoices that contained information that would identify a patient. The database is now secured.

Researchers Noam Rotem and Ran Locar found the exposed database. They published their research with vpnMentor, a security website that rates VPN services and earns commissions when readers make purchases. Rotem said he sees exposed health care databases all too often as part of his web-mapping project, which looks for exposed data.

“The state of privacy protection, especially in health care, is really abysmal,” Rotem said.

NextMotion, which says on its website that it has 170 clinics as customers in 35 countries, said in a statement to its clients that it had addressed the problem. "We immediately took corrective steps and this same company formally guaranteed that the security flaw had completely disappeared," said NextMotion CEO Emmanuel Elard in the statement. "This incident only reinforced our ongoing concern to protect your data and your patients' data when you use the Nextmotion application."

Elard went on to apologize for the "fortunately minor incident."

While NextMotion said the photos and videos don’t include names or other identifying information, many of the images show patients’ faces, according to vpnMentor. Some of the invoices detail the types of procedures patients received, such as acne scar removal and abdominoplasty, and contain patients’ names and other identifying information.

Source: Plastic surgery images and invoices leak from unsecured database – CNET

Apple’s Mac computers now outpace Windows in malware and virus

Think your Apple product is safe from malware? That only people using Windows machines have to take precautions? According to cybersecurity software company Malwarebytes’ latest State of Malware report, it’s time to think again. The amount of malware on Macs is outpacing PCs for the first time ever, and your complacency could be your worst enemy.

“People need to understand that they’re not safe just because they’re using a Mac,” Thomas Reed, Malwarebytes’ director of Mac and mobile and contributor to the report, told Recode.

Windows machines still dominate the market share and tend to have more security vulnerabilities, which has for years made them the bigger and easier target for hackers. But as Apple’s computers have grown in popularity, hackers appear to be focusing more of their attention on the versions of macOS that power them. Malwarebytes said there was a 400 percent increase in threats on Mac devices from 2018 to 2019, and found an average of 11 threats per Mac device, which is about twice the 5.8 average on Windows.

“There is a rising tide of Mac threats hitting a population that still believes that ‘Macs don’t get viruses,’” Reed said. “I still frequently encounter people who firmly believe this, and who believe that using any kind of security software is not necessary, or even harmful. This makes macOS a fertile ground for the influx of new threats, whereas it’s common knowledge that Windows PCs need security software.”

Now, this isn’t quite as bad as it may appear. First of all, as Malwarebytes notes, the increase in threats could be attributable to an increase in Mac devices running its software. That makes the per-device statistic a better barometer. In 2018, there were 4.8 threats per Mac device, which means the per-device number has more than doubled. That’s not great, but it’s not as bad as that 400 percent increase.

Source: Apple’s Mac computers now outpace Windows in malware and virus – Vox

Tens of millions of biz Dell PCs smacked by privilege-escalation bug in bundled troubleshooting tool

Dell has copped to a flaw in SupportAssist – a Windows-based troubleshooting program preinstalled on nearly every one of its newer devices running the OS – that allows local hackers to load malicious files with admin privileges.

The company has issued an advisory about the flaw, warning that a locally authenticated low-privilege user could exploit the vuln to make the SupportAssist binaries load arbitrary DLLs, resulting in the privileged execution of malware.

SupportAssist scans the system’s hardware and software, and when an issue is detected, it sends the necessary system state information to Dell for troubleshooting to begin.

This type of vulnerability is fairly common, but typically requires admin privileges to exploit, so isn’t generally considered a serious security threat. But Cyberark’s Eran Shimony, who discovered the bug, said that in this case, SupportAssist attempts to load a DLL from a directory that a regular (non-admin) user can write into.

“Therefore, a malicious non-privileged user can write a DLL that would be loaded by DellSupportAssist, effectively gaining code execution inside software that runs with NT AUTHORITY\System privileges,” Shimony told The Reg.

“This is because you can write a code entry inside a function called DLLMain (in the malicious DLL) that would be called immediately upon loading. This code piece would run in the privilege level of the host process.”

The flaw (CVE-2020-5316), which has a severity rating of “high”, affects Dell SupportAssist for business PCs version 2.1.3 or earlier and for home PCs version 3.4 or earlier.

Business users need to update to version 2.1.4, and home desk jockeys should roll over to version 3.4.1, to get the fixes.
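
A trivial way to check whether an installed build falls in the affected range, using the version numbers from Dell’s advisory; how you read the installed version off a given machine is left as a placeholder, and the third-party packaging module does the comparison.

```python
# Compare an installed SupportAssist version against the first fixed releases
# named in Dell's advisory (2.1.4 for business PCs, 3.4.1 for home PCs).
from packaging.version import Version

FIXED = {"business": Version("2.1.4"), "home": Version("3.4.1")}

def is_vulnerable(edition: str, installed: str) -> bool:
    return Version(installed) < FIXED[edition]

if __name__ == "__main__":
    installed_version = "3.4"  # placeholder: read this from the actual machine
    print(is_vulnerable("home", installed_version))  # True -> update needed
```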

Source: Tens of millions of biz Dell PCs smacked by privilege-escalation bug in bundled troubleshooting tool • The Register

Software error exposes the ID numbers, birthdays and genders for 1.26 million Danish citizens, 1/5th of the population

A software error in Denmark’s government tax portal has accidentally exposed the personal identification (CPR) numbers for 1.26 million Danish citizens, a fifth of the country’s total population.

The error lasted for five years (between February 2, 2015, and January 24, 2020) before it was discovered, Danish media reported last week.

The software error and the subsequent leak was discovered following an audit by the Danish Agency for Development and Simplification (Udviklings-og Forenklingsstyrelsen, or UFST).

According to the UFST, the error occurred on TastSelv Borger, the Danish tax administration’s official self-service portal where Danish citizens go to file and pay taxes online.

Government officials said the portal contained a software bug that meant every time a user updated account details in the portal’s settings section, their CPR number would be added to the URL.

The URL would then be collected by analytics services running on the site — in this case, Adobe and Google.

According to the UFST, details for more than 1.2 million Danish tax-payers were exposed by this bug and were inadvertently collected by the analytics providers.

CPR numbers are important in Denmark. They are mandatory for opening bank accounts, getting phone numbers, and many other basic operations.

CPR numbers also leak details about their owner. They consist of ten digits, of which the first six are the citizen’s birth date, and they reveal the owner’s gender: if the last digit is odd, the owner is male; if it is even, the owner is female.
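
To make the exposure concrete, here is a small sketch of what a bare CPR number gives away, following the structure described above. The DDMMYY ordering of the birth-date digits is my assumption about the format, and the example number is made up.

```python
# What a leaked CPR number reveals, per the structure described above:
# ten digits, birth date in the first six (assumed DDMMYY here) and gender
# in the parity of the last digit. The example number is fabricated.
def describe_cpr(cpr: str) -> dict:
    digits = cpr.replace("-", "")
    if len(digits) != 10 or not digits.isdigit():
        raise ValueError("expected ten digits")
    day, month, year2 = digits[0:2], digits[2:4], digits[4:6]
    gender = "male" if int(digits[-1]) % 2 else "female"
    # Century is not derivable from these six digits alone, hence the "XX".
    return {"birth_date": f"{day}-{month}-XX{year2}", "gender": gender}

if __name__ == "__main__":
    print(describe_cpr("070685-1234"))  # fabricated example
```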

[…]

Denmark is the third Scandinavian country whose government has suffered a security incident in the last few years. In 2015, the Swedish Transport Agency (STA) allowed several sensitive databases to be uploaded to the cloud and accessed by unvetted Serbian IT professionals. In 2018, a hacker group stole healthcare data for more than half of Norway’s population.

Source: Software error exposes the ID numbers for 1.26 million Danish citizens | ZDNet

Israeli Voters: Data of All 6.5 Million Voters Leaked

A software flaw exposed the personal data of every eligible voter in Israel — including full names, addresses and identity card numbers for 6.5 million people — raising concerns about identity theft and electoral manipulation, three weeks before the country’s national election.

The security lapse was tied to a mobile app used by Prime Minister Benjamin Netanyahu and his Likud party to communicate with voters, offering news and information about the March 2 election. Until it was fixed, the flaw made it possible, without advanced technical skills, to view and download the government’s entire voter registry, though it was unclear how many people did so.

[…]

It came less than a week after another app helped make a fiasco of the Democratic presidential caucuses in Iowa, casting serious doubts on the figures that were belatedly reported. That app had been privately developed for the party, had not been tested by independent experts, and had been kept secret by the party until weeks before the caucuses.

The personal information of almost every adult in Bulgaria was stolen last year from a government database by hackers suspected of being Russian, and there were cyberattacks in 2017 on Britain’s health care system and the government of Bangladesh that the United States and others have blamed on North Korea. Cyberattacks on companies like the credit agency Equifax, the Marriott International hotel company and Yahoo have exposed the personal data of vast numbers of people.

[…]

Explaining the ease with which the voter information could be accessed, Ran Bar-Zik, the programmer who revealed the breach, said that visitors to the Elector app’s website could right-click to “view source,” an action that reveals the code behind a web page.

That page of code included the user names and passwords of site administrators with access to the voter registry, and using those credentials would allow anyone to view and download the information. Mr. Bar-Zik, a software developer for Verizon Media who wrote the Sunday article in Haaretz, said he chose the name and password of the Likud party administrator and logged in.

“Jackpot!” he said in an interview on Monday. “Everything was in front of me!”
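
The general lesson is that anything delivered to the browser is public, credentials included. Below is a crude, purely heuristic sketch that fetches a page and flags credential-looking strings in its source; the regexes are illustrative (they will both miss things and false-positive), and it assumes the third-party requests package.

```python
# Crude scan of a page's source for credential-looking strings. The page body
# fetched here is exactly the data that "view source" shows in a browser.
import re
import requests

PATTERNS = [
    re.compile(r"password['\"]?\s*[:=]\s*['\"][^'\"]+['\"]", re.I),
    re.compile(r"(?:username|user|login)['\"]?\s*[:=]\s*['\"][^'\"]+['\"]", re.I),
    re.compile(r"api[_-]?key['\"]?\s*[:=]\s*['\"][^'\"]+['\"]", re.I),
]

def find_secrets(url: str):
    source = requests.get(url, timeout=10).text
    hits = []
    for pattern in PATTERNS:
        hits.extend(pattern.findall(source))
    return hits

if __name__ == "__main__":
    for hit in find_secrets("https://example.com"):
        print("possible embedded credential:", hit)
```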

Source: Israeli Voters: Data of All 6.5 Million Voters Leaked – The New York Times

So – yes, centralised databases. What a great idea. Not.

Sorry to be blunt about this… Open AWS S3 storage bucket just made 30,000 potheads’ privacy go up in smoke

Personal records, including scans of ID cards and purchase details, for more than 30,000 people were exposed to the public internet from this unsecured cloud silo, we’re told. In addition to full names and pictures of customer ID cards, the 85,000 file collection is said to include email and mailing address, phone numbers, dates of birth, and the maximum amount of cannabis an individual is allowed to purchase. All available to download, unencrypted, if you knew where to look.

Because many US states have strict record-keeping requirements written into their marijuana legalization laws, dispensaries have to manage a certain amount of customer and inventory information. In the case of THSuite, those records were put into an S3 bucket that was left accessible to the open internet – including the Shodan.io search engine.

The bucket was taken offline last week after it was discovered on December 24, and its insecure configuration was reported to THSuite on December 26 and Amazon on January 7, according to vpnMentor. The S3 bucket’s data belonged to dispensaries in Maryland, Ohio, and Colorado, we’re told.

Source: Sorry to be blunt about this… Open AWS S3 storage bucket just made 30,000 potheads’ privacy go up in smoke • The Register

Netgear leaves admin interface’s TLS cert and private key in router firmware

Netgear left in its router firmware key ingredients needed to intercept and tamper with secure connections to its equipment’s web-based admin interfaces.

Specifically, valid, signed TLS certificates with private keys were embedded in the software, which was available to download for free by anyone, and also shipped with Netgear devices. This data can be used to create HTTPS certs that browsers trust, and can be used in miscreant-in-the-middle attacks to eavesdrop on and alter encrypted connections to the routers’ built-in web-based control panel.

In other words, the data can be used to potentially hijack people’s routers. It’s partly an embarrassing leak, and partly indicative of manufacturers trading off security, user friendliness, cost, and effort.

Security mavens Nick Starke and Tom Pohl found the materials on January 14, and publicly disclosed their findings five days later, over the weekend.

The blunder is a result of Netgear’s approach to security and user convenience. When configuring their kit, owners of Netgear equipment are expected to visit https://routerlogin.net or https://routerlogin.com. The network’s router tries to ensure those domain names resolve to the device’s IP address on the local network. So, rather than have people enter 192.168.1.1 or similar, they can just use that memorable domain name.

To establish an HTTPS connection, and avoid complaints from browsers about using insecure HTTP and untrusted certs, the router has to produce a valid HTTPS cert for routerlogin.net or routerlogin.com that is trusted by browsers. To cryptographically prove the cert is legit when a connection is established, the router needs to use the certificate’s private key. This key is stored unsecured in the firmware, allowing anyone to extract and abuse it.

Netgear doesn’t want to provide an HTTP-only admin interface, to avoid warnings from browsers of insecure connections and to thwart network eavesdroppers, we presume. But if it uses HTTPS, the built-in web server needs to prove its cert is legit, and thus needs its private key. So either Netgear switches to using per-device private-public keys, or stores the private key in a secure HSM in the router, or just uses HTTP, or it has to come up with some other solution. You can follow that debate here.
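
This class of leak is typically found by scanning firmware images for PEM markers. The sketch below is a generic version of that technique, not the researchers’ actual tooling.

```python
# Generic technique for spotting embedded keys and certs: scan a firmware image
# (or an unpacked filesystem from it) for PEM markers and report their offsets.
import sys

MARKERS = [
    b"-----BEGIN RSA PRIVATE KEY-----",
    b"-----BEGIN EC PRIVATE KEY-----",
    b"-----BEGIN PRIVATE KEY-----",
    b"-----BEGIN CERTIFICATE-----",
]

def scan(path: str):
    data = open(path, "rb").read()
    for marker in MARKERS:
        offset = data.find(marker)
        while offset != -1:
            print(f"{marker.decode()} at offset {offset:#x}")
            offset = data.find(marker, offset + 1)

if __name__ == "__main__":
    scan(sys.argv[1])  # e.g. python scan_fw.py firmware.bin
```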

Source: Leave your admin interface’s TLS cert and private key in your router firmware in 2020? Just Netgear things • The Register

BlackVue dashcams show anyone where you are in real time and where you have been in the past

An app that is supposed to be a fun activity for dashcam users to broadcast their camera feeds and drives is actually allowing people to scrape and store the real-time location of drivers across the world.

BlackVue is a dashcam company with its own social network. With a small, internet-connected dashcam installed inside their vehicle, BlackVue users can receive alerts when their camera detects an unusual event such as someone colliding with their parked car. Customers can also allow others to tune into their camera’s feed, letting others “vicariously experience the excitement and pleasure of driving all over the world,” a message displayed inside the app reads.

Users are invited to upload footage of their BlackVue camera spotting people crashing into their cars or other mishaps with the #CaughtOnBlackVue hashtag. It’s kind of like Amazon’s Ring cameras, but for cars. BlackVue exhibited at CES earlier this month, and was previously featured on Innovations with Ed Begley Jr. on the History Channel.

But what BlackVue’s app doesn’t make clear is that it is possible to pull and store users’ GPS locations in real-time over days or even weeks. Motherboard was able to track the movements of some of BlackVue’s customers in the United States.

The news highlights privacy issues that some BlackVue customers or other dashcam users may not be aware of, and more generally the potential dangers of adding an internet- and GPS-enabled device to your vehicle. It also shows how developers may have one use case for an app, while people can discover others: although BlackVue wanted to create an entertaining app where users could tap into each other’s feeds, it may not have realized that it would be trivially easy to track its customers’ movements in granular detail, at scale, and over time.

BlackVue acts as another example of how surveillance products that are nominally intended to protect a user have been designed in such a way that can end up in a user being spied on, too.

“I don’t think people understand the risk,” Lee Heath, an information security professional and BlackVue user told Motherboard. “I knew about some of the cloud features which I wanted. You can have it automatically connect and upload when events happen. But I had no idea about the sharing” before receiving the device as a gift, he added.

Ordinarily, BlackVue lets anyone create an account and then view a map of cameras that are broadcasting their location and live feed. This broadcasting is not enabled by default, and users have to select the option to do so when setting up or configuring their own camera. Motherboard tuned into live feeds from users in Hong Kong, China, Russia, the U.K, Germany, and elsewhere. BlackVue spokesperson Jeremie Sinic told Motherboard in an email that the users on the map only represent a tiny fraction of BlackVue’s overall customers.

But the actual GPS data that drives the map is available and publicly accessible.

A screenshot of the location data of one BlackVue user that Motherboard tracked throughout New York. Motherboard has heavily obfuscated the data to protect the individual’s privacy. Image: Motherboard

By reverse engineering the iOS version of the BlackVue app, Motherboard was able to write scripts that pull the GPS location of BlackVue users over a week long period and store the coordinates and other information like the user’s unique identifier. One script could collect the location data of every BlackVue user who had mapping enabled on the eastern half of the United States every two minutes. Motherboard collected data on dozens of customers.

With that data, we were able to build a picture of several BlackVue users’ daily routines: one drove around Manhattan during the day, perhaps as a rideshare driver, before then leaving for Queens in the evening. Another BlackVue user regularly drove around Brooklyn, before parking on a specific block in Queens overnight. The user did this for several different nights, suggesting this may be where the owner lives or stores their vehicle. A third showed someone driving a truck all over South Carolina.

Some customers may use BlackVue as part of a fleet of vehicles; an employer wanting to keep tabs on their delivery trucks as they drive around, for instance. But BlackVue also markets its products to ordinary consumers who want to protect their cars.

A screenshot of Motherboard accessing someone’s public live feed as the user is driving in public away from their apparent home. Motherboard has redacted the user information to protect individual privacy. Image: Motherboard

BlackVue’s Sinic said that collecting GPS coordinates of multiple users over an extended period of time is not supposed to be possible.

“Our developers have updated the security measures following your report from yesterday that I forwarded,” Sinic said. After this, several of Motherboard’s web requests that previously provided user data stopped working.

In 2018 the company did make some privacy-related changes to its app, meaning users were not broadcasting their camera feeds by default.

“I think BlackVue has decent ideas as far as leaving off by default but allows people to put themselves at risk without understanding,” Heath, the BlackVue user, said.

Motherboard has deleted all of the data collected to preserve individuals’ privacy.

Source: This App Lets Us See Everywhere People Drive – VICE

PGP keys, software security, and much more threatened by new SHA1 exploit

Three years ago, Ars declared the SHA1 cryptographic hash algorithm officially dead after researchers performed the world’s first known instance of a fatal exploit known as a “collision” on it. On Tuesday, the dead SHA1 horse got clobbered again as a different team of researchers unveiled a new attack that’s significantly more powerful.

The new collision gives attackers more options and flexibility than were available with the previous technique. It makes it practical to create PGP encryption keys that, when digitally signed using the SHA1 algorithm, impersonate a chosen target. More generally, it produces the same hash for two or more attacker-chosen inputs by appending data to each of them. The attack unveiled on Tuesday also costs as little as $45,000 to carry out. The attack disclosed in 2017, by contrast, didn’t allow forgeries on specific predetermined document prefixes and was evaluated to cost from $110,000 to $560,000 on Amazon’s Web Services platform, depending on how quickly adversaries wanted to carry it out.

The new attack is significant. While SHA1 has been slowly phased out over the past five years, it remains far from being fully deprecated. It’s still the default hash function for certifying PGP keys in the legacy 1.4 version branch of GnuPG, the open-source successor to the PGP application for encrypting email and files. Those SHA1-generated signatures were accepted by the modern GnuPG branch until recently, and were only rejected after the researchers behind the new collision privately reported their results.

Git, the world’s most widely used system for managing software development among multiple people, still relies on SHA1 to ensure data integrity. And many non-Web applications that rely on HTTPS encryption still accept SHA1 certificates. SHA1 is also still allowed for in-protocol signatures in the Transport Layer Security and Secure Shell protocols.
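
A sensible first step is auditing what still presents SHA-1 signatures. The sketch below grabs the certificate a TLS server offers and reports its signature hash algorithm, using the standard library plus the third-party cryptography package; public CAs stopped issuing SHA-1 certificates years ago, so private and internal CAs are where stragglers usually hide.

```python
# Report the signature hash algorithm of the certificate a TLS server presents;
# anything still reporting "sha1" is a candidate for replacement.
import ssl
from cryptography import x509

def signature_hash(host: str, port: int = 443) -> str:
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode("ascii"))
    algo = cert.signature_hash_algorithm  # None for some signature schemes
    return algo.name if algo else "unknown"

if __name__ == "__main__":
    for host in ["example.com"]:
        print(host, signature_hash(host))
```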

In a paper presented at this week’s Real World Crypto Symposium in New York City, the researchers warned that even if SHA1 usage is low or used only for backward compatibility, it will leave users open to the threat of attacks that downgrade encrypted connections to the broken hash function. The researchers said their results underscore the importance of fully phasing out SHA1 across the board as soon as possible.

“This work shows once and for all that SHA1 should not be used in any security protocol where some kind of collision resistance is to be expected from the hash function,” the researchers wrote. “Continued usage of SHA1 for certificates or for authentication of handshake messages in TLS or SSH is dangerous, and there is a concrete risk of abuse by a well-motivated adversary. SHA1 has been broken since 2004, but it is still used in many security systems; we strongly advise users to remove SHA1 support to avoid downgrade attacks.”

Source: PGP keys, software security, and much more threatened by new SHA1 exploit | Ars Technica

More than 600 million users installed Android ‘fleeceware’ apps from the Play Store – apps that don’t cancel your trial after you uninstall them

Security researchers from Sophos say they’ve discovered a new set of “fleeceware” apps that appear to have been downloaded and installed by more than 600 million Android users.

The term fleeceware is a recent addition to the cyber-security jargon. It was coined by UK cyber-security firm Sophos last September following an investigation that discovered a new type of financial fraud on the official Google Play Store.

It refers to apps that abuse Android’s ability to offer trial periods before a payment is charged to the user’s account.

By default, all users who sign up for an Android app trial period have to cancel it manually to avoid being charged. However, most users just uninstall an app when they don’t like it.

The vast majority of app developers interpret this action — a user uninstalling their app — as a trial period cancelation and don’t follow through with a charge.

But last year, Sophos discovered that some Android app developers don’t cancel an app’s trial period when the app is uninstalled unless they receive a specific cancellation request from the user.

Sophos said it initially discovered 24 Android apps that were charging obscene fees (between $100 and $240 per year) for the most basic and simplistic apps, such as QR/barcode readers and calculators.

Sophos researchers called these apps “fleeceware.”

In a new report published yesterday, Sophos said it discovered another set of Android “fleeceware” apps that have continued to abuse the app trial mechanism to impose charges to users after they uninstalled an app.

Source: More than 600 million users installed Android ‘fleeceware’ apps from the Play Store | ZDNet

Skype and Cortana audio listened in on by workers in China with ‘no security measures’

A Microsoft programme to transcribe and vet audio from Skype and Cortana, its voice assistant, ran for years with “no security measures”, according to a former contractor who says he reviewed thousands of potentially sensitive recordings on his personal laptop from his home in Beijing over the two years he worked for the company.

The recordings, both deliberate and accidentally invoked activations of the voice assistant, as well as some Skype phone calls, were simply accessed by Microsoft workers through a web app running in Google’s Chrome browser, on their personal laptops, over the Chinese internet, according to the contractor.

Workers had no cybersecurity help to protect the data from criminal or state interference, and were even instructed to do the work using new Microsoft accounts all with the same password, for ease of management, the former contractor said. Employee vetting was practically nonexistent, he added.

“There were no security measures, I don’t even remember them doing proper KYC [know your customer] on me. I think they just took my Chinese bank account details,” he told the Guardian.

While the grader began by working in an office, he said the contractor that employed him “after a while allowed me to do it from home in Beijing. I judged British English (because I’m British), so I listened to people who had their Microsoft device set to British English, and I had access to all of this from my home laptop with a simple username and password login.” Both username and password were emailed to new contractors in plaintext, he said, with the former following a simple schema and the latter being the same for every employee who joined in any given year.

“They just give me a login over email and I will then have access to Cortana recordings. I could then hypothetically share this login with anyone,” the contractor said. “I heard all kinds of unusual conversations, including what could have been domestic violence. It sounds a bit crazy now, after educating myself on computer security, that they gave me the URL, a username and password sent over email.”

As well as the risks of a rogue employee saving user data themselves or accessing voice recordings on a compromised laptop, Microsoft’s decision to outsource some of the work vetting English recordings to companies based in Beijing raises the additional prospect of the Chinese state gaining access to recordings. “Living in China, working in China, you’re already compromised with nearly everything,” the contractor said. “I never really thought about it.”

Source: Skype audio graded by workers in China with ‘no security measures’ | Technology | The Guardian