Apple has disabled the Apple Watch Walkie Talkie app due to an unspecified vulnerability that could allow a person to listen in on another customer’s iPhone without consent, the company told TechCrunch this evening.
Apple has apologized for the bug and for the inconvenience of being unable to use the feature while a fix is made.
[…]
Earlier this year a bug was discovered in the group calling feature of FaceTime that allowed people to listen in before a call was accepted. It turned out that the teen who discovered the bug, Grant Thompson, had attempted to contact Apple about the issue but was unable to get a response. Apple fixed the bug and eventually awarded Thompson a bug bounty. This time around, Apple appears to be listening more closely to the reports that come in via its vulnerability tips line and has disabled the feature.
Earlier today, Apple quietly pushed a Mac update to remove a feature of the Zoom conference app that allowed it to work around Mac restrictions to provide a smoother call initiation experience — but that also allowed emails and websites to add a user to an active video call without their permission.
A publicly accessible and unsecured ElasticSearch server owned by the Jiangsu Provincial Public Security Department leaked two databases containing over 90 million records on people and businesses.
Jiangsu (江苏省) is an eastern-central coastal Chinese province with a population of over 80 million, including an urban population of more than 55 million (68.76% of the total), according to 2018 figures from the National Bureau of Statistics, making it the fifth most populous province in China.
Provincial public security departments are “functional organization under the dual leadership of Provincial Government and the Ministry of Public Security in charge of the whole province’s public security work.”
The two now-secured databases contained more than 26 GB of data in the form of personally identifiable information (PII) such as names, birth dates, genders, identity card numbers, and location coordinates, as well as info on city_relations, city_open_id, and province_open_id for individuals.
In the case of businesses, the records included business IDs, business types, location coordinates, city_open_id, and memos designed to track if the owner of the business is known.
Besides the two exposed ElasticSearch databases, the Jiangsu Provincial Public Security Department also had a Public Security Network admin console that required a valid user/password combo for access, as well as a publicly-accessible Kibana installation running on the server which would help browse and analyze the stored data using a GUI-based interface.
However, unlike other cases of exposed Kibana installations, this one was not fully configured seeing that, once loaded in a web browser, it would go straight to the “Create index pattern page.”
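For context, the kind of check that surfaces an installation like this is trivial: an unauthenticated Elasticsearch node answers its REST API, and an open Kibana answers its status endpoint. A minimal sketch in Python (the host is a placeholder; ports 9200 and 5601 are the usual defaults, and the requests library is third-party):

```python
import requests  # third-party: pip install requests

# Hypothetical host for illustration only; default ports assumed
# (9200 for Elasticsearch's REST API, 5601 for Kibana).
HOST = "203.0.113.10"

def check_elasticsearch(host: str, timeout: float = 5.0) -> None:
    """Flag an Elasticsearch node that answers REST calls without credentials."""
    base = f"http://{host}:9200"
    info = requests.get(base, timeout=timeout)
    if info.ok:
        print("Unauthenticated cluster info:", info.json().get("cluster_name"))
        # Being able to list indices without auth is a strong sign the data is exposed.
        indices = requests.get(f"{base}/_cat/indices?format=json", timeout=timeout)
        if indices.ok:
            for idx in indices.json():
                print(idx["index"], idx.get("docs.count"))

def check_kibana(host: str, timeout: float = 5.0) -> None:
    """A reachable Kibana UI gives a point-and-click view over the same data."""
    resp = requests.get(f"http://{host}:5601/api/status", timeout=timeout)
    print("Kibana status endpoint reachable:", resp.ok)

if __name__ == "__main__":
    check_elasticsearch(HOST)
    check_kibana(HOST)
```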
A large-scale payment card skimming campaign that successfully breached 962 e-commerce stores was discovered today by Magento security research company Sanguine Security.
The campaign seems to be automated, according to Sanguine Security researcher Willem de Groot, who told BleepingComputer that the card skimming script was added within a 24-hour timeframe. “It would be nearly impossible to breach 960+ stores manually in such a short time,” he added.
Sanguine Security has not shared details on how such automated Magecart attacks against e-commerce websites work, but the procedure would most likely entail scanning for and exploiting security flaws in the stores’ software platform.
“Have not gotten confirmation yet, but it seems that several victims were missing patches against PHP object injection exploits,” de Groot said.
While details on how the online stores were breached are still scarce given that the logs are still being analyzed, the JavaScript-based payment data skimmer script was decoded and uploaded by the security company to GitHub Gist.
As shown from its source code, the skimmer was used by the attackers to collect e-commerce customers’ payment info on breached stores, including full credit card data, names, phones, and addresses.
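A store owner worried about this sort of injection can at least audit which hosts their checkout page pulls scripts from. A rough sketch, with the store URL and the allowlist of expected script hosts as placeholders:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

# Placeholder values for illustration: your own checkout URL and the script
# hosts you knowingly use (payment provider, analytics, CDN, ...).
STORE_URL = "https://example-store.test/checkout"
EXPECTED_HOSTS = {"example-store.test", "js.stripe.com"}

class ScriptSrcCollector(HTMLParser):
    """Collect the src attribute of every <script> tag on the page."""
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

html = urlopen(STORE_URL).read().decode("utf-8", errors="replace")
collector = ScriptSrcCollector()
collector.feed(html)

for src in collector.sources:
    host = urlparse(src).netloc
    if host and host not in EXPECTED_HOSTS:
        print("Unexpected script host on checkout page:", src)
```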
On Monday, security researcher Jonathan Leitschuh publicly disclosed a serious zero-day vulnerability in the conferencing software Zoom. On Mac computers, Zoom apparently implements its click-to-join feature, which lets users jump straight into a video meeting from a browser link, by installing a local web server that runs as a background process and “accepts requests regular browsers wouldn’t,” per the Verge. As a result, Zoom could be hijacked by any website to force a Mac user to join a call without their permission, and with webcams activated unless a specific setting was enabled.
Worse, Leitschuh wrote that the local web server persists even if Zoom is uninstalled and is capable of reinstalling the app on its own, and that when he contacted the company they did little to resolve the issues.
In a Medium post on Monday, Leitschuh provided a demo in the form of a link that, when clicked, took Mac users who have ever installed the app to a conference room with their video cameras activated (it’s here, if you must try yourself). Leitschuh noted that the code to do this can be embedded in any website as well as “in malicious ads, or it could be used as a part of a phishing campaign.” Additionally, Leitschuh wrote that even if users uninstall Zoom, the insecure local web server persists and “will happily re-install the Zoom client for you, without requiring any user interaction on your behalf besides visiting a webpage.”
This implementation leaves open other nefarious ways to abuse the local web server, per the Verge:
Turning on your camera is bad enough, but the existence of the web server on their computers could open up more significant problems for Mac users. For example, in an older version of Zoom (since patched), it was possible to enact a denial of service attack on Macs by constantly pinging the web server: “By simply sending repeated GET requests for a bad number, Zoom app would constantly request ‘focus’ from the OS,” Leitschuh writes.
According to Leitschuh, he contacted Zoom on March 26, saying he would disclose the exploit in 90 days. Zoom did issue a “quick fix” patch that only disabled “a meeting creator’s ability to automatically enable a participant’s video by default,” he added, though this was far from a complete solution (and did nothing to negate the “ability for an attacker to forcibly join to a call anyone visiting a malicious site”) and only came in mid-June.
On July 7, he wrote, a “regression in the fix” caused it to no longer work, and though Zoom issued another patch on Sunday, he was able to create a workaround.
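The local web server is easy to check for. The port below, 19421, is the one cited in Leitschuh’s disclosure and the press coverage; treat it as an assumption rather than a documented constant:

```python
import socket

# Port 19421 is the one cited in the public disclosure for Zoom's local web
# server on macOS; treat it as an assumption rather than a guarantee.
ZOOM_LOCAL_PORT = 19421

def local_server_listening(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something accepts TCP connections on localhost:port."""
    try:
        with socket.create_connection((host, port), timeout=1):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if local_server_listening(ZOOM_LOCAL_PORT):
        print("A local service is listening on port 19421 (possibly Zoom's helper).")
    else:
        print("Nothing is listening on port 19421.")
```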
Permissions on Android apps are intended to be gatekeepers for how much data your device gives up. If you don’t want a flashlight app to be able to read through your call logs, you should be able to deny that access. But even when you say no, many apps find a way around: Researchers discovered more than 1,000 apps that skirted restrictions, allowing them to gather precise geolocation data and phone identifiers behind your back.
[…]
Researchers from the International Computer Science Institute found up to 1,325 Android apps that were gathering data from devices even after people explicitly denied them permission. Serge Egelman, director of usable security and privacy research at the ICSI, presented the study in late June at the Federal Trade Commission’s PrivacyCon.
“Fundamentally, consumers have very few tools and cues that they can use to reasonably control their privacy and make decisions about it,” Egelman said at the conference. “If app developers can just circumvent the system, then asking consumers for permission is relatively meaningless.”
[…]
Egelman said the researchers notified Google about these issues last September, as well as the FTC. Google said it would be addressing the issues in Android Q, which is expected to release this year.
The update will address the issue by hiding location information in photos from apps and requiring any apps that access Wi-Fi to also have permission for location data, according to Google.
[…]
Researchers found that Shutterfly, a photo-editing app, had been gathering GPS coordinates from photos and sending that data to its own servers, even when users declined to give the app permission to access location data.
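The Shutterfly case reportedly worked by reading coordinates out of photo EXIF metadata rather than asking the OS for location. A minimal sketch of spotting GPS-tagged photos, using the third-party Pillow library and its long-standing `_getexif()` helper (the folder path is a placeholder):

```python
from pathlib import Path
from PIL import Image  # third-party: pip install Pillow

GPS_IFD_TAG = 34853  # EXIF "GPSInfo" tag id

def photos_with_gps(folder: str):
    """Yield JPEG paths whose EXIF data includes a GPS block."""
    for path in Path(folder).glob("*.jpg"):
        try:
            exif = Image.open(path)._getexif() or {}
        except OSError:
            continue
        if GPS_IFD_TAG in exif:
            yield path

# Placeholder directory for illustration.
for p in photos_with_gps("/tmp/camera_roll"):
    print("GPS coordinates embedded in:", p)
```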
[…]
Some apps were relying on other apps that were granted permission to look at personal data, piggybacking off their access to gather phone identifiers like your IMEI number. These apps would read through unprotected files on a device’s SD card and harvest data they didn’t have permission to access. So if you let other apps access personal data, and they stored it in a folder on the SD card, these spying apps would be able to take that information.
While there were only about 13 apps doing this, they were installed more than 17 million times, according to the researchers. This includes apps like Baidu’s Hong Kong Disneyland park app, researchers said.
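Conceptually, the shared-storage side channel is nothing more than one app writing an identifier to a world-readable file and another app reading it back instead of asking the OS. A deliberately simplified sketch in plain Python (the file path and field names are hypothetical; on a real device this would be Android code):

```python
import json
from pathlib import Path

# Hypothetical world-readable file on shared storage, written by an app that
# does hold the phone-state permission and read by one that does not.
SHARED_FILE = Path("/sdcard/.shared_ids.json")

def write_identifier(imei: str) -> None:
    """App A (holds the permission) stashes the identifier on shared storage."""
    SHARED_FILE.write_text(json.dumps({"imei": imei}))

def read_identifier():
    """App B (denied the permission) reads the file instead of asking the OS."""
    if SHARED_FILE.exists():
        return json.loads(SHARED_FILE.read_text()).get("imei")
    return None
```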
Over ten million users have been duped into installing a fake Samsung app named “Updates for Samsung” that promises firmware updates but, in reality, redirects users to an ad-filled website and charges for firmware downloads.
The app takes advantage of the difficulty in getting firmware and operating system updates for Samsung phones, hence the high number of users who have installed it.
“It would be wrong to judge people for mistakenly going to the official application store for the firmware updates after buying a new Android device,” the security researcher said. “Vendors frequently bundle their Android OS builds with an intimidating number of software, and it can easily get confusing.”
“A user can feel a bit lost about the [system] update procedure. Hence can make a mistake of going to the official application store to look for system update.”
The “Updates for Samsung” app promises to solve this problem for non-technical users by providing a centralized location where Samsung phone owners can get their firmware and OS updates.
But according to Kuprins, this is a ruse. The app, which has no affiliation to Samsung, only loads the updato[.]com domain in a WebView (Android browser) component.
Rummaging through the app’s reviews, one can see hundreds of users complaining that the site is an ad-infested hellhole where most of them can’t find what they’re looking for — and that’s only when the app works and doesn’t crash.
The site does offer both free and paid (legitimate) Samsung firmware updates, but after digging through the app’s source code, Kuprins said the website limits the speed of free downloads to 56 KBps, and some free firmware downloads eventually end up timing out.
“During our tests, we too have observed that the downloads don’t finish, even when using a reliable network,” Kuprins said.
But by crashing all free downloads, the app pushes users to purchase a $34.99 premium package to be able to download any files.
There’s an interesting and troubling attack happening to some people involved in the OpenPGP community that makes their certificates unusable and can essentially break the OpenPGP implementation of anyone who tries to import one of the certificates.
The attack is quite simple and doesn’t exploit any technical vulnerabilities in the OpenPGP software, but instead takes advantage of one of the inherent properties of the keyserver network that’s used to distribute certificates. Keyservers are designed to allow people to discover the public certificates of other people with whom they want to communicate over a secure channel. One of the properties of the network is that anyone who has looked at a certificate and verified that it belongs to another specific person can add a signature, or attestation, to the certificate. That signature basically serves as the public stamp of approval from one user to another.
In general, people add signatures to someone’s certificate in order to give other users more confidence that the certificate is actually owned and controlled by the person who claims to own it. However, the OpenPGP specification doesn’t have any upper limit on the number of signatures that a certificate can have, so any user or group of users can add signatures to a given certificate ad infinitum. That wouldn’t necessarily be a problem, except for the fact that GnuPG, one of the more popular packages that implements the OpenPGP specification, doesn’t handle certificates with extremely large numbers of signatures very well. In fact, GnuPG will essentially stop working when it attempts to import one of those certificates.
Last week, two people involved in the OpenPGP community discovered that their public certificates had been spammed with tens of thousands of signatures–one has nearly 150,000–in an apparent effort to render them useless. The attack targeted Robert J. Hansen and Daniel Kahn Gillmor, but the root problem may end up affecting many other people, too.
“This attack exploited a defect in the OpenPGP protocol itself in order to ‘poison’ rjh and dkg’s OpenPGP certificates. Anyone who attempts to import a poisoned certificate into a vulnerable OpenPGP installation will very likely break their installation in hard-to-debug ways. Poisoned certificates are already on the SKS keyserver network. There is no reason to believe the attacker will stop at just poisoning two certificates. Further, given the ease of the attack and the highly publicized success of the attack, it is prudent to believe other certificates will soon be poisoned,” Hansen wrote in a post explaining the incident.
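One way to avoid importing a poisoned certificate is to count its signature packets before touching the keyring, using GnuPG’s packet dump rather than an import. A rough sketch (the filename and threshold are placeholders, and the exact `--list-packets` output format can vary slightly between GnuPG versions):

```python
import subprocess

def signature_count(key_file: str) -> int:
    """Count signature packets in an exported OpenPGP certificate without importing it."""
    out = subprocess.run(
        ["gpg", "--list-packets", key_file],
        capture_output=True, text=True, check=True,
    ).stdout
    return sum(1 for line in out.splitlines() if line.startswith(":signature packet:"))

# Placeholder filename; a certificate carrying tens of thousands of signatures
# is a strong hint it has been "poisoned" and may wedge GnuPG on import.
SUSPECT_THRESHOLD = 1000
count = signature_count("suspect-key.asc")
print(f"{count} signature packets found")
if count > SUSPECT_THRESHOLD:
    print("Refusing to import: this certificate looks spammed.")
```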
Spotted by the always excellent Windows Latest, Microsoft has told tens of millions of Windows 10 users that the latest KB4501375 update may break the platform’s Remote Access Connection Manager (RASMAN). And this can have serious repercussions.
The big one is VPNs. RASMAN handles how Windows 10 connects to the internet and it is a core background task for VPN services to function normally. Given the astonishing growth in VPN usage for everything from online privacy and important work tasks to unlocking Netflix and YouTube libraries, this has the potential to impact heavily on how you use your computer.
Interestingly, in detailing the issue Microsoft states that it only affects Windows 10 1903 – the latest version of the platform. The problem is Windows 10 1903 accounts for a conservative total of at least 50M users.
Why conservative? Because Microsoft states Windows 10 has been installed on 800M computers worldwide, but that figure is four months old. Meanwhile, the ever-reliable AdDuplex reports Windows 10 1903 accounted for 6.3% of all Windows 10 computers in June (50.4M), but that percentage was achieved in just over a month and their report is 10 days old. Microsoft has listed a complex workaround, but no timeframe has been announced for an actual fix.
In the meantime, Microsoft is stepping up its attempts to push Windows 7 users to Windows 10. Those users must be looking at Windows 10 right now and thinking they will resist to the very end.
Facebook resolves day-long outages across Instagram, WhatsApp, and Messenger
Facebook had problems loading images, videos, and other data across its apps today, leaving some people unable to load photos in the Facebook News Feed, view stories on Instagram, or send messages in WhatsApp. Facebook said earlier today it was aware of the issues and was “working to get things back to normal as quickly as possible.” It blamed the outage on an error that was triggered during a “routine maintenance operation.”
As of 7:49PM ET, Facebook posted a message to its official Twitter account saying the “issue has since been resolved and we should be back at 100 percent for everyone. We’re sorry for any inconvenience.” Instagram similarly said its issues were more or less resolved, too.
Earlier today, some people and businesses experienced trouble uploading or sending images, videos and other files on our apps. The issue has since been resolved and we should be back at 100% for everyone. We’re sorry for any inconvenience. — Facebook Business (@FBBusiness) July 3, 2019
We’re back! The issue has been resolved and we should be back at 100% for everyone. We’re sorry for any inconvenience. pic.twitter.com/yKKtHfCYMA — Instagram (@instagram) July 3, 2019
The issues started around 8AM ET and began slowly clearing up after a couple hours, according to DownDetector, which monitors website and app issues. The errors aren’t affecting all images; many pictures on Facebook and Instagram still load, but others are appearing blank. DownDetector has also received reports of people being unable to load messages in Facebook Messenger.
The outage persisted through mid-day, with Facebook releasing a second statement, where it apologized “for any inconvenience.” Facebook’s platform status website still lists a “partial outage,” with a note saying that the company is “working on a fix that will go out shortly.”
Apps and websites are always going to experience occasional disruptions due to the complexity of services they’re offering. But even when they’re brief, they can become a real problem due to the huge number of users many of these services have. A Facebook outage affects a suite of popular apps, and those apps collectively have billions of users who rely on them. That’s a big deal when those services have become critical for business and communications, and every hour they’re offline or acting strange can mean real inconveniences or lost money.
We’re aware that some people are having trouble uploading or sending images, videos and other files on our apps. We’re sorry for the trouble and are working to get things back to normal as quickly as possible. #facebookdown — Facebook (@facebook) July 3, 2019
The issue caused some images and features to break across all of Facebook’s apps
Well, folks, Facebook and its “family of apps” have experienced yet another crash. A nice respite moving into the long holiday weekend if you ask me.
Problems that appear to have started early Wednesday morning were still being reported as of the afternoon, with Instagram, Facebook, WhatsApp, Oculus, and Messenger all experiencing issues. According to DownDetector, issues first started cropping up on Facebook at around 8am ET.
“We’re aware that some people are having trouble uploading or sending images, videos and other files on our apps. We’re sorry for the trouble and are working to get things back to normal as quickly as possible,” Facebook tweeted just after noon on Wednesday. A similar statement was shared from Instagram’s Twitter account.
Oculus, Facebook’s VR property, separately tweeted that it was experiencing “issues around downloading software.”
Facebook’s crash was still well underway as of 1pm ET on Wednesday, primarily affecting images. Where users typically saw uploaded images, such as their profile pictures or in their photo albums, they instead saw a string of terms describing Facebook’s interpretation of the image.
TechCrunch’s Zack Whittaker noted on Twitter that all of those image tags you may have seen were Facebook’s machine learning at work.
This week’s crash is just the latest in what has become a semi-regular occurrence of outages. The first occurred back in March in an incident that Facebook later blamed on “a server configuration change.” Facebook and its subsidiaries went down again about a month later, though the March incident was much worse, with millions of reports on DownDetector.
Two weeks ago, Instagram was bricked and experienced ongoing issues with refreshing feeds, loading profiles, and liking images. While the feed refresh issue was quickly patched, it was hours before the company confirmed that Instagram had been fully restored.
We’ve reached out to Facebook for more information about the issues and will update this post if we hear back.
Code crash? Russian hackers? Nope. Good ol’ broken fiber cables borked Google Cloud’s networking today
Fiber-optic cables linking Google Cloud servers in its us-east1 region physically broke today, slowing down or effectively cutting off connectivity with the outside world.
For at least the past nine hours, and counting, netizens and applications have struggled to connect to systems and services hosted in the region, located on America’s East Coast. Developers and system admins have been forced to migrate workloads to other regions, or redirect traffic, in order to keep apps and websites ticking over amid mitigations deployed by the Silicon Valley giant.
By 0900 PDT, Google revealed the extent of the blunder: its cloud platform had “lost multiple independent fiber links within us-east1 zone.” The fiber provider, we’re told, “has been notified and are currently investigating the issue. In order to restore service, we have reduced our network usage and prioritised customer workloads.”
By that, we understand, Google means it redirected traffic destined for its Google.com services hosted in the data center region, to other locations, allowing the remaining connectivity to carry customer packets.
By midday, Pacific Time, Google updated its status pages to note: “Mitigation work is currently underway by our engineering team to address the issue with Google Cloud Networking and Load Balancing in us-east1. The rate of errors is decreasing, however some users may still notice elevated latency.”
However, at time of writing, the physically damaged cabling is not yet fully repaired, and us-east1 networking is thus still knackered. In fact, repairs could take as much as 24 hours to complete.
The latest update, posted 1600 PDT, reads as follows:
The disruptions with Google Cloud Networking and Load Balancing have been root caused to physical damage to multiple concurrent fiber bundles serving network paths in us-east1, and we expect a full resolution within the next 24 hours.
In the meantime, we are electively rerouting traffic to ensure that customers’ services will continue to operate reliably until the affected fiber paths are repaired. Some customers may observe elevated latency during this period.
Customers using Google Cloud’s Load Balancing service will automatically fail over to other regions, if configured, minimizing impact on their workloads, it is claimed. They can also migrate to, say, us-east4, though they may have to rejig their code and scripts to reference the new region.
The Register asked Google for more details about the damaged fiber, such as how it happened. A spokesperson told us exactly what was already on the aforequoted status pages.
Meanwhile, a Google Cloud subscriber wrote a little ditty about the outage to the tune of Pink Floyd’s Another Brick in the Wall. It starts: “We don’t need no cloud computing…” ®
This major Cloudflare internet routing blunder took A WEEK to fix. Why so long? It was IPv6 – and no one really noticed
Last week, an internet routing screw-up propagated by Verizon for three hours sparked havoc online, leading to significant press attention and industry calls for greater network security.
A few weeks before that, another packet routing blunder, this time pushed by China Telecom, lasted two hours, caused significant disruption in Europe and prompted some to wonder whether Beijing’s spies were abusing the internet’s trust-based structure to carry out surveillance.
In both cases, internet engineers were shocked at how long it took to fix traffic routing errors that normally only last minutes or even seconds. Well, that was nothing compared to what happened this week.
Cloudflare’s director of network engineering Jerome Fleury has revealed that the routing for a big block of IP addresses was wrongly announced for an ENTIRE WEEK and, just as amazingly, the company that caused it didn’t notice until the major blunder was pointed out by another engineer at Cloudflare. (This cock-up is completely separate to today’s Cloudflare outage.)
How is it even possible for network routes to remain completely wrong for several days? Because, folks, it was on IPv6.
“So Airtel AS9498 announced the entire IPv6 block 2400::/12 for a week and no-one notices until Tom Strickx finds out and they confirm it was a typo of /127,” Fleury tweeted over the weekend, complete with graphic showing the massive routing error.
That /12 represents 83 decillion IP addresses, or four quadrillion /64 networks. The /127 would be 2. Just 2 IP addresses. Slight difference. And while this demonstrates the expansiveness of IPv6’s address space, and perhaps even its robustness seeing as nothing seems to have actually broken during the routing screw-up, it also hints at just how sparse IPv6 is right now.
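Those numbers are easy to sanity-check with Python’s standard ipaddress module:

```python
import ipaddress

typo = ipaddress.ip_network("2400::/12")       # what Airtel announced
intended = ipaddress.ip_network("2400::/127")  # the presumed intent (the missing "7")

print(typo.num_addresses)       # 2**116, roughly 8.3e34 -- "83 decillion" addresses
print(intended.num_addresses)   # 2
print(2 ** (64 - 12))           # ~4.5e15 /64 networks fit inside the /12
```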
To be fair to Airtel, it often takes someone else to notice a network route error – typically caused by simple typos like failing to add a “7” – because the organization that messes up the tables tends not to see or feel the impact directly.
But if ever there was a symbol of how miserably the transition from IPv4 to IPv6 is going, it’s in the fact that a fat IPv6 routing error went completely unnoticed for a week while an IPv4 error will usually result in phone calls, emails, and outcry on social media within minutes.
And sure, IPv4 space is much, much more dense than IPv6 so obviously people will spot errors much faster. But no one at all noticed the advertisement of a /12 for days? That may not bode well for the future, even though, yes, this particular /127 typo had no direct impact.
I got 502 problems, and Cloudflare sure is one: Outage interrupts your El Reg-reading pleasure for almost half an hour
Updated Cloudflare, the outfit noted for the slogan “helping build a better internet”, had another wobble today as “network performance issues” rendered websites around the globe inaccessible.
The US tech biz updated its status page at 1352 UTC to indicate that it was aware of issues, but things began tottering quite a bit earlier. Since Cloudflare handles services used by a good portion of the world’s websites, such as El Reg, including content delivery, DNS and DDoS protection, when it sneezes, a chunk of the internet has to go and have a bit of a lie down. That means netizens were unable to access many top sites globally.
A stumble last week was attributed to the antics of Verizon by CTO John Graham-Cumming. As for today’s shenanigans? We contacted the company, but they’ve yet to give us an explanation.
While Cloudflare implemented a fix by 1415 UTC and declared things resolved by 1457 UTC, a good portion of internet users noticed things had gone very south for many, many sites.
The company’s CEO took to Twitter to proffer an explanation for why things had fallen over, fingering a colossal spike in CPU usage as the cause while gently nudging the more wild conspiracy theories away from the whole DDoS thing.
However, the outage was a salutary reminder of the fragility of the internet as even Firefox fans found their beloved browser unable to resolve URLs.
Ever keen to share in the ups and downs of life, Cloudflare’s own site also reported the dread 502 error.
As with the last incident, users who endured the less-than-an-hour of disconnection would do well to remember that the internet is a brittle thing. And Cloudflare would do well to remember that its customers will be pondering if maybe they depend on its services just a little too much.
Updated to add at 1702 BST
Following publication of this article, Cloudflare released a blog post stating the “CPU spike was caused by a bad software deploy that was rolled back. Once rolled back the service returned to normal operation and all domains using Cloudflare returned to normal traffic levels.”
Naturally it then added…
“We are incredibly sorry that this incident occurred. Internal teams are meeting as I write performing a full post-mortem to understand how this occurred and how we prevent this from ever occurring again.” ®
Cloudflare gave everyone a 30-minute break from a chunk of the internet yesterday: Here’s how they did it
Internet services outfit Cloudflare took careful aim and unloaded both barrels at its feet yesterday, taking out a large chunk of the internet as it did so.
In an impressive act of openness, the company posted a distressingly detailed post-mortem on the cockwomblery that led to the outage. The Register also spoke to a weary John Graham-Cumming, CTO of the embattled company, to understand how it all went down.
This time it wasn’t Verizon wot dunnit; Cloudflare engineered this outage all by itself.
In a nutshell, what happened was that Cloudflare deployed some rules to its Web Application Firewall (WAF). The gang deploys these rules to servers in a test mode – the rule gets fired but doesn’t take any action – in order to measure what happens when real customer traffic runs through it.
We’d contend that an isolated test environment into which one could direct traffic would make sense, but Graham-Cumming told us: “We do this stuff all the time. We have a sequence of ways in which we deploy stuff. In this case, it didn’t happen.”
In a frank admission that should send all DevOps enthusiasts scurrying to look at their pipelines, Graham-Cumming told us: “We’re really working on understanding how the automated test suite which runs internally didn’t pick up the fact that this was going to blow up our service.”
The CTO elaborated: “We push something out, it gets approved by a human, and then it goes through a testing procedure, and then it gets pushed out to the world. And somehow in that testing procedure, we didn’t spot that this was going to blow things up.”
“And that didn’t happen in this instance. This should have been caught easily.”
Alas, two things went wrong. Firstly, one of the rules (designed to block nefarious inline JavaScript) contained a regular expression that would send CPU usage sky high. Secondly, the new rules were accidentally deployed globally in one go.
The result? “One of these rules caused the CPU spike to 100 per cent, on all of our machines.” And because Cloudflare’s products are distributed over all its servers, every service was starved of CPU while the offending regular expression did its thing.
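Cloudflare’s actual WAF rule isn’t reproduced here, but the failure mode is easy to demonstrate with a toy pattern that shares the nested-wildcard shape: a backtracking regex engine tries a quadratic number of ways to split an input that can never match.

```python
# A toy demonstration of runaway regex backtracking, the class of problem that
# pinned Cloudflare's CPUs. This is NOT Cloudflare's actual WAF rule; it is a
# minimal pattern whose stacked greedy wildcards force the engine to try a
# quadratic number of split points before giving up.
import re
import time

PATTERN = re.compile(r".*.*=.*")  # two greedy wildcards before a literal '='

for n in (2_000, 4_000, 8_000, 16_000):
    subject = "x" * n          # no '=' anywhere, so the match must fail
    start = time.perf_counter()
    PATTERN.match(subject)
    elapsed = time.perf_counter() - start
    # Absolute times vary by machine, but they roughly quadruple as n doubles.
    print(f"n={n:>6}: {elapsed:.3f}s")
```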
In yet another example of absent security controls, troves of police body camera footage were left open to the world for anyone to siphon off, according to an infosec biz.
Jasun Tate, CEO of Black Alchemy Solutions Group, told The Register on Monday he and his team had identified about a terabyte of officer body cam videos, stored in unprotected internet-facing databases, belonging to the Miami Police Department and cops in other US cities, as well as places abroad. The operators of these databases – Tate suggests there are five service providers involved – work with various police departments. The footage apparently dates from 2018 to present.
“Vendors that provide services to police departments are insecure,” said Tate, adding that he could not at present identify the specific vendors responsible for leaving the archive freely accessible to the public. Below is an example body-cam video from the internet-facing data silo Tate shared on Twitter.
Tate said he came across the files while doing online intelligence work for a client. While searching the internet, he said his firm came across a dark-web hacker forum thread that pointed out the body cam material sitting prone on the internet. Following the forum’s links led Tate to police video clips that had been stored insecurely in what he described as a few open MongoDB and mySQL databases.
For at least the past few days, the footage was publicly accessible, we’re told. Tate reckons the videos will have been copied from the databases by the hacker forum’s denizens, and potentially sold on by now.
According to Tate, the Miami Police Department was notified of the findings. A spokesperson for Miami PD said the department is still looking into these claims, and won’t comment until the review is completed.
Tate posted about his findings on Saturday via Twitter. The links to databases he provided to The Register as evidence of his findings now return errors, indicating the systems’ administrators have taken steps to remove the files from public view.
The incident echoes the hacking of video surveillance biz Perceptics in terms of the sensitivity of the exposed data. The Perceptics hack appears to be more severe because so much of its internal data was stolen and posted online. But that could change if it turns out that much of the once accessible Miami body cam footage was copied and posted on other servers.
ProPublica recently reported that two U.S. firms, which professed to use their own data recovery methods to help ransomware victims regain access to infected files, instead paid the hackers.
Now there’s new evidence that a U.K. firm takes a similar approach. Fabian Wosar, a cyber security researcher, told ProPublica this month that, in a sting operation he conducted in April, Scotland-based Red Mosquito Data Recovery said it was “running tests” to unlock files while actually negotiating a ransom payment. Wosar, the head of research at anti-virus provider Emsisoft, said he posed as both hacker and victim so he could review the company’s communications to both sides.
Red Mosquito Data Recovery “made no effort to not pay the ransom” and instead went “straight to the ransomware author literally within minutes,” Wosar said.
[…]
On its website, Red Mosquito Data Recovery calls itself a “one-stop data recovery and consultancy service” and says it has dealt with hundreds of ransomware cases worldwide in the past year. It advertised last week that its “international service” offers “experts who can offer honest, free advice.” It said it offers a “professional alternative” to paying a ransom, but cautioned that “paying the ransom may be the only viable option for getting your files decrypted.”
It does “not recommend negotiating directly with criminals since this can further compromise security,” it added.
Red Mosquito Data Recovery did not respond to emailed questions, and hung up when we called the number listed on its website. After being contacted by ProPublica, the company removed the statement from its website that it provides an alternative to paying hackers. It also changed “honest, free advice” to “simple free advice,” and the “hundreds” of ransomware cases it has handled to “many.”
[…]
documents show, Lairg wrote to Wosar’s victim email address, saying he was “pleased to confirm that we can recover your encrypted files” for $3,950 — four times as much as the agreed-upon ransom.
Eight of the world’s biggest technology service providers were hacked by Chinese cyber spies in an elaborate and years-long invasion, Reuters found. The invasion exploited weaknesses in those companies, their customers, and the Western system of technological defense.
[…]
The hacking campaign, known as “Cloud Hopper,” was the subject of a U.S. indictment in December that accused two Chinese nationals of identity theft and fraud. Prosecutors described an elaborate operation that victimized multiple Western companies but stopped short of naming them. A Reuters report at the time identified two: Hewlett Packard Enterprise and IBM.
Yet the campaign ensnared at least six more major technology firms, touching five of the world’s 10 biggest tech service providers.
Also compromised by Cloud Hopper, Reuters has found: Fujitsu, Tata Consultancy Services, NTT Data, Dimension Data, Computer Sciences Corporation and DXC Technology. HPE spun off its services arm in a merger with Computer Sciences Corporation in 2017 to create DXC.
Waves of hacking victims emanate from those six plus HPE and IBM: their clients. Ericsson, which competes with Chinese firms in the strategically critical mobile telecoms business, is one. Others include travel reservation system Sabre, the American leader in managing plane bookings, and the largest shipbuilder for the U.S. Navy, Huntington Ingalls Industries, which builds America’s nuclear submarines at a Virginia shipyard.
“This was the theft of industrial or commercial secrets for the purpose of advancing an economy,” said former Australian National Cyber Security Adviser Alastair MacGibbon. “The lifeblood of a company.”
[…]
The corporate and government response to the attacks was undermined as service providers withheld information from hacked clients, out of concern over legal liability and bad publicity, records and interviews show. That failure, intelligence officials say, calls into question Western institutions’ ability to share information in the way needed to defend against elaborate cyber invasions. Even now, many victims may not be aware they were hit.
The campaign also highlights the security vulnerabilities inherent in cloud computing, an increasingly popular practice in which companies contract with outside vendors for remote computer services and data storage.
[…]
For years, the company’s predecessor, technology giant Hewlett Packard, didn’t even know it had been hacked. It first found malicious code stored on a company server in 2012. The company called in outside experts, who found infections dating to at least January 2010.
Hewlett Packard security staff fought back, tracking the intruders, shoring up defenses and executing a carefully planned expulsion to simultaneously knock out all of the hackers’ known footholds. But the attackers returned, beginning a cycle that continued for at least five years.
The intruders stayed a step ahead. They would grab reams of data before planned eviction efforts by HP engineers. Repeatedly, they took whole directories of credentials, a brazen act netting them the ability to impersonate hundreds of employees.
The hackers knew exactly where to retrieve the most sensitive data and littered their code with expletives and taunts. One hacking tool contained the message “FUCK ANY AV” – referencing their victims’ reliance on anti-virus software. The name of a malicious domain used in the wider campaign appeared to mock U.S. intelligence: “nsa.mefound.com”
Then things got worse, documents show.
After a 2015 tip-off from the U.S. Federal Bureau of Investigation about infected computers communicating with an external server, HPE combined three probes it had underway into one effort called Tripleplay. Up to 122 HPE-managed systems and 102 systems designated to be spun out into the new DXC operation had been compromised, a late 2016 presentation to executives showed.
[…]
According to Western officials, the attackers were multiple Chinese government-backed hacking groups. The most feared was known as APT10 and directed by the Ministry of State Security, U.S. prosecutors say. National security experts say the Chinese intelligence service is comparable to the U.S. Central Intelligence Agency, capable of pursuing both electronic and human spying operations.
[…]
It’s impossible to say how many companies were breached through the service provider that originated as part of Hewlett Packard, then became Hewlett Packard Enterprise and is now known as DXC.
[…]
HP management only grudgingly allowed its own defenders the investigation access they needed and cautioned against telling Sabre everything, the former employees said. “Limiting knowledge to the customer was key,” one said. “It was incredibly frustrating. We had all these skills and capabilities to bring to bear, and we were just not allowed to do that.”
[…]
The threat also reached into the U.S. defense industry.
In early 2017, HPE analysts saw evidence that Huntington Ingalls Industries, a significant client and the largest U.S. military shipbuilder, had been penetrated by the Chinese hackers, two sources said. Computer systems owned by a subsidiary of Huntington Ingalls were connecting to a foreign server controlled by APT10.
During a private briefing with HPE staff, Huntington Ingalls executives voiced concern the hackers could have accessed data from its biggest operation, the Newport News, Va., shipyard where it builds nuclear-powered submarines, said a person familiar with the discussions. It’s not clear whether any data was stolen.
[…]
Like many Cloud Hopper victims, Ericsson could not always tell what data was being targeted. Sometimes, the attackers appeared to seek out project management information, such as schedules and timeframes. Another time they went after product manuals, some of which were already publicly available.
[…]
much of Cloud Hopper’s activity has been deliberately kept from public view, often at the urging of corporate victims.
In an effort to keep information under wraps, security staff at the affected managed service providers were often barred from speaking even to other employees not specifically added to the inquiries.
In 2016, HPE’s office of general counsel for global functions issued a memo about an investigation codenamed White Wolf. “Preserving confidentiality of this project and associated activity is critical,” the memo warned, stating without elaboration that the effort “is a sensitive matter.” Outside the project, it said, “do not share any information about White Wolf, its effect on HPE, or the activities HPE is taking.”
The secrecy was not unique to HPE. Even when the government alerted technology service providers, the companies would not always pass on warnings to clients, Jeanette Manfra, a senior cybersecurity official with the U.S. Department of Homeland Security, told Reuters.
Verizon sent a big chunk of the internet down a black hole this morning – and caused outages at Cloudflare, Facebook, Amazon, and others – after it wrongly accepted a network misconfiguration from a small ISP in Pennsylvania, USA.
For nearly three hours, web traffic that was supposed to go to some of the biggest names online was instead accidentally rerouted through a steel giant based in Pittsburgh.
It all started when new internet routes for more than 20,000 IP address prefixes – roughly two per cent of the internet – were wrongly announced by regional US ISP DQE Communications. This announcement informed the sprawling internet’s backbone equipment to thread netizens’ traffic through DQE and one of its clients, steel giant Allegheny Technologies, a redirection that was then, mindbogglingly, accepted and passed on to the world by Verizon, a trusted major authority on the internet’s highways and byways. This happened because Allegheny is also a customer of Verizon: it too announced the route changes to Verizon, which disseminated them further.
And so, systems around the planet were automatically updated, and connections destined for Facebook, Cloudflare, and others, ended up going through DQE and Allegheny, which buckled under the strain, causing traffic to disappear into a black hole.
Diagram showing how network routes were erroneously announced to Verizon via DQE and Allegheny. Source: Cloudflare
Internet engineers blamed a piece of automated networking software – a BGP optimizer built by Noction – that was used by DQE to improve its connectivity. And even though these kinds of misconfigurations happen every day, there is significant frustration and even disbelief that a US telco as large as Verizon would pass on this amount of incorrect routing information.
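The reason the leaked routes attracted traffic is that forwarding follows the longest (most specific) matching prefix, so a more-specific announcement beats the legitimate, shorter one. A small illustration with Python’s ipaddress module (the prefixes are illustrative, not the routes actually leaked):

```python
import ipaddress

# Illustrative prefixes only, not the routes actually leaked on the day.
routes = {
    ipaddress.ip_network("104.16.0.0/12"): "legitimate origin (e.g. the CDN itself)",
    ipaddress.ip_network("104.16.80.0/21"): "more-specific route leaked via the optimizer",
}

destination = ipaddress.ip_address("104.16.81.7")

# Routers forward on the longest (most specific) matching prefix.
matches = [net for net in routes if destination in net]
best = max(matches, key=lambda net: net.prefixlen)
print(f"{destination} is forwarded along {best}: {routes[best]}")
```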
During the social network’s heyday, multiple Myspace employees abused an internal company tool to spy on users, in some cases including ex-partners, Motherboard has learned.
Named ‘Overlord,’ the tool allowed employees to see users’ passwords and their messages, according to multiple former employees. While the tool was originally designed to help moderate the platform and allow MySpace to comply with law enforcement requests, multiple sources said the tool was used for illegitimate purposes by employees who accessed Myspace user data without authorization to do so.
“It was basically an entire backdoor to the Myspace platform,” one of the former employees said of Overlord. (Motherboard granted five former Myspace employees anonymity to discuss internal Myspace incidents.)
[…]
The existence and abuse of Overlord, which was not previously reported, shows that since the earliest days of social media, sensitive user data and communication has been vulnerable to employees of huge platforms. In some cases, user data has been maliciously accessed, a problem that companies like Facebook and Snapchat have also faced.
[…]
“Every company has it,” Hemanshu Nigam, who was Myspace’s Chief Security Officer from 2006 to 2010, said in a phone interview referring to such administration tools. “Whether it’s for dealing with abuse, or responding to law enforcement or civil requests, or for managing a user’s account because they’re raising some type of issue with it.”
[…]
Even though social media platforms may need a tool like this for legitimate law enforcement purposes, four former Myspace workers said the company fired employees for abusing Overlord.
“The tool was used to gain access to a boyfriend/girlfriend’s login credentials,” one of the sources added. A second source wasn’t sure if the abuse did target ex-partners, but said they assumed so.
“Myspace, the higher ups, were able to cross reference the specific policy enforcement agent with their friends on their Myspace page to see if they were looking up any of their contacts or ex-boyfriends/girlfriends,” that former employee said, explaining how Myspace could identify employees abusing their Overlord access.
[…]
“Misuse of user data will result in termination of employment,” the spokesperson wrote.
The Myspace spokesperson added that, today, access is limited to a “very small number of employees,” and that all access is logged and reviewed.
Several of the former employees emphasised the protections in place to mitigate against insider abuse.
“The account access would be searched to see which agents accessed the account. Managers would then take action. Unless the account was previously associated with a support case, that employee was terminated immediately. This was a zero tolerance policy,” one former employee, who worked in a management role, said.
Another former employee said Myspace “absolutely” warned employees about abusing Overlord.
“There were strict access controls; there was training before you were allowed to use the tools; there was also managerial monitoring of how tools were being used; and there was a strict no-second-chance policy, that if you did violate any of the capabilities given to you, you were removed from not only your position, but from the company completely,” Nigam, the former CSO, said.
Nonetheless, the former employees said the tool was still abused.
“Any tool that is written for a specific, very highly privileged purpose can be misused,” Wendy Nather, head of advisory chief information security officers at cybersecurity firm Duo, said in a phone call. “It’s the responsibility of the designer and the developer to put in controls when it’s being built to assume that it could be abused, and to put checks on that.”
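The cross-referencing described above amounts to joining the tool’s access log against each agent’s own social graph and flagging overlaps. A deliberately simplified sketch, with invented data structures and field names:

```python
from collections import defaultdict

# Hypothetical audit-log records: which support agent looked up which account.
audit_log = [
    {"agent": "agent_17", "target": "user_9021"},
    {"agent": "agent_17", "target": "user_3333"},
    {"agent": "agent_42", "target": "user_1234"},
]

# Hypothetical social-graph lookup: accounts connected to each agent's own profile.
agent_connections = {
    "agent_17": {"user_3333", "user_7777"},   # e.g. friends or ex-partners
    "agent_42": set(),
}

def flag_suspicious_lookups(log, connections):
    """Return lookups where an agent accessed an account they are connected to."""
    flagged = defaultdict(list)
    for entry in log:
        agent, target = entry["agent"], entry["target"]
        if target in connections.get(agent, set()):
            flagged[agent].append(target)
    return dict(flagged)

print(flag_suspicious_lookups(audit_log, agent_connections))
# {'agent_17': ['user_3333']} -> escalate for manual review
```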
[…]
Several tech giants and social media platforms have faced their own malicious employee issues. Motherboard previously reported Facebook has fired multiple employees for abusing their data access, including one as recently as last year. Last month, Motherboard revealed Snapchat employees abused their own access to spy on users, and described an internal tool called SnapLion. That tool was also designed to respond to legitimate law enforcement requests before being abused.
A MongoDB database was left open on the internet without a password, and by doing so, exposed the personal details and prescription information for more than 78,000 US patients.
The leaky database was discovered by the security team at vpnMentor, led by Noam Rotem and Ran Locar, who shared their findings exclusively with ZDNet earlier this week.
The database contained information on 391,649 prescriptions for a drug named Vascepa, which is used for lowering triglycerides (fats) in adults who are on a low-fat and low-cholesterol diet.
Additionally, the database also contained the collective information of over 78,000 patients who were prescribed Vascepa in the past.
Leaked information included patient data such as full names, addresses, cell phone numbers, and email addresses, but also prescription info such as prescribing doctor, pharmacy information, NPI number (National Provider Identifier), NABP E-Profile Number (National Association of Boards of Pharmacy), and more.
According to the vpnMentor team, all the prescription records were tagged as originating from PSKW, the legal name of a company that provides patient and provider messaging, co-pay, and assistance programs for healthcare organizations via a service named ConnectiveRx.
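The kind of check that reveals an installation like this is, again, depressingly simple: if a MongoDB server answers on its default port without credentials, everything can be enumerated. A sketch using the third-party pymongo driver (the address is a placeholder):

```python
from pymongo import MongoClient  # third-party: pip install pymongo
from pymongo.errors import PyMongoError

# Placeholder address; MongoDB's default port is 27017.
HOST = "203.0.113.20"

try:
    client = MongoClient(HOST, 27017, serverSelectionTimeoutMS=3000)
    # If no authentication is required, listing databases simply works.
    for name in client.list_database_names():
        db = client[name]
        for coll in db.list_collection_names():
            print(name, coll, db[coll].estimated_document_count())
except PyMongoError as exc:
    print("Not openly accessible:", exc)
```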
Even as Homeland Security officials have attempted to downplay the impact of a security intrusion that reached deep into the network of a federal surveillance contractor, secret documents, handbooks, and slides concerning surveillance technology deployed along U.S. borders are being widely and openly shared online.
A terabyte of torrents seeded by Distributed Denial of Secrets (DDOS)—journalists dispersing records that governments and corporations would rather nobody read—are as of writing being downloaded daily. As of this week, that includes more than 400 GB of data stolen by an unknown actor from Perceptics, a discreet contractor based in Knoxville, Tennessee, that works for Customs and Border Protection (CBP) and is, regardless of whatever U.S. officials say, right now the epicenter of a major U.S. government data breach.
The files include powerpoint presentations, manuals, marketing materials, budgets, equipment lists, schematics, passwords, and other documents detailing Perceptics’ work for CBP and other government agencies for nearly a decade. Tens of thousands of surveillance photographs taken of travelers and their vehicles at the U.S. border are among the first tranches of data to be released. Reporters are digging through the dump and already expanding our understanding of the enormous surveillance apparatus that is being erected on our border.
In a statement last week, CBP insisted that none of the image data had been identified online, even as one headline declared, “Here Are Images of Drivers Hacked From a U.S. Border Protection Contractor.”
“The breach covers a huge amount of data which has, until now, been protected by dozens of Non-Disclosure Agreements and the (b)(4) trade-secrets exemption which Perceptics has demanded DHS apply to all Perceptics information,” DDOS team member Emma Best, who often reports for the Freedom of Information site MuckRock, told Gizmodo.
Despite the government’s attempt to downplay the breach, the Perceptics files, she said, “include schematics, plans, and reports for DHS, the DEA, and the Pentagon as well as foreign clients.”
While the files can be viewed online, according to Best, DDOS has experienced nearly a 50 percent spike in traffic from users who’ve opted to download the entire dataset.
“We’re making these files available for public review because they provide an unprecedented and intimate look at the mass surveillance of legal travel, as well as more local surveillance of turnpike and secure facilities,” Best said. “Most importantly they provide a glimpse of how the government and these companies protect our information—or, in some cases, how they fail to.”
Neither CBP nor Perceptics immediately responded to a request for comment.
Millions of PCs made by Dell and other OEMs are vulnerable to a flaw stemming from a component in pre-installed SupportAssist software. The flaw could enable a remote attacker to completely take over affected devices.
The high-severity vulnerability (CVE-2019-12280) stems from a component in SupportAssist, a proactive monitoring software pre-installed on PCs with automatic failure detection and notifications for Dell devices. That component is made by a company called PC-Doctor, which develops hardware-diagnostic software for various PC and laptop original equipment manufacturers (OEMs).
“According to Dell’s website, SupportAssist is preinstalled on most of Dell devices running Windows, which means that as long as the software is not patched, this vulnerability probably affects many Dell users,” Peleg Hadar, security researcher with SafeBreach Labs – who discovered the flaw – said in a Friday analysis.
Google Calendar was down for users around the world for nearly three hours earlier today. Calendar users trying to access the service were met with a 404 error message through a browser from around 10AM ET until around 12:40PM ET. Google’s Calendar service dashboard now reveals that issues should be resolved for everyone within the next hour.
“We expect to resolve the problem affecting a majority of users of Google Calendar at 6/18/19, 1:40 PM,” the message reads. “Please note that this time frame is an estimate and may change.” Google Calendar appears to have returned for most users, though. Other Google services such as Gmail and Google Maps appeared to be unaffected during the calendar outage, although Hangouts Meet was reportedly experiencing some difficulties.
Google Calendar is currently experiencing a service disruption. Please stay tuned for updates or follow here: https://t.co/2SGW3X1cQn
Google Calendar’s issues come in the same month as another massive Google outage which saw YouTube, Gmail, and Snapchat taken offline because of problems with the company’s overall Cloud service. At the time, Google blamed “high levels of network congestion in the eastern USA” for the issues.
The outage also came just over an hour after Google’s G Suite Twitter account sent out a tweet promoting Google Calendar’s ability to make scheduling simpler.
As far back as 2015, major companies like Sony and Intel have sought to crowdsource efforts to secure their systems and applications through the San Francisco startup HackerOne. Through the “bug bounty” program offered by the company, hackers once viewed as a nuisance—or worse, as criminals—can identify security vulnerabilities and get paid for their work.
On Tuesday, HackerOne published a wealth of anonymized data to underscore not only the breadth of its own program but also to highlight the leading types of bugs discovered by its virtual army of hackers who’ve reaped financial rewards through the program. Some $29 million has been paid out so far for the top 10 most rewarded types of security weakness alone, according to the company.
HackerOne markets the bounty program as a means to safely mimic an authentic kind of global threat. “It’s one of the best defenses you can have against what you’re actually protecting against,” said Miju Han, HackerOne’s director of product management. “There are a lot of security tools out there that have theoretical risks—and we definitely endorse those tools as well. But what we really have in bug bounty programs is a real-world security risk.”
The program, of course, has its own limitations. Participants have the ability to define the scope of engagement and in some cases—as with the U.S. Defense Department, a “hackable target”—place limits on which systems and methods are authorized under the program. Criminal hackers and foreign adversaries are, of course, not bound by such rules.
“Bug bounties can be a helpful tool if you’ve already invested in your own security prevention and detection,” said Katie Moussouris, CEO of Luta Security, “in terms of secure development if you publish code, or secure vulnerability management if your organization is mostly just trying to keep up with patching existing infrastructure.”
“It isn’t suitable to replace your own preventative measures, nor can it replace penetration testing,” she said.
Not surprisingly, HackerOne’s data shows that cross-site scripting (XSS) attacks—in which malicious scripts are injected into otherwise trusted sites—remain by far the top vulnerability reported through the program. Of the top 10 types of bugs reported, XSS makes up 27 percent. No other type of bug comes close. Through HackerOne, some $7.7 million has been paid out to address XSS vulnerabilities alone.
Cloud migration has also led to a rise in exploits such as server-side request forgery (SSRF). “The attacker can supply or modify a URL which the code running on the server will read or submit data to, and by carefully selecting the URLs, the attacker may be able to read server configuration such as AWS metadata, connect to internal services like http-enabled databases or perform post requests towards internal services which are not intended to be exposed,” HackerOne said.
Currently, SSRF makes up only 5.9 percent of the top bugs reported. Nevertheless, the company says, these server-side exploits are trending upward as more and more companies find homes in the cloud.
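As a rough illustration of the pattern described above, the sketch below shows a server-side fetch endpoint that will happily retrieve internal URLs such as the cloud metadata address, alongside a basic mitigation. It uses only the Python standard library; the function names and filtering approach are assumptions for illustration, not HackerOne’s guidance.

```python
# SSRF sketch: a naive URL fetcher versus one that rejects internal targets.
import ipaddress
import socket
from urllib.parse import urlparse
from urllib.request import urlopen

def fetch_url_vulnerable(url: str) -> bytes:
    # Vulnerable: fetches whatever URL the client supplies, so an attacker can
    # request http://169.254.169.254/latest/meta-data/ (cloud metadata) or
    # other internal-only services reachable from the server.
    return urlopen(url).read()

def is_internal(host: str) -> bool:
    # Resolve the hostname (IPv4 only, for brevity) and reject private,
    # loopback, and link-local ranges; 169.254.0.0/16 covers metadata services.
    addr = ipaddress.ip_address(socket.gethostbyname(host))
    return addr.is_private or addr.is_loopback or addr.is_link_local

def fetch_url_safer(url: str) -> bytes:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        raise ValueError("refusing non-HTTP URL")
    if is_internal(parsed.hostname):
        raise ValueError("refusing to fetch an internal address")
    return urlopen(url).read()
```

Even the safer variant is only a sketch: a production filter would also need to pin the resolved IP for the actual request to avoid DNS-rebinding tricks, among other checks.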
Other top bounties include a range of code injection exploits or misconfigurations that allow improper access to systems that should be locked down. Companies have paid out over $1.5 million alone to address improper access control.
“Companies that pay more for bounties are definitely more attractive to hackers, especially more attractive to top hackers,” Han said. “But we know that bounties paid out are not the only motivation. Hackers like to hack companies that they like using, or that are located in their country.” In other words, just because a company spends more money paying hackers to find bugs doesn’t necessarily mean it is more secure.
“Another factor is how fast a company is changing,” she said. “If a company is developing very rapidly and expanding and growing, even if they pay a lot of bounties, if they’re changing up their code base a lot, then that means they are not necessarily as secure.”
According to an article this year in TechRepublic, some 300,000 hackers are currently signed up with HackerOne; though only 1-in-10 have reportedly claimed a bounty. The best of them, a group of roughly 100 hackers, have earned over $100,000. Only a couple of elite hackers have attained the highest-paying ranks of the program, reaping rewards close to, or in excess of, $1 million.
View a full breakdown of HackerOne’s “most impactful and rewarded” vulnerability types here.
The well-known and respected data breach notification website “Have I Been Pwned” is up for sale.
Troy Hunt, its founder and sole operator, announced the sale on Tuesday in a blog post where he explained why the time has come for Have I Been Pwned to become part of something bigger and more organized.
“To date, every line of code, every configuration and every breached record has been handled by me alone. There is no ‘HIBP team’, there’s one guy keeping the whole thing afloat,” Hunt wrote. “It’s time for HIBP to grow up. It’s time to go from that one guy doing what he can in his available time to a better-resourced and better-funded structure that’s able to do way more than what I ever could on my own.”
Over the years, Have I Been Pwned has become the de facto repository for data breaches on the internet, a place where users can search for their email address and see whether they have been part of a data breach. It’s now also a service people can sign up for to get notified whenever their accounts are caught up in a new breach. It’s perhaps the most useful free cybersecurity service in the world.
On June 6, more than 70,000 BGP routes were leaked from Swiss colocation company Safe Host to China Telecom in Frankfurt, Germany, which then announced them on the global internet. This resulted in a massive rerouting of internet traffic via China Telecom systems in Europe, disrupting connectivity for netizens: a lot of data that should have gone to European cellular networks was instead piped to China Telecom-controlled boxes.
BGP leaks are common – they happen every hour of every day – though the size of this one and particularly the fact it lasted for two hours, rather than seconds or minutes, has prompted more calls for ISPs to join an industry program that adds security checks to the routing system.
The fact that China Telecom, which peers with Safe Host, was again at the center of the problem – with traffic destined for European netizens routed through its network – has also made internet engineers suspicious, although they have been careful not to make any accusations without evidence.
“China Telecom, a major international carrier, has still implemented neither the basic routing safeguards necessary both to prevent propagation of routing leaks nor the processes and procedures necessary to detect and remediate them in a timely manner when they inevitably occur,” noted Oracle Internet Intelligence’s (OII) director of internet analysis Doug Madory in a report. “Two hours is a long time for a routing leak of this magnitude to stay in circulation, degrading global communications.”
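The safeguards Madory refers to largely boil down to route filtering and route origin validation: checking that an announced prefix is actually authorized to be originated by the AS announcing it. The toy sketch below shows that basic idea against a table of Route Origin Authorizations; the ROA entries, ASNs, and prefixes are invented documentation values, not real routing data.

```python
# Simplified route-origin-validation sketch with invented ROA data.
import ipaddress

# Each ROA: (authorized prefix, maximum prefix length, authorized origin ASN).
ROAS = [
    (ipaddress.ip_network("192.0.2.0/24"), 24, 64500),
    (ipaddress.ip_network("198.51.100.0/22"), 24, 64501),
]

def validate(prefix: str, origin_asn: int) -> str:
    announced = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_len, asn in ROAS:
        if announced.subnet_of(roa_prefix):
            covered = True
            if asn == origin_asn and announced.prefixlen <= max_len:
                return "valid"
    # Covered by a ROA but announced from the wrong origin (or too specific):
    # the kind of announcement a leak produces and validating routers can drop.
    return "invalid" if covered else "not_found"

print(validate("192.0.2.0/24", 64500))    # valid
print(validate("192.0.2.0/24", 64666))    # invalid: unexpected origin AS
print(validate("203.0.113.0/24", 64500))  # not_found: no covering ROA
```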
A team at network security outfit vpnMentor was scanning cyber-space as part of a web-mapping project when they happened upon a Graylog management server belonging to Tech Data that had been left freely accessible to the public. Within that database, we’re told, was a 264GB cache of information including emails, payment and credit card details, and unencrypted usernames and passwords. Pretty much everything you need to ruin someone’s day (or year).
The exposure, vpnMentor told The Register today, is particularly bad due to the nature of Tech Data’s customers. The Fortune 500 distie provides everything from financing and marketing services to IT management and user training courses. Among the clients listed on its site are Apple, Symantec, and Cisco.
“This is a serious leak as far as we can see, so much so that all of the credentials needed to log in to customer accounts are available,” a spokesperson for vpnMentor told El Reg. “Because of the size of the database, we could not go through all of it and there may be more sensitive information available to the public than what we have disclosed here.”
In addition to the login credentials and card information, the researchers said they were able to find private API keys and logs in the database, as well as customer profiles that included full names, job titles, phone numbers, and email and postal addresses. All available to anyone who could find it.
vpnMentor says it discovered and reported the open database on June 2 to Tech Data, and by June 4 the distie had told the team it had secured the database and hidden it from public view. Tech Data did not respond to a request for comment from The Register. The US-based company did not mention the incident in its most recent SEC filings.
Google suffered major outages with its Cloud Platform on Sunday, causing widespread access issues with both its own services and third party apps ranging from Snapchat to Discord.
As of early Sunday evening, issues had persisted for hours; according to the Google Cloud Status Dashboard, the outages began at roughly 3:25 p.m. ET and were related to “high levels of network congestion in the eastern USA.” Outage-tracking service Down Detector indicated that access to YouTube was severely disrupted across the country, with the northeastern U.S. particularly having a rough go of it. Finally, the G Suite Status Dashboard listed virtually every one of its cloud-based productivity and collaboration tools—including Gmail, Drive, Docs, Hangouts, and Voice—as experiencing service outages. Amazingly enough, largely defunct social network Google+ was listed as experiencing no issues.
As The Verge noted, third-party services Discord, Snapchat, and Vimeo all use Google Cloud in their backends, with the outages preventing users from logging in. (However, issues were far from universal, with some users reporting no impact at all.)
All current versions of Docker have a vulnerability that can allow an attacker to gain read-write access to any path on the host server. The weakness is the result of a race condition in the Docker software, and while a fix is in the works, it has not yet been integrated.
The bug is the result of the way Docker handles some symbolic links, which are special files that point to other files or directories. Researcher Aleksa Sarai discovered that, in some situations, an attacker can insert their own symlink into a path during the short window between the time the path is resolved and the time it is operated on. This is a variant of the time-of-check to time-of-use (TOCTOU) problem, in this case affecting the “docker cp” command, which copies files to and from containers.
“The basic premise of this attack is that FollowSymlinkInScope suffers from a fairly fundamental TOCTOU attack. The purpose of FollowSymlinkInScope is to take a given path and safely resolve it as though the process was inside the container. After the full path has been resolved, the resolved path is passed around a bit and then operated on a bit later (in the case of ‘docker cp’ it is opened when creating the archive that is streamed to the client),” Sarai said in his advisory on the problem.
“If an attacker can add a symlink component to the path after the resolution but before it is operated on, then you could end up resolving the symlink path component on the host as root. In the case of ‘docker cp’ this gives you read and write access to any path on the host.”
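To make the race concrete, here is a toy Python sketch of the check-then-use gap described in the advisory. It is emphatically not Docker’s actual Go code; the container root, paths, and function names are invented for illustration.

```python
# Toy TOCTOU illustration: resolve a path while it looks safe, then use it
# after the filesystem may have changed. All paths here are hypothetical.
import os
import time

CONTAINER_ROOT = "/var/lib/fake-container/rootfs"

def copy_out(requested: str) -> bytes:
    # 1. "Check": resolve symlinks so the path appears to stay inside the
    #    container root, much as FollowSymlinkInScope is meant to do.
    resolved = os.path.realpath(os.path.join(CONTAINER_ROOT, requested))
    if not resolved.startswith(CONTAINER_ROOT + os.sep):
        raise PermissionError("path escapes the container root")

    # 2. Race window: before the path is used, a process inside the container
    #    can swap a directory component of `resolved` for a symlink that
    #    points at a host path such as "/".
    time.sleep(0.001)  # exaggerated stand-in for the real, tiny window

    # 3. "Use": the daemon trusts the earlier resolution and opens the path
    #    as root, so the kernel follows the freshly planted symlink onto the
    #    host filesystem.
    with open(resolved, "rb") as f:
        return f.read()
```

Broadly, TOCTOU issues of this kind are addressed either by preventing the filesystem from changing between the check and the use, or by resolving and opening paths in a way that stays rooted inside the container rather than trusting a previously resolved string.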
Sarai notified the Docker security team about the vulnerability and, after talks with them, the two parties agreed that public disclosure of the issue was legitimate, even without a fix available, in order to make customers aware of the problem. Sarai said researchers were aware that this kind of attack might be possible against Docker for a couple of years. He developed exploit code for the vulnerability and said that a potential attack scenario could come through a cloud platform.
“The most likely case for this particular vector would be a managed cloud which let you (for instance) copy configuration files into a running container or read files from within the container (through ‘docker cp’),” Sarai said via email.
“However it should be noted that while this vulnerability only has exploit code for “docker cp”, that’s because it’s the most obvious endpoint for me to exploit. There is a more fundamental issue here — it’s simply not safe to take a path, expand all the symlinks within it, and assume that path is safe to use.”
In a series of emails seen by ZDNet that the company sent out to impacted users, Flipboard said hackers gained access to databases the company was using to store customer information.
Most passwords are secure
Flipboard said these databases stored information such as Flipboard usernames, hashed and uniquely salted passwords, and in some cases, emails or digital tokens that linked Flipboard profiles to accounts on third-party services.
The good news appears to be that the vast majority of passwords were hashed with a strong password-hashing algorithm named bcrypt, currently considered very hard to crack.
The company said that some passwords were hashed with the weaker SHA-1 algorithm, but noted that these were relatively few.
“If users created or changed their password after March 14, 2012, it is hashed with a function called bcrypt. If users have not changed their password since then, it is uniquely salted and hashed with SHA-1,” Flipboard said.
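For context on why that distinction matters, here is a small sketch contrasting the two schemes, using Python’s hashlib and the third-party bcrypt package. The parameters and example password are illustrative and say nothing about Flipboard’s actual configuration.

```python
# Salted SHA-1 versus bcrypt for password storage (illustrative values only).
import hashlib
import os

import bcrypt  # third-party package: pip install bcrypt

password = b"correct horse battery staple"

# Unique-salted SHA-1: a single fast hash call, so an attacker who steals the
# salt and digest can still test enormous numbers of guesses per second.
salt = os.urandom(16)
sha1_digest = hashlib.sha1(salt + password).hexdigest()

# bcrypt: the work factor makes each guess deliberately slow, and the salt and
# cost parameter are stored inside the resulting hash itself.
bcrypt_hash = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))
assert bcrypt.checkpw(password, bcrypt_hash)
```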
[…]
In its email, Flipboard said it is now resetting all customer passwords out of an abundance of caution, regardless of whether users were impacted.
Furthermore, the company has already replaced all digital tokens that customers used to connect Flipboard with third-party services like Facebook, Twitter, Google, and Samsung.
“We have not found any evidence the unauthorized person accessed third-party account(s) connected to your Flipboard accounts,” the company said.
Extensive breach
But despite some good news for users, the breach appears to be quite extensive, at least for the company’s IT staff.
According to Flipboard, hackers had access to its internal systems for almost nine months, first between June 2, 2018, and March 23, 2019, and then for a second time between April 21 and April 22, 2019.
The company said it detected the breach the day after this second intrusion, on April 23, while investigating suspicious activity on its database network.