ThinkPad keyboards have a fiercely loyal following, and for $100 you can keep using the design that time forgot with this detached wireless version that will work with any other laptop or computer.
The ThinkPad TrackPoint Keyboard II is now available on Lenovo’s website, and it looks like a piece of hardware that dates back over 25 years to the early ‘90s. In 1992, IBM, the company that created the ThinkPad laptop, introduced the TrackPoint, a small rubber nub embedded in the middle of the keyboard used to move the cursor around. Some users hated it, but more than enough loved it for Lenovo (which purchased IBM’s PC division in 2005) to continue offering the TrackPoint on its current laptop lineup, alongside a touchpad.
But you won’t find a touchpad on the ThinkPad TrackPoint Keyboard II—it’s TrackPoint only, with a trio of mouse buttons located just below the space bar. There’s nothing stopping you from using a mouse alongside it, but the small nub means you can still navigate a cursor-driven user interface if you don’t have a lot of desk space at your disposal or you’re using the keyboard on your lap.
It connects to other devices using an included wireless USB dongle or Bluetooth, meaning it can be used with mobile devices as well. But unlike previous versions, it can’t be tethered to another device with a cord. Its USB-C port is used for charging only, which really only has to be done about every two months, depending on usage. Keyboard snobs might still want to pass on this one, however, because hidden beneath the contoured chiclet-style keys you’ll find scissor switches instead of more complex mechanical switches.
Oh look, here’s another cautionary tale about buying cloud-based IoT kit. On 29 May, global peripheral giant Belkin will flick the “off” switch on its Wemo NetCam IP cameras, turning the popular security devices into paperweights.
It’s not unusual for a manufacturer to call time on physical hardware. Like software, hardware has a lifespan, after which the vendor deems continued support not economically viable.
But this is a little different, because Belkin isn’t merely ending support. It also plans to decommission the cloud services required for its Wemo NetCam devices to actually work.
“Although your Wemo NetCam will still connect to your Wi-Fi network, without these servers you will not be able to view the video feed or access the security features of your Wemo NetCam, such as Motion Clips and Motion Notifications,” Belkin said on its official website.
“If you use your Wemo NetCam as a motion sensor for your Wemo line of products, it will no longer provide this functionality and will be removed as an option from your Wemo app,” the company added.
Adding insult to injury, the ubiquitous consumer network gear maker only plans to refund customers with active warranties, which excludes anyone who bought their device more than two years ago. The window to submit requests is open from now until 30 June.
Apple has agreed to settle a class-action lawsuit brought by folks upset the iGiant broke FaceTime overnight on millions of iPhones. The settlement amounts to a few bucks a device, meaning the Cupertino giant almost certainly made a net profit in the process.
This week the Tim Cook-led corporation said it would pay $18m [PDF] into a fund to compensate the estimated 3.6 million people living in California for whom the video-conferencing app suddenly stopped working on their iOS 6 smartphones in April 2014.
The $18m sum is a third of the fair compensation the lawsuit’s claimants had calculated. Apple had made it plain it would aggressively fight the case for years, though, and so a decision was taken to settle for a lower sum. After all, Apple has spent more than a decade battling a separate legal claim that ultimately led to the FaceTime breakage, and is still firing away even after the US Supreme Court snubbed it.
About half of the settlement money will foot lawyers’ bills and pay a company to disburse tiny checks to people, possibly as low as $2.44 to $3 per Californian, depending on how many claim. If there is any good news, it’s the fact those eligible won’t have to apply for it, but should receive e-checks to their email addresses: Apple estimates that it has the details for 90 per cent of those eligible, and we suspect the remaining 10 per cent won’t bother to collect.
The two people who brought the case, Christina Grace and Ken Potter, had four in-person mediation sessions and spent three years and countless hours trying to drag compensation out of Apple for killing FaceTime. They will get $7,500 apiece.
Meanwhile, the lawyers – Steyer Lowenthal Boodrookas and Smith in San Francisco and Pearson, Smith and Warshaw in Los Angeles – will get up to $7.9m, and the check disbursement company Epiq Systems will get $1.4m. No surprises there.
Apple changed the way FaceTime worked in 2014 because a court found the software infringed VirnetX’s patents, and Apple had been ordered to pay $368m. FaceTime was revised to avoid those patents, and a new version was pushed out in an operating system update, iOS 7.
Go slow
However, millions of iPhone owners chose not to update their smartphones because iOS 7 was resource hungry and slowed down their handsets, so they stayed on iOS 6. Before iOS 7 was released, in order to stop infringing VirnetX’s patents, Apple had abandoned its peer-to-peer technique for routing calls and instead put some FaceTime calls through a relay run by Akamai. But that relay cost Apple money.
And so, after iOS 7 was released, Apple let a digital certificate expire that killed FaceTime for anyone using iOS version 6 or lower, and thus there was no longer a need to operate and pay for the relay. Everyone was expected to upgrade to the non-infringing FaceTime in iOS 7, which didn’t need Akamai’s system.
Apple claimed at the time this sudden loss of connectivity was a “bug,” and that users should upgrade to iOS 7 to fix the knackered chat app. But internal documents suggest that Apple knowingly broke FaceTime because it was costing it money. “Our users on [iOS 6] are basically screwed,” an Apple engineer noted in an internal email quoted in the lawsuit.
Zoom has admitted it doesn’t have 300 million daily active users. The admission came after The Verge noticed the company had quietly edited a blog post making the claim earlier this month. Zoom originally stated it had “more than 300 million daily users” and that “more than 300 million people around the world are using Zoom during this challenging time.” Zoom later deleted these references from the original blog post, and now claims “300 million daily Zoom meeting participants.”
The difference between a daily active user (DAU) and “meeting participant” is significant. Daily meeting participants can be counted multiple times: if you have five Zoom meetings in a day then you’re counted five times. A DAU is counted once per day, and is commonly used by companies to measure service usage. Only counting meeting participants is an easy, somewhat misleading, way to make your platform usage seem larger than it is.
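The gap between the two metrics is easy to see with a toy example; here is a minimal sketch with made-up names (not Zoom’s actual data):

```python
# Each entry is one person joining one meeting on one day.
meeting_log = [
    ("alice", "2020-04-22"),
    ("alice", "2020-04-22"),  # alice's second meeting that day
    ("alice", "2020-04-22"),  # ...and her third
    ("bob",   "2020-04-22"),
]

# "Daily meeting participants" counts every join...
daily_participants = len(meeting_log)        # 4

# ...while DAU counts each person at most once per day.
daily_active_users = len(set(meeting_log))   # 2
```

Counted the first way, a single heavy user jumping between back-to-back meetings inflates the headline figure without adding a single extra user.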
The misleading blog post was edited on April 24th, a day after the numbers made headlines worldwide. After The Verge reached out to Zoom for comment, the company added a note to the post admitting the error yesterday, and provided the following statement:
“We are humbled and proud to help over 300 million daily meeting participants stay connected during this pandemic. In a blog post on April 22, we unintentionally referred to these participants as “users” and “people.” When we realized this error, we adjusted the wording to “participants.” This was a genuine oversight on our part.”
Zoom’s growth has been impressive, but the company has not actually provided a daily active user count. Zoom usage has soared from 10 million daily meeting participants back in December to 300 million this month. Rivals like Microsoft Teams and Google Meet appear to be closing the gap, though. Microsoft said yesterday it now has 75 million daily active users of Teams, a 70 percent jump in a month. Microsoft also recorded 200 million meeting participants in a single day this month.
Google Meet is adding roughly 3 million new users each day, and hit over 100 million daily Meet meeting participants recently. Cisco also revealed earlier this month that it has a total of 300 million Webex users, and saw sign-ups close to 240,000 in a 24-hour period. Cisco has not yet provided daily meeting participant numbers, or daily active user counts.
Google, Microsoft, Facebook, and others are still chasing Zoom with new features and free services. Google made its Meet service free this week, and both Microsoft and Google have increased how many people you can see simultaneously in response to Zoom’s popular gallery view.
When he looked around the Web on the device’s default Xiaomi browser, it recorded all the websites he visited, including search engine queries whether with Google or the privacy-focused DuckDuckGo, and every item viewed on a news feed feature of the Xiaomi software. That tracking appeared to be happening even if he used the supposedly private “incognito” mode.
The device was also recording what folders he opened and to which screens he swiped, including the status bar and the settings page. All of the data was being packaged up and sent to remote servers in Singapore and Russia, though the Web domains they hosted were registered in Beijing.
Meanwhile, at Forbes’ request, cybersecurity researcher Andrew Tierney investigated further. He also found browsers shipped by Xiaomi on Google Play—Mi Browser Pro and the Mint Browser—were collecting the same data. Together, they have more than 15 million downloads, according to Google Play statistics.
[…]
And there appear to be issues with how Xiaomi is transferring the data to its servers. Though the Chinese company claimed the data was being encrypted when transferred in an attempt to protect user privacy, Cirlig found he was able to quickly see just what was being taken from his device: a chunk of the information was merely obscured with base64, an easily reversible encoding rather than a form of encryption. It took Cirlig just a few seconds to turn the garbled data into readable chunks of information.
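Base64 is an encoding, not encryption, so reversing it is a one-liner. A minimal sketch with a made-up payload (the real field names in Xiaomi’s telemetry aren’t public):

```python
import base64

# Hypothetical analytics record, base64'd the way Cirlig described.
wire_data = base64.b64encode(b'{"url": "https://duckduckgo.com/?q=secret"}')

# Anyone who captures the traffic can undo the "protection" instantly.
plaintext = base64.b64decode(wire_data)
print(plaintext.decode())  # {"url": "https://duckduckgo.com/?q=secret"}
```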
“My main concern for privacy is that the data sent to their servers can be very easily correlated with a specific user,” warned Cirlig.
[…]
But, as pointed out by Cirlig and Tierney, it wasn’t just the website or Web search that was sent to the server. Xiaomi was also collecting data about the phone, including unique numbers for identifying the specific device and Android version. Cirlig said such “metadata” could “easily be correlated with an actual human behind the screen.”
Xiaomi’s spokesperson also denied that browsing data was being recorded under incognito mode. Both Cirlig and Tierney, however, found in their independent tests that their web habits were sent off to remote servers regardless of what mode the browser was set to, providing both photos and videos as proof.
[…]
Both Cirlig and Tierney said Xiaomi’s behavior was more invasive than other browsers like Google Chrome or Apple Safari. “It’s a lot worse than any of the mainstream browsers I have seen,” Tierney said. “Many of them take analytics, but it’s about usage and crashing. Taking browser behavior, including URLs, without explicit consent and in private browsing mode, is about as bad as it gets.”
[…]
Cirlig also suspected that his app use was being monitored by Xiaomi, as every time he opened an app, a chunk of information would be sent to a remote server. Another researcher who’d tested Xiaomi devices, though bound by an NDA from discussing the matter openly, said he’d seen the manufacturer’s phones collect such data. Xiaomi didn’t respond to questions on that issue.
[…]
Late in his research, Cirlig also discovered that Xiaomi’s music player app on his phone was collecting information on his listening habits: what songs were played and when.
It’s a bit of a puff piece, as American software also records all this data and sends it home. The article also seems to suggest that the whole phone is constantly sending data home, but it only really covers the browser and a music player app. So yes, you should have installed Firefox and used that as your browser as soon as you got the phone, but that goes for any phone that ships with Safari or Chrome too. A bit of an anti-Chinese storm in a teacup.
ICANN has vetoed the proposed $1.1bn sale of the .org registry to a little-known private equity firm, saying this was “the right thing to do.”
The DNS overseer has been under growing pressure to use its authority to refuse the planned transfer of the top-level domain from the Internet Society to Ethos Capital, most recently from the California Attorney General who said the deal “puts profits above the public interest.”
ICANN ultimately bowed to the US state’s top lawyer when it concluded today it “finds the public interest is better served in withholding consent.”
It gave several factors, all of which were highlighted by Attorney General Xavier Becerra as reasons to reject it: the fact that the sale would see the registry – which has long served non-profit organizations – turn from a non-profit itself into a for-profit vehicle; that Ethos Capital was a “wholly different form of entity” to the Internet Society; that the $360m in debt that was being used to finance the deal “raises further question about how the .org registrants will be protected”; and that the measures that Ethos Capital had put in place following an outcry were “untested.”
The decision will likely spark a mixture of relief and celebration from millions of .org domain holders, including some of the world’s largest non-profit organizations, many of which were certain that their long-standing online addresses were going to be milked for profit by an organization that never fully revealed who its directors or investors were.
Netsweeper’s internet filter has a nasty security vulnerability that can be exploited to hijack the host server and tamper with lists of blocked websites. There are no known fixes right now.
For those unfamiliar, Netsweeper makes software that monitors and blocks connections to undesirable websites and servers. It’s aimed at parents, schools, government offices, and companies. It has a lot of customers in the Middle East, where it’s used to prevent access to content not meant for the local populace, according to investigative Canadian non-profit Citizen Lab.
The flaw, yet to be given a CVE number, was discovered by an anonymous researcher, and documented this week by SecuriTeam Secure Disclosure team leader Noam Rathaus. The bug is present in the web-based Netsweeper administration tool versions 6.4.3 and earlier. It doesn’t require any authentication to exploit: if you can reach the software over the local network or public internet, you can compromise it.
What Rathaus’s source found was that the control panel’s login script, /webadmin/tools/unixlogin.php, fails to fully sanitize user-supplied data, allowing miscreants to commandeer the machine. The login script accepts three parameters: timeout, login, and password. If you set the HTTP request referer header to a specific string, such as webadmin/admin/service_manager_data.php, the login script will execute a shell script that ultimately uses the password parameter unsafely in a Python invocation.
The second parameter, $2, of the wonky shell script is derived from the original user-supplied password and ends up inside a Python one-liner. By smuggling quote characters and code into the password field, you can inject and execute a command that stores the Netsweeper software’s user ID to the file /tmp/pwnd. It’s left as an exercise for the reader to turn this remote-code execution into something malicious.
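The advisory’s exact shell line isn’t reproduced here, but the class of bug is classic command injection. A hypothetical sketch (not Netsweeper’s actual code) of user input being pasted, unsanitized, into a Python one-liner:

```python
def build_login_cmd(password: str) -> str:
    # Hypothetical stand-in for the wonky shell script: $2 (the password)
    # lands inside the quotes of a python -c invocation with no escaping.
    return "python -c \"import crypt; print(crypt.crypt('%s', 'aa'))\"" % password

# A benign password yields the intended command...
print(build_login_cmd("hunter2"))

# ...but a crafted one closes the string literal and appends attacker code,
# e.g. recording the process's user ID in /tmp/pwnd.
payload = "x'); import os; os.system('id > /tmp/pwnd'); ('"
print(build_login_cmd(payload))
```

Anything that concatenates untrusted input into a command string like this is exploitable; the fix is to pass arguments as an argv array rather than via string interpolation through a shell.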
Rathaus told The Register that, in the worst case scenario, a hacker could exploit the bug to not only take over the host server, but also manipulate how users have their content filtered and delivered by Netsweeper.
“[You can] control what data they receive when they access sites and download files,” he said. “This is the worst part – as they can be made to unintentionally download malware and viruses.”
Existing rules for deploying AI in clinical settings, such as the standards for FDA clearance in the US or a CE mark in Europe, focus primarily on accuracy. There are no explicit requirements that an AI must improve the outcome for patients, largely because such trials have not yet run. But that needs to change, says Emma Beede, a UX researcher at Google Health: “We have to understand how AI tools are going to work for people in context—especially in health care—before they’re widely deployed.”
[…]
Google’s first opportunity to test the tool in a real setting came from Thailand. The country’s ministry of health has set an annual goal to screen 60% of people with diabetes for diabetic retinopathy, which can cause blindness if not caught early. But with around 4.5 million patients to only 200 retinal specialists—roughly double the ratio in the US—clinics are struggling to meet the target. Google has CE mark clearance, which covers Thailand, but it is still waiting for FDA approval. So to see if AI could help, Beede and her colleagues outfitted 11 clinics across the country with a deep-learning system trained to spot signs of eye disease in patients with diabetes.
In the system Thailand had been using, nurses take photos of patients’ eyes during check-ups and send them off to be looked at by a specialist elsewhere—a process that can take up to 10 weeks. The AI developed by Google Health can identify signs of diabetic retinopathy from an eye scan with more than 90% accuracy—which the team calls “human specialist level”—and, in principle, give a result in less than 10 minutes. The system analyzes images for telltale indicators of the condition, such as blocked or leaking blood vessels.
Sounds impressive. But an accuracy assessment from a lab goes only so far. It says nothing of how the AI will perform in the chaos of a real-world environment, and this is what the Google Health team wanted to find out. Over several months they observed nurses conducting eye scans and interviewed them about their experiences using the new system. The feedback wasn’t entirely positive.
When it worked well, the AI did speed things up. But it sometimes failed to give a result at all. Like most image recognition systems, the deep-learning model had been trained on high-quality scans; to ensure accuracy, it was designed to reject images that fell below a certain threshold of quality. With nurses scanning dozens of patients an hour and often taking the photos in poor lighting conditions, more than a fifth of the images were rejected.
Patients whose images were kicked out of the system were told they would have to visit a specialist at another clinic on another day. If they found it hard to take time off work or did not have a car, this was obviously inconvenient. Nurses felt frustrated, especially when they believed the rejected scans showed no signs of disease and the follow-up appointments were unnecessary. They sometimes wasted time trying to retake or edit an image that the AI had rejected.
Because the system had to upload images to the cloud for processing, poor internet connections in several clinics also caused delays. “Patients like the instant results, but the internet is slow and patients then complain,” said one nurse. “They’ve been waiting here since 6 a.m., and for the first two hours we could only screen 10 patients.”
The Google Health team is now working with local medical staff to design new workflows. For example, nurses could be trained to use their own judgment in borderline cases. The model itself could also be tweaked to handle imperfect images better.
Of course the anti-ML crowd are using this as some sort of “AI will never work” argument, but as far as I can see these kinds of tests are necessary and seem to have been performed with oversight, meaning there was no real risk to patients involved. Lessons were learned and will be implemented, as with all new technologies. And going public with the lessons is incredibly useful for everyone in the field.
An employee of controversial surveillance vendor NSO Group abused access to the company’s powerful hacking technology to target a love interest, Motherboard has learned.
The previously unreported news is a serious abuse of NSO’s products, which are typically used by law enforcement and intelligence agencies. The episode also highlights that potent surveillance technology such as NSO’s can ultimately be abused by the humans who have access to it.
“There’s not [a] real way to protect against it. The technical people will always have access,” a former NSO employee aware of the incident told Motherboard. A second former NSO employee confirmed the first source’s account, another source familiar confirmed aspects of it, and a fourth source familiar with the company said an NSO employee abused the company’s system. Motherboard granted multiple sources in this story anonymity to speak about sensitive NSO deliberations and to protect them from retaliation from the company.
NSO sells a hacking product called Pegasus to government clients. With Pegasus, users can remotely break into fully up-to-date iPhone or Android devices with either an attack that requires the target to click on a malicious link once, or sometimes not even click on anything at all. Pegasus takes advantage of multiple so-called zero day exploits, which use vulnerabilities that manufacturers such as Apple are unaware of.
This latest case of abuse is different though. Rather than a law enforcement body, intelligence agency, or government using the tool, an NSO employee abused it for their own personal ends.
[…]
“It’s nice to see evidence that NSO Group is committed to preventing unauthorized use of their surveillance products where ‘unauthorized’ means ‘unpaid for.’ I wish we had evidence that they cared anywhere near as much when their products are used to enable human rights violations.”
“You have to ask, who else may have been targeted by NSO using customer equipment?” John Scott-Railton, a senior researcher from University of Toronto’s Citizen Lab, which has extensively researched NSO’s proliferation, told Motherboard. “It also suggests that NSO, like any organisation, struggles with unprofessional employees. It is terrifying that such people can wield NSA-style hacking tools,” he said.
If you’ve been wondering why the free space on your Mac keeps getting smaller, and smaller, and smaller—even if you haven’t been using your Mac all that much—there’s a quirky bug with Apple’s Image Capture app that could be to blame.
According to a recent blog post from NeoFinder, you should resist the urge to use the Image Capture app to transfer photos from connected devices to your desktop or laptop. If you do, and you happen to uncheck the “keep originals” button because you want the app to convert your .HEIC images to friendlier .JPEGs, the bug kicks in:
Apple’s Image Capture will then happily convert the HEIF files to JPG format for you when they are copied to your Mac. But what it also does is add 1.5 MB of totally empty data to every single photo file it creates! We found that massive bug by pure chance when working on further improving the metadata editing capabilities in NeoFinder, using the hex editor Hex Fiend.
They continue:
Of course, this is a colossal waste of space, especially considering that Apple is seriously still selling new Macs with a ridiculously tiny 128 GB internal SSD. Such a small disk is quickly filled with totally wasted empty data.
With just 1000 photos, for example, this bug eats 1.5 GB off your precious and very expensive SSD disk space.
We have notified Apple of this new bug, which was already present in macOS 10.14.6, and maybe they will fix it this time without adding yet more new bugs in the process.
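If you want to check your own converted photos, everything after a JPEG’s end-of-image marker (FF D9) is dead weight. A quick sketch (the byte strings here are synthetic stand-ins, not real image data):

```python
def trailing_bytes(jpeg: bytes) -> int:
    """Count the bytes after the last JPEG end-of-image (FF D9) marker."""
    eoi = jpeg.rfind(b"\xff\xd9")
    if eoi == -1:
        raise ValueError("no EOI marker; not a complete JPEG")
    return len(jpeg) - (eoi + 2)

# A well-formed file ends right at the marker...
clean = b"\xff\xd8 image data \xff\xd9"
print(trailing_bytes(clean))  # 0

# ...while an afflicted one drags ~1.5 MB of zeroes behind it.
padded = clean + b"\x00" * 1_500_000
print(trailing_bytes(padded))  # 1500000
```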
So, what are your options? First off, you don’t have to use the Image Capture app. Unless you’re transferring a huge batch of photos over, you could just sync your iPhone or iPad’s photo library to iCloud, and do the same on your Mac, to view anything you’ve shot. If that’s not an option, you could always just AirDrop your photos over to your Mac, too, or simply use Photos instead of Image Capture (if possible).
At a remote virtual version of its annual Security Analyst Summit, researchers from the Russian security firm Kaspersky today plan to present research about a hacking campaign they call PhantomLance, in which spies hid malware in the Play Store to target users in Vietnam, Bangladesh, Indonesia, and India. Unlike most of the shady apps found in Play Store malware, Kaspersky’s researchers say, PhantomLance’s hackers apparently smuggled in data-stealing apps with the aim of infecting only some hundreds of users; the spy campaign likely sent links to the malicious apps to those targets via phishing emails. “In this case, the attackers used Google Play as a trusted source,” says Kaspersky researcher Alexey Firsh. “You can deliver a link to this app, and the victim will trust it because it’s Google Play.”
The first hints of PhantomLance’s campaign focusing on Google Play came to light in July of last year. That’s when Russian security firm Dr. Web found a sample of spyware in Google’s app store that impersonated a downloader of graphic design software but in fact had the capability to steal contacts, call logs, and text messages from Android phones. Kaspersky’s researchers found a similar spyware app, impersonating a browser cache-cleaning tool called Browser Turbo, still active in Google Play in November of that year. (Google removed both malicious apps from Google Play after they were reported.) While the espionage capabilities of those apps were fairly basic, Firsh says that they both could have expanded. “What’s important is the ability to download new malicious payloads,” he says. “It could extend its features significantly.”
Kaspersky went on to find dozens of other similar spyware apps dating back to 2015 that Google had already removed from its Play Store, but which were still visible in archived mirrors of the app repository. Those apps appeared to have a Vietnamese focus, offering tools for finding nearby churches in Vietnam and Vietnamese-language news. In every case, Firsh says, the hackers had created new accounts and even GitHub repositories for spoofed developers to make the apps appear legitimate and hide their tracks. In total, Firsh says, Kaspersky’s antivirus software detected the malicious apps attempting to infect around 300 of its customers’ phones.
In most instances, those earlier apps hid their intent better than the two that had lingered in Google Play. They were designed to be “clean” at the time of installation and only later add all their malicious features in an update. “We think this is the main strategy for these guys,” says Firsh. In some cases, those malicious payloads also appeared to exploit “root” privileges that allowed them to override Android’s permission system, which requires apps to ask for a user’s consent before accessing data like contacts and text messages. Kaspersky says it wasn’t able to find the actual code that the apps would use to hack Android’s operating system and gain those privileges.
In 2019, the U.S. Air Force (USAF) asked the RAND Corporation to independently analyze the heavy lift space launch market to assess how potential USAF decisions in the near term could affect domestic launch providers and the market in general. RAND’s analysis was published as Assessing the Impact of U.S. Air Force National Security Space Launch Acquisition Decisions: An Independent Analysis of the Global Heavy Lift Launch Market. As part of their analysis, RAND researchers gathered open-source launch data that describes “addressable launches” of heavy lift vehicles — the commercial portion of the launch market over which launch firms compete. This tool charts the size of the total heavy lift launch market, as well as the addressable launch market for heavy lift vehicles, and offers filters to examine launches by comparisons of interest (such as vehicle, geographic region, and others).
A vulnerability existed in Microsoft’s Slack for Suits tool, Teams, that could have let a remote attacker take over accounts by simply sending a malicious GIF, infosec researchers claim.
The pwn-with-GIF vuln was possible, said Cyberark, thanks to two compromisable Microsoft subdomains along with a carefully crafted animated image file.
Although it was a responsibly disclosed theoretical vuln, and was not abused in the wild as far as is known, it illustrates that not all online collaboration platforms are as secure as one might hope.
“Even if an attacker doesn’t gather much information from a Teams’ account, they could use the account to traverse throughout an organization (just like a worm),” mused Cyberark researcher Omer Tsarfati.
The Israeli infosec outfit said it had alerted Redmond to the two subdomains, resulting in their DNS entries being tweaked. The rest of the Teams vuln was patched last Monday, 20 April.
In a blunder described as “astonishing and worrying,” Sheffield City Council’s automatic number-plate recognition (ANPR) system exposed to the internet 8.6 million records of road journeys made by thousands of people, The Register can reveal.
The ANPR camera system’s internal management dashboard could be accessed by simply entering its IP address into a web browser. No login details or authentication of any sort was needed to view and search the live system – which logs where and when vehicles, identified by their number plates, travel through Sheffield’s road network.
Britain’s Surveillance Camera Commissioner Tony Porter described the security lapse as “both astonishing and worrying,” and demanded a full probe into the snafu.
Financial Times reporter Mark Di Stefano allegedly spied on Zoom meetings at rival newspapers the Independent and the Evening Standard to get scoops on staff cuts and furloughs due to the coronavirus pandemic, according to a report from the UK’s Independent. And Di Stefano did a comically bad job of covering his tracks.
Di Stefano reportedly logged in to a Zoom meeting being held by the Independent last week using his Financial Times email address, causing his name to appear for everyone else on the call, though his own video camera was disabled. Di Stefano logged out after “16 seconds,” according to the Independent, but a few minutes later, another login was recorded that was connected to Di Stefano’s phone number. That user stayed on the call until the end of the meeting, according to journalists in the Zoom meeting.
How do we know it was probably Di Stefano? It’s not like he made his knowledge of the call’s contents secret. After the call, he tweeted about the changes at the two news outlets on April 23, including the fact that ad revenue is down between 30 and 50 percent. The FT reporter also tweeted that the Independent’s website had just experienced its biggest traffic month ever.
Di Stefano’s tweets were apparently going out before some people at the two news outlets even knew what was going on at their own workplaces, according to the Independent.
[…]
Di Stefano caught plenty of flak from Twitter users over the past two days, who made fun of his less-than-perfect deception on Zoom with plenty of Simpsons references—like the time Mr. Burns put on a bad mustache to appear as “Mr. Snrub.”
The design of Australia’s COVIDSafe contact-tracing app creates some unintended surveillance opportunities, according to a group of four security pros who unpacked its .APK file.
Penned by independent security researcher Chris Culnane; University of Melbourne tutor, cryptography researcher and masters student Eleanor McMurtry; developer Robert Merkel; and Australian National University associate professor and Thinking Security CEO Vanessa Teague, and posted to GitHub, the analysis notes three concerning design choices.
The first concern is the decision to change UniqueIDs – the identifier the app shares with other users – once every two hours, and to have devices accept a new UniqueID only if the app is running. The four researchers say this makes it possible for the government to tell whether users are running the app.
“This means that a person who chooses to download the app, but prefers to turn it off at certain times of the day, is informing the Data Store of this choice,” they write.
The authors also suggest that persisting with a UniqueID for two hours “greatly increases the opportunities for third-party tracking.”
“The difference between 15 minutes’ and two hours’ worth of tracking opportunities is substantial. Suppose for example that the person has a home tracking device such as a Google home mini or Amazon Alexa, or even a cheap Bluetooth-enabled IoT device, which records the person’s UniqueID at home before they leave. Then consider that if the person goes to a shopping mall or other public space, every device that cooperates with their home device can share the information about where they went.”
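The researchers’ point about rotation intervals can be illustrated with a toy model. This is illustrative code only, not COVIDSafe’s implementation: it simply shows that two cooperating devices can link their sightings of a person only while the broadcast UniqueID remains unchanged, so a longer rotation window means more linkable sightings.

```javascript
// Toy model: two sightings of the same person are linkable by third-party
// trackers only if they fall inside the same UniqueID rotation window.
// sightings: [{device, minute}] — the times a person was observed.
// rotationMinutes: how often the app issues a fresh UniqueID.
function linkableSightings(sightings, rotationMinutes) {
  const pairs = [];
  for (let i = 0; i < sightings.length; i++) {
    for (let j = i + 1; j < sightings.length; j++) {
      const a = sightings[i], b = sightings[j];
      // Same rotation window means the same broadcast ID was observed.
      if (Math.floor(a.minute / rotationMinutes) ===
          Math.floor(b.minute / rotationMinutes)) {
        pairs.push([a.device, b.device]);
      }
    }
  }
  return pairs;
}

// The researchers' example: a home smart speaker logs the ID at minute 0,
// then a shopping-mall beacon sees the same person 90 minutes later.
const sightings = [
  { device: "home-speaker", minute: 0 },
  { device: "mall-beacon", minute: 90 },
];
// With a 15-minute rotation the two sightings carry different IDs and
// cannot be linked; with a 2-hour (120-minute) rotation they share one ID.
```

Under this toy model, `linkableSightings(sightings, 15)` yields no linkable pairs while `linkableSightings(sightings, 120)` links the home and mall observations, which is the substance of the researchers’ objection.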
The analysis also notes that “It is not true that all the data shared and stored by COVIDSafe is encrypted. It shares the phone’s exact model in plaintext with other users, who store it alongside the corresponding Unique ID.”
That’s worrisome as:
“The exact phone model of a person’s contacts could be extremely revealing information. Suppose for example that a person wishes to understand whether another person whose phone they have access to has visited some particular mutual acquaintance. The controlling person could read the (plaintext) logs of COVIDSafe and detect whether the phone models matched their hypothesis. This becomes even easier if there are multiple people at the same meeting. This sort of group re-identification could be possible in any situation in which one person had control over another’s phone. Although not very useful for suggesting a particular identity, it would be very valuable in confirming or refuting a theory of having met with a particular person.”
The authors also worry that the app shares all UniqueIDs when users choose to report a positive COVID-19 test.
“COVIDSafe does not give them the option of deleting or omitting some IDs before upload,” they write. “This means that users consent to an all-or-nothing communication to the authorities about their contacts. We do not see why this was necessary. If they wish to help defeat COVID-19 by notifying strangers in a train or supermarket that they may be at risk, then they also need to share with government a detailed picture of their day’s close contacts with family and friends, unless they have remembered to stop the app at those times.”
The analysis also calls out some instances of UniqueIDs persisting for up to eight hours, for unknown reasons.
The authors conclude the app is not an immediate danger to users. But they do say it presents “serious privacy problems if we consider the central authority to be an adversary.”
None of which seems to be bothering Australians, who have downloaded it more than two million times in 48 hours and blown away adoption expectations.
Atlassian co-founder Mike Cannon-Brookes may well have helped things along, by suggesting it’s time to “turn the … angry mob mode off”. He also offered the following advice:
When asked by non technical people “Should I install this app? Is my data / privacy safe? Is it true it doesn’t track my location?” – say “Yes” and help them understand. Fight the misinformation. Remind them how little time they think before they download dozens of free, adware crap games that are likely far worse for their data & privacy than this ever would be!
Yes, we’ve seen lots of folks using COVID-19 to push their specific agendas forward, but this one is just bizarre. UNESCO (the United Nations Educational, Scientific and Cultural Organization) is an organization that is supposed to be focused on developing education and culture around the globe. From any objective standpoint, you’d think it would be in favor of things like more open licensing and sharing of culture, but, in practice, the organization has long been hijacked by copyright maximalist interests. Almost exactly a decade ago, we were perplexed at the organization’s decision to launch an anti-piracy organization. After all, “piracy” (or sharing of culture) is actually how culture and ideas frequently spread in the developing countries where UNESCO focuses.
In our #ResiliArt launch debate on how to support culture during #COVID19, #UNESCO’s Goodwill Ambassador @jeanmicheljarre suggested eternal copyright. What do you think?
We’ve started the conversation, now we count on you to join it.
They phrase this as “just started the conversation,” but that’s a trollish setup for a terrible, terrible idea. In case you can’t see the video, it’s electronic music creator Jean-Michel Jarre suggesting eternal copyright as a way to support future artists:
Why not going to the other way around, and to create the concept of eternal copyright. And I mean by this that after a certain period of time, the rights of movies, of music, of everything, would go to a global fund to help artists, and especially artists in emerging countries.
First, we can all agree that helping to enable and support artists in emerging countries is a good general idea. I’ve seen a former RIAA executive screaming about how everyone criticizing this idea is showing their true colors in how they don’t want to support artists. But that’s just silly. The criticism of this idea is that it doesn’t “support” artists at all, and will almost certainly make creativity and supporting artists more difficult. And that’s because art and creativity have always relied on building upon the works of those who came before — and locking up everything for eternity would make that cost prohibitive for all but the wealthiest of creators. Indeed, the idea that we need copyright and copyright alone to support artists shows (yet again) just how uncreative the people who claim to support copyright can be.
There appears to be a new character-linked bug in Messages, Mail, and other apps that can cause the iPhone, iPad, Mac, and Apple Watch to crash when receiving a specific string of characters.
In this particular case, the character string involves the Italian flag emoji along with characters in the Sindhi language, and it appears the system crash happens when an incoming notification is received with the problem-causing characters.
Based on information shared on Reddit, the character string began circulating on Telegram, but has also been found on Twitter.
These kinds of device-crashing character bugs surface every so often and sometimes become widespread, leading to a significant number of people ending up with a malfunctioning iPhone, iPad, or Mac. In 2018, for example, a character string in the Telugu language circulated around the internet, crashing thousands of devices before Apple addressed the problem in an iOS update.
There is often no way to prevent these characters from causing crashes and freezes when they are sent by a malicious person, and crashes caused through notifications often force the operating system to respring and, in some cases, require restoring the device in DFU mode.
MacRumors readers should be aware that such a bug is circulating. Since it appears to be triggered by notifications, those who are particularly concerned can mitigate its effects by turning off notifications. Apple typically fixes these character bugs within a few days to a week.
Update: According to MacRumors reader Adam, who tested the bug on a device running iOS 13.4.5, the issue is fixed in the second beta of that update.
As users complain of blue screens of death, deleted files and reboot loops, here’s what you need to know about this Windows 10 update.
There’s a lot of truth in the notion that you can’t please all the people all of the time, as Microsoft knows only too well. With Windows 10 now installed on more than one billion devices, there will always be a wide variation in terms of user satisfaction. One area where this variation can be seen perhaps most clearly is that of updates.
[…]
The problems those users are reporting to the Microsoft support forums and on social media have included the installation failing and looping back to restart again, the dreaded Blue Screen of Death (BSOD) following a “successful” update, and computers that simply refuse to boot afterward. Among the more common complaints after a Windows 10 update were Bluetooth and Wi-Fi connectivity issues. But there have also been users complaining that, after a restart, all files on the C drive had been deleted.
[…]
Microsoft asks that any users experiencing problems use the Windows + F keyboard shortcut, or select Feedback Hub from the Start menu, to provide feedback so it can investigate.
More practically speaking, if you are experiencing any Windows Update issues, I would always suggest you head for the Windows Update Troubleshooter. This, more often than not, fixes any error code problems. Be warned, though: I have known it to take more than one run of the troubleshooter before all updates are successfully installed, so do persevere.
In an extraordinary reversal, the U.S. Navy has recommended reinstating the fired captain of the coronavirus-hit aircraft carrier Theodore Roosevelt, whose crew hailed him as their hero for risking his job to safeguard their lives, officials said on Friday.
The Navy’s leadership made the recommendation to reinstate Captain Brett Crozier to Defense Secretary Mark Esper on Friday, just three weeks after Crozier was relieved of command after the leak of a letter he wrote calling on the Navy for stronger measures to protect the crew, the officials said, speaking on condition of anonymity.
[…]
Esper’s deliberations raised questions about whether political or other considerations might override the Navy’s recommendations in a case that has seen Democrats vocally critical of the Trump administration’s handling of the matter.
Sources say Crozier is one of the 856 sailors from the Roosevelt’s 4,800-member crew who have tested positive for the coronavirus, effectively taking one of the Navy’s most powerful ships out of operation.
Crozier was fired by the Navy’s top civilian, then-acting Navy Secretary Thomas Modly, against the recommendations of uniformed leaders, who suggested he wait for an investigation into the letter’s leak.
Modly’s decision backfired badly, as members of the crew hailed their captain as a hero in an emotional sendoff captured on video that went viral on social media.
Embarrassed, Modly then compounded his problems by flying out to the carrier to ridicule Crozier over the leak and question his character in a speech to the Roosevelt’s crew, which also leaked to the media. Modly then resigned.
News of the Navy’s recommendations could boost morale among sailors on the Roosevelt, who were caught between the Navy’s desire to keep the ship operational and its duty to shield them from unnecessary risk in peacetime.
[…]
The disclosure of the Navy’s recommendation, which was first reported by the New York Times, came just hours after the Pentagon announced that at least 18 sailors aboard a U.S. Navy destroyer – the Kidd – had tested positive for the new coronavirus.
It was another blow to the military as it faces fallout over its handling of the Roosevelt, raising additional questions about whether the revamped safeguards in place to protect U.S. troops are sufficient.
The crisis being triggered by the coronavirus is the biggest facing Navy leadership since two crashes in the Asia Pacific region in 2017 that killed 17 sailors.
Those incidents raised questions about Navy training and the pace of operations, prompting a congressional hearing and the removal of a number of officers.
There are more than 2,000 active satellites orbiting Earth. At the end of their useful lives, many will simply burn up as they reenter the atmosphere. But some will continue circling as “zombie” satellites — neither alive nor quite dead.
“Most zombie satellites are satellites that are no longer under human control, or have failed to some degree,” says Scott Tilley.
Tilley, an amateur radio operator living in Canada, has a passion for hunting them down.
In 2018, he found a signal from a NASA probe called IMAGE that the space agency had lost track of in 2005. With Tilley’s help, NASA was able to reestablish contact.
But he has tracked down zombies even older than IMAGE.
“The oldest one I’ve seen is Transit 5B-5. And it launched in 1965,” he says, referring to a nuclear-powered U.S. Navy navigation satellite that still circles the Earth in a polar orbit, long forgotten by all but a few amateurs interested in hearing it “sing” as it passes overhead.
Recently, Tilley got interested in a communications satellite he thought might still be alive — or at least among the living dead. LES-5, built by the Massachusetts Institute of Technology’s Lincoln Laboratory, was launched in 1967.
By scouring the Internet, he found a paper describing the radio frequency that LES-5, an experimental military UHF communications satellite, should be operating on — if it was still alive. So he decided to have a look.
“This required the building of an antenna, erecting a new structure to support it. Pre-amps, filters, stuff that takes time to gather and put all together,” he says.
“When you have a family and a busy business, you don’t really have a lot of time for that,” he says.
But then came the COVID-19 pandemic.
Well folks, here’s what appears to be a new ZOMBIE SAT!
LES-5 [2866, 1967-066E] in a GEO graveyard orbit.
Confirmation will occur at ~0445 UTC this evening when the satellite should pass through eclipse.
British Columbia, where Tilley lives, was on lockdown. Like many of us, suddenly Tilley had time on his hands. He used it to look for LES-5, and on March 24, he hit the ham radio equivalent of pay dirt.
He’s been making additional measurements ever since.
“The reason this one is kind of intriguing is its telemetry beacon is still operating,” Tilley says.
In other words, says Tilley, even though the satellite was supposed to shut down in 1972, it’s still going. As long as the solar panels are in the sun, the satellite’s radio continues to operate. Tilley thinks it may even be possible to send commands to the satellite.
The MIT lab that built LES-5 still does a lot of work on classified projects for the military. NPR contacted its news office to ask if someone could say more about LES-5 and whether it really could still receive commands.
But after repeated requests, Lincoln Laboratory finally answered with a “no comment.”
It seems that even a 50-year-old zombie satellite might still have secrets.
In a filing released on Thursday in federal court in Oakland, California, lawyers representing the social media giant alleged that NSO Group had used a network of remote servers in California to hack into phones and devices that were used by attorneys, journalists, human rights activists, government officials and others.
NSO Group has argued that Facebook’s case against it should be thrown out on the grounds that the court has no jurisdiction over its operations. In a 13 May legal document, lawyers representing NSO Group said that the company had no offices or employees in California and “do no business of any kind there.”
NSO has also argued that it has no role in operating the spyware and is limited to “providing advice and technical support to assist customers in setting up” the technology.
John Scott-Railton, a senior researcher at the Citizen Lab at the University Of Toronto’s Munk School, said evidence presented by Facebook on Thursday indicated NSO Group was in a position to “look over its customer’s shoulders” and monitor who its government clients were targeting.
“This is a gut punch to years of NSO’s claims that it can’t see what its customers are doing,” said Scott-Railton. He said it also shows that the Israeli company “probably knows a lot more about what its customers do than it would like to admit.”
NSO’s spyware, known as Pegasus, can gather information about a mobile phone’s location, access its camera, microphone and internal hard drive, and covertly record emails, phone calls and text messages. Researchers have accused the company of supplying its technology to countries that have used it to spy on dissidents, journalists and other critics.
A representative for NSO Group said its products are “used to stop terrorism, curb violent crime, and save lives.”
“NSO Group does not operate the Pegasus software for its clients, nor can it be used against U.S. mobile phone numbers, or against a device within the geographic bounds of the United States,” the representative said, adding that a response to Facebook’s legal filing was forthcoming.
In its filing, Facebook alleged that NSO had rented a Los Angeles-based server from a U.S. company, QuadraNet, that it used to launch 720 hacks on people’s smartphones or other devices. It’s unclear whether NSO Group’s software was used to target people within the U.S. The company has previously stated that its technology “cannot be used on U.S. phone numbers.”
Facebook accused NSO Group of reverse-engineering WhatsApp, using an unauthorized program to access WhatsApp’s servers and deploying its spyware against approximately 1,400 targets. NSO Group was then able to “covertly transmit malicious code through WhatsApp servers and inject” spyware onto people’s devices without their knowledge, according to Facebook’s legal filings.
“Defendants had no authority to access WhatsApp’s servers with an imposter program, manipulate network settings, and commandeer the servers to attack WhatsApp users,” Facebook alleged in the Thursday filing. “That invasion of WhatsApp’s servers and users’ devices constitutes unlawful computer hacking” under the Computer Fraud and Abuse Act.
It has been called the “most extreme surveillance in the history of Western democracy.” It has not once but twice been found to be illegal. It sparked the largest ever protest of senior lawyers who called it “not fit for purpose.”
And now the UK’s Investigatory Powers Act of 2016 – better known as the Snooper’s Charter – is set to expand to allow government agencies you may never have heard of to trawl through your web histories, emails, or mobile phone records.
In a memorandum [PDF] first spotted by The Guardian, the British government is asking that five more public authorities be added to the list of bodies that can access data scooped up under the nation’s mass-surveillance laws: the Civil Nuclear Constabulary, the Environment Agency, the Insolvency Service, the UK National Authority for Counter Eavesdropping (UKNACE), and the Pensions Regulator.
The memo explains why each should be given the extraordinary powers, in general and specifically. In general, the five agencies “are increasingly unable to rely on local police forces to investigate crimes on their behalf,” and so should be given direct access to the data pipe itself.
Five Whys
The Civil Nuclear Constabulary (CNC) is a special armed police force that does security at the UK’s nuclear sites and when nuclear materials are being moved. It should be given access even though “the current threat to nuclear sites in the UK is assessed as low” because “it can also be difficult to accurately assess risk without the full information needed.”
The Environment Agency investigates “over 40,000 suspected offences each year,” the memo stated. Which is why it should also be able to ask ISPs to hand over people’s most sensitive communications information, in order “to tackle serious and organised waste crime.”
The Insolvency Service investigates breaches of company director disqualification orders. Some of those it investigates get put in jail so it is essential that the service be allowed “to attribute subscribers to telephone numbers and analyse itemised billings” as well as be able to see what IP addresses are accessing specific email accounts.
UKNACE, a little-known agency that we have taken a look at in the past, is home of the real-life Qs, and one of its jobs is to detect attempts to eavesdrop on UK government offices. It needs access to the nation’s communications data “in order to identify and locate an attacker or an illegal transmitting device”, the memo claimed.
And lastly, the Pensions Regulator, which checks that companies have added their employees to their pension schemes, needs to be able to delve into anyone’s emails so it can “secure compliance and punish wrongdoing.”
Taken together, the requests reflect exactly what critics of the Investigatory Powers Act feared would happen: that a once-shocking power, granted on the back of terrorism fears, is being slowly extended to even the most obscure government agencies for no reason other than that it will make bureaucrats’ lives easier.
None of the agencies would be required to apply for warrants to access people’s internet connection data, and they would be added to another 50-plus agencies that already have access, including the Food Standards Agency, Gambling Commission, and NHS Business Services Authority.
Safeguards
One of the biggest concerns remains that there are insufficient safeguards in place to prevent the system being abused; concerns that only grow as the number of people that have access to the country’s electronic communications grows.
It is also still not known precisely how all these agencies access the data that is accumulated, or what restrictions are in place beyond a broad-brush “double lock” authorization process that requires a former judge (a judicial commissioner, or JC) to approve a minister’s approval.
The colors divide the map into geologic units; scientists divide the Moon’s geologic history into different eras, so a color represents both the kind of rock and its era. For example, yellow on the map represents Copernican craters—the rim, wall, and floor of bright material from the Moon’s Copernican period, which lasted from a billion years ago to today. Shading represents topographical information.
Lunar maps have various uses to scientists. Skinner explained that they can show hazards as well as resources and where we might be able to develop the Moon, though mapping an extraterrestrial body to that level of detail is far off. Given this map’s scale, its main purpose is to serve as a summary of what scientists know about the Moon today. The map is available in a GIS (geographical information system) format that allows researchers to overlay their own scientific results on top of it in order to better put discoveries into context.
This isn’t the final version of the map, Skinner told Gizmodo. As scientists learn more about the Moon, we’ll start to see more tweaks. But ultimately, this map is a high-level overview, and higher-resolution maps will be needed to elucidate smaller sections of the Moon.
The team hopes their map will reach the broadest audience possible, and to be honest, I think it looks good enough to be framed on a wall. You can download the full map here.
Among startups and tech companies, Stripe seems to be the near-universal favorite for payment processing. When I needed paid subscription functionality for my new web app, Stripe felt like the natural choice. After integration, however, I discovered that Stripe’s official JavaScript library records all browsing activity on my site and reports it back to Stripe. This data includes:
Every URL the user visits on my site, including pages that never display Stripe payment forms
Telemetry about how the user moves their mouse cursor while browsing my site
Unique identifiers that allow Stripe to correlate visitors to my site against other sites that accept payment via Stripe
This post shares what I found, who else it affects, and how you can limit Stripe’s data collection in your web applications.
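One mitigation along the lines the post describes is to load Stripe.js only on the pages that actually display a payment form, so Stripe’s script never runs on the rest of the site. This is a minimal sketch of that idea; the route names and the `shouldLoadStripe` helper are illustrative assumptions, not code from the post or from Stripe’s documentation.

```javascript
// Pure helper: decide whether the current path is a page that needs Stripe.
// The checkout route list here is a hypothetical example for illustration.
function shouldLoadStripe(pathname, checkoutRoutes = ["/checkout", "/subscribe"]) {
  return checkoutRoutes.some((route) => pathname.startsWith(route));
}

// In the browser, inject the Stripe.js script tag only when it is needed,
// instead of including it sitewide in every page's <head>.
function loadStripeIfNeeded() {
  if (!shouldLoadStripe(window.location.pathname)) return;
  const script = document.createElement("script");
  script.src = "https://js.stripe.com/v3/";
  script.async = true;
  document.head.appendChild(script);
}
```

The trade-off is that Stripe’s documentation encourages loading the script on every page to improve its fraud detection, so conditional loading like this trades some of that signal for less tracking of visitors who never reach a payment form.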