Google’s medical AI was super accurate in a lab. Real life was a different story, so they need to tweak it

The covid-19 pandemic is stretching hospital resources to the breaking point in many countries around the world. It is no surprise that many people hope AI could speed up patient screening and ease the strain on clinical staff. But a study from Google Health—the first to look at the impact of a deep-learning tool in real clinical settings—reveals that even the most accurate AIs can actually make things worse if not tailored to the clinical environments in which they will work.

Existing rules for deploying AI in clinical settings, such as the standards for FDA clearance in the US or a CE mark in Europe, focus primarily on accuracy. There are no explicit requirements that an AI must improve the outcome for patients, largely because such trials have not yet been run. But that needs to change, says Emma Beede, a UX researcher at Google Health: “We have to understand how AI tools are going to work for people in context—especially in health care—before they’re widely deployed.”

[…]

Google’s first opportunity to test the tool in a real setting came from Thailand. The country’s ministry of health has set an annual goal to screen 60% of people with diabetes for diabetic retinopathy, which can cause blindness if not caught early. But with around 4.5 million patients to only 200 retinal specialists—roughly double the ratio in the US—clinics are struggling to meet the target. Google has CE mark clearance, which covers Thailand, but it is still waiting for FDA approval. So to see if AI could help, Beede and her colleagues outfitted 11 clinics across the country with a deep-learning system trained to spot signs of eye disease in patients with diabetes.

In the system Thailand had been using, nurses take photos of patients’ eyes during check-ups and send them off to be looked at by a specialist elsewhere—a process that can take up to 10 weeks. The AI developed by Google Health can identify signs of diabetic retinopathy from an eye scan with more than 90% accuracy—which the team calls “human specialist level”—and, in principle, give a result in less than 10 minutes. The system analyzes images for telltale indicators of the condition, such as blocked or leaking blood vessels.

Sounds impressive. But an accuracy assessment from a lab goes only so far. It says nothing of how the AI will perform in the chaos of a real-world environment, and this is what the Google Health team wanted to find out. Over several months they observed nurses conducting eye scans and interviewed them about their experiences using the new system. The feedback wasn’t entirely positive.

When it worked well, the AI did speed things up. But it sometimes failed to give a result at all. Like most image recognition systems, the deep-learning model had been trained on high-quality scans; to ensure accuracy, it was designed to reject images that fell below a certain threshold of quality. With nurses scanning dozens of patients an hour and often taking the photos in poor lighting conditions, more than a fifth of the images were rejected.
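Google’s actual quality gate is proprietary, but the general pattern the article describes—score each image and reject anything below a threshold—is easy to sketch. The metric and threshold below are illustrative assumptions, not the model’s real criteria; a common proxy for focus quality is the variance of a Laplacian filter response, which is near zero for blurry or flat images and large for sharp ones.

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Sharpness score: variance of a 4-neighbour Laplacian response.

    `gray` is a 2-D array of pixel intensities. Blurry or uniform
    images score near 0; sharp, detailed images score much higher.
    """
    lap = (
        -4 * gray[1:-1, 1:-1]
        + gray[:-2, 1:-1] + gray[2:, 1:-1]
        + gray[1:-1, :-2] + gray[1:-1, 2:]
    )
    return float(lap.var())

def passes_quality_gate(gray: np.ndarray, threshold: float = 100.0) -> bool:
    """Accept only images above a sharpness threshold.

    The threshold of 100.0 is a made-up placeholder; a real system
    would tune it on labelled good/bad scans.
    """
    return laplacian_variance(gray) >= threshold
```

A hard cutoff like this is exactly what caused the problem in the Thai clinics: every image below the bar is rejected outright, with no middle ground for a human to take a second look.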

Patients whose images were kicked out of the system were told they would have to visit a specialist at another clinic on another day. If they found it hard to take time off work or did not have a car, this was obviously inconvenient. Nurses felt frustrated, especially when they believed the rejected scans showed no signs of disease and the follow-up appointments were unnecessary. They sometimes wasted time trying to retake or edit an image that the AI had rejected.

Because the system had to upload images to the cloud for processing, poor internet connections in several clinics also caused delays. “Patients like the instant results, but the internet is slow and patients then complain,” said one nurse. “They’ve been waiting here since 6 a.m., and for the first two hours we could only screen 10 patients.”

The Google Health team is now working with local medical staff to design new workflows. For example, nurses could be trained to use their own judgment in borderline cases. The model itself could also be tweaked to handle imperfect images better.

[…]

Source: Google’s medical AI was super accurate in a lab. Real life was a different story. | MIT Technology Review

Of course the anti-ML crowd is framing this as proof that AI will never work, but as far as I can see these kinds of tests are necessary, and this one seems to have been performed with oversight, meaning there was no real risk to patients involved. Lessons were learned and will be implemented, as with all new technologies. And going public with the lessons is incredibly useful for everyone in the field.

NSO Employee Abused Phone Hacking Tech to Target a Love Interest

An employee of controversial surveillance vendor NSO Group abused access to the company’s powerful hacking technology to target a love interest, Motherboard has learned.

The previously unreported news is a serious abuse of NSO’s products, which are typically used by law enforcement and intelligence agencies. The episode also highlights that potent surveillance technology such as NSO’s can ultimately be abused by the humans who have access to it.

“There’s not [a] real way to protect against it. The technical people will always have access,” a former NSO employee aware of the incident told Motherboard. A second former NSO employee confirmed the first source’s account, another source familiar with the incident confirmed aspects of it, and a fourth source familiar with the company said an NSO employee abused the company’s system. Motherboard granted multiple sources in this story anonymity to speak about sensitive NSO deliberations and to protect them from retaliation from the company.

NSO sells a hacking product called Pegasus to government clients. With Pegasus, users can remotely break into fully up-to-date iPhone or Android devices, either with an attack that requires the target to click on a malicious link just once, or sometimes without the target clicking anything at all. Pegasus takes advantage of multiple so-called zero-day exploits, which use vulnerabilities that manufacturers such as Apple are unaware of.

[…]

Researchers have previously tracked installations of Pegasus to Saudi Arabia, the United Arab Emirates, Mexico, and dozens of other countries. NSO says its tool should exclusively be used to fight terrorism or serious crime, but researchers, journalists, and tech companies have found multiple instances of NSO customers using the tool to spy on dissidents and political opponents. David Kaye, the United Nations special rapporteur on the promotion and protection of the right to freedom of opinion and expression, has noted that there is a “legacy of harm” caused by Pegasus.

This latest case of abuse is different though. Rather than a law enforcement body, intelligence agency, or government using the tool, an NSO employee abused it for their own personal ends.

[…]

“It’s nice to see evidence that NSO Group is committed to preventing unauthorized use of their surveillance products where ‘unauthorized’ means ‘unpaid for.’ I wish we had evidence that they cared anywhere near as much when their products are used to enable human rights violations.”

“You have to ask, who else may have been targeted by NSO using customer equipment?” John Scott-Railton, a senior researcher at the University of Toronto’s Citizen Lab, which has extensively researched NSO’s proliferation, told Motherboard. “It also suggests that NSO, like any organisation, struggles with unprofessional employees. It is terrifying that such people can wield NSA-style hacking tools,” he said.

Source: NSO Employee Abused Phone Hacking Tech to Target a Love Interest – VICE

Mac Image Capture App Eats Up Your Space

If you’ve been wondering why the free space on your Mac keeps getting smaller, and smaller, and smaller—even if you haven’t been using your Mac all that much—there’s a quirky bug with Apple’s Image Capture app that could be to blame.

According to a recent blog post from NeoFinder, you should resist the urge to use the Image Capture app to transfer photos from connected devices to your desktop or laptop. If you do, and you happen to uncheck the “keep originals” button because you want the app to convert your .HEIC images to friendlier .JPEGs, the bug kicks in:

Apple’s Image Capture will then happily convert the HEIF files to JPG format for you, when they are copied to your Mac. But what it also does is add 1.5 MB of totally empty data to every single photo file it creates! We found that massive bug by pure chance when working on further improving the metadata editing capabilities in NeoFinder, using the hex editor Hex Fiend.

They continue:

Of course, this is a colossal waste of space, especially considering that Apple is seriously still selling new Macs with a ridiculously tiny 128 GB internal SSD. Such a small disk is quickly filled with totally wasted empty data.

With just 1000 photos, for example, this bug eats 1.5 GB off your precious and very expensive SSD disk space.

We have notified Apple of this new bug that was already present in macOS 10.14.6, and maybe they will fix it this time without adding yet additional new bugs in the process.

So, what are your options? First off, you don’t have to use the Image Capture app. Unless you’re transferring a huge batch of photos over, you could just sync your iPhone or iPad’s photo library to iCloud, and do the same on your Mac, to view anything you’ve shot. If that’s not an option, you could always just AirDrop your photos over to your Mac, too, or simply use Photos instead of Image Capture (if possible).
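If you want to check whether your own converted JPEGs carry this padding, the file format gives you a handle: a well-formed JPEG ends with the two-byte end-of-image marker FF D9, so bytes after the last occurrence of that marker are dead weight. The sketch below is my own illustration, not NeoFinder’s tooling, and note that some files legitimately carry a little trailing metadata, so treat large numbers (on the order of the 1.5 MB they report) as the signal:

```python
from pathlib import Path

JPEG_EOI = b"\xff\xd9"  # end-of-image marker that terminates JPEG data

def trailing_bytes(path: Path) -> int:
    """Count bytes after the last JPEG end-of-image marker.

    A clean file returns 0 (or a small number, for apps that append
    metadata); the padding bug described above would show up as a
    large positive count per photo.
    """
    data = path.read_bytes()
    end = data.rfind(JPEG_EOI)
    if end == -1:
        raise ValueError(f"{path} does not look like a JPEG (no EOI marker)")
    return len(data) - (end + len(JPEG_EOI))
```

To audit a whole folder, something like `sum(trailing_bytes(p) for p in Path("~/Pictures").expanduser().glob("*.jpg"))` gives the total wasted space.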

Source: How to Keep the Image Capture App From Eating Up Space on Your Mac

How Spies Snuck Malware Into the Google Play Store—Again and Again: by upgrading a vetted app

At a remote virtual version of its annual Security Analyst Summit, researchers from the Russian security firm Kaspersky today plan to present research about a hacking campaign they call PhantomLance, in which spies hid malware in the Play Store to target users in Vietnam, Bangladesh, Indonesia, and India. Unlike most of the shady apps found in Play Store malware, Kaspersky’s researchers say, PhantomLance’s hackers apparently smuggled in data-stealing apps with the aim of infecting only a few hundred users; the spy campaign likely sent links to the malicious apps to those targets via phishing emails. “In this case, the attackers used Google Play as a trusted source,” says Kaspersky researcher Alexey Firsh. “You can deliver a link to this app, and the victim will trust it because it’s Google Play.”

Kaspersky says it has tied the PhantomLance campaign to the hacker group OceanLotus, also known as APT32, widely believed to be working on behalf of the Vietnamese government. That suggests the PhantomLance campaign likely mixed spying on Vietnam’s Southeast Asian neighbors with domestic surveillance of Vietnamese citizens. Security firm FireEye, for instance, has linked OceanLotus to previous operations that targeted Vietnamese dissidents and bloggers. FireEye also recently spotted the group targeting China’s Ministry of Emergency Management as well as the government of the Chinese city of Wuhan, apparently searching for information related to Covid-19.

The first hints of PhantomLance’s campaign focusing on Google Play came to light in July of last year. That’s when Russian security firm Dr. Web found a sample of spyware in Google’s app store that impersonated a downloader of graphic design software but in fact had the capability to steal contacts, call logs, and text messages from Android phones. Kaspersky’s researchers found a similar spyware app, impersonating a browser cache-cleaning tool called Browser Turbo, still active in Google Play in November of that year. (Google removed both malicious apps from Google Play after they were reported.) While the espionage capabilities of those apps were fairly basic, Firsh says that they both could have expanded. “What’s important is the ability to download new malicious payloads,” he says. “It could extend its features significantly.”

Kaspersky went on to find dozens of other, similar spyware apps dating back to 2015 that Google had already removed from its Play Store, but which were still visible in archived mirrors of the app repository. Those apps appeared to have a Vietnamese focus, offering tools for finding nearby churches in Vietnam and Vietnamese-language news. In every case, Firsh says, the hackers had created a new account and even GitHub repositories for spoofed developers to make the apps appear legitimate and hide their tracks. In total, Firsh says, Kaspersky’s antivirus software detected the malicious apps attempting to infect around 300 of its customers’ phones.

In most instances, those earlier apps hid their intent better than the two that had lingered in Google Play. They were designed to be “clean” at the time of installation and only later add all their malicious features in an update. “We think this is the main strategy for these guys,” says Firsh. In some cases, those malicious payloads also appeared to exploit “root” privileges that allowed them to override Android’s permission system, which requires apps to ask for a user’s consent before accessing data like contacts and text messages. Kaspersky says it wasn’t able to find the actual code that the apps would use to hack Android’s operating system and gain those privileges.

Source: How Spies Snuck Malware Into the Google Play Store—Again and Again | WIRED

Space Launch Market for Heavy Lift Vehicles: Charts and Data Set of Addressable Launches 2007–2018

In 2019, the U.S. Air Force (USAF) asked the RAND Corporation to independently analyze the heavy lift space launch market to assess how potential USAF decisions in the near term could affect domestic launch providers and the market in general. RAND’s analysis was published as Assessing the Impact of U.S. Air Force National Security Space Launch Acquisition Decisions: An Independent Analysis of the Global Heavy Lift Launch Market. As part of their analysis, RAND researchers gathered open-source launch data that describes “addressable launches” of heavy lift vehicles — the commercial portion of the launch market over which launch firms compete. This tool charts the size of the total heavy lift launch market, as well as the addressable launch market for heavy lift vehicles, and offers filters to examine launches by comparisons of interest (such as vehicle, geographic region, and others).


Source: Space Launch Market for Heavy Lift Vehicles: Charts and Data Set of Addressable Launches 2007–2018 | RAND