Google (GOOG) Urges EU Judges to Slash ‘Staggering’ $5 Billion Fine

Google called on European Union judges to cut or cancel a “staggering” 4.3 billion euro ($5 billion) antitrust fine because the search giant never intended to harm rivals.

The company “could not have known its conduct was an abuse” when it struck contracts with Android mobile phone makers that required them to take its search and web-browser apps, Google lawyer Genevra Forwood told the EU’s General Court in Luxembourg.

[…]

The European Commission’s lawyer, Anthony Dawes, scoffed at Google’s plea, saying the fine was a mere 4.5% of the company’s revenue in 2017, well below a 10% cap.

[…]

Source: Google (GOOG) Urges EU Judges to Slash ‘Staggering’ $5 Billion Fine – Bloomberg

Because Google had never ever heard of Microsoft and the antitrust lawsuits around Internet Explorer? Come on!

Lawsuit prepped against Google for using Brit patients’ data

A UK law firm is bringing legal action on behalf of patients it says had their confidential medical records obtained by Google and DeepMind Technologies in breach of data protection laws.

Mishcon de Reya said today it planned a representative action on behalf of Mr Andrew Prismall and the approximately 1.6 million individuals whose data was used as part of a testing programme for medical software developed by the companies.

It told The Register the claim had already been issued in the High Court.

DeepMind, acquired by Google in 2014, worked with the search software giant and Royal Free London NHS Foundation Trust under an arrangement formed in 2015.

The law firm said that the tech companies obtained approximately 1.6 million individuals’ confidential medical records without their knowledge or consent.

The Register has contacted Google, DeepMind and the Royal Free Hospital for their comments.

“Given the very positive experience of the NHS that I have always had during my various treatments, I was greatly concerned to find that a tech giant had ended up with my confidential medical records,” lead claimant Prismall said in a statement.

“As a patient having any sort of medical treatment, the last thing you would expect is your private medical records to be in the hands of one of the world’s biggest technology companies.

[…]

In April 2016, it was revealed that the web giant had signed a deal with the Royal Free Hospital in London to build an application called Streams, which can analyse patients’ details and identify those who have acute kidney injury. The app uses a fixed algorithm, developed with the help of doctors, so it is not technically AI.
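
The Streams code itself has not been published, but here is a rough sketch of what a fixed, rule-based check of this kind can look like (the thresholds and baseline handling are simplified assumptions loosely modelled on the NHS acute kidney injury e-alert guidance, not the actual app):

```python
# Illustrative only: a rule-based acute kidney injury (AKI) check loosely
# modelled on the published NHS AKI e-alert approach. This is NOT the
# Streams code; thresholds and baseline logic are simplified assumptions.
from statistics import median

def aki_stage(current_creatinine: float, recent_baselines: list[float]) -> int:
    """Return an AKI alert stage (0 = no alert) by comparing the current
    serum creatinine reading against a baseline from recent results."""
    if not recent_baselines:
        return 0  # no baseline available, no automated alert
    baseline = min(min(recent_baselines), median(recent_baselines))
    ratio = current_creatinine / baseline
    if ratio >= 3.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5:
        return 1
    return 0

print(aki_stage(180.0, [90.0, 95.0, 100.0]))  # ratio 2.0 -> stage 2 alert
```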

The software – developed by DeepMind, Google’s AI subsidiary – was first tested with simulated data. But it was tested again using 1.6 million sets of real NHS medical files provided by the London hospital. However, not every patient was aware that their data was being given to Google to test the Streams software. Streams has since been deployed in wards, and thus now handles real people’s details, but during development it also used live medical records as well as simulated inputs.

Dame Fiona Caldicott told the hospital’s medical director, Professor Stephen Powis, that he had overstepped the mark, and that patients had given no consent for their information to be used in this way pre-deployment.

[…]

In a data-sharing agreement uncovered by the New Scientist, Google and its DeepMind artificial intelligence wing were granted access to current and historic patient data at three London hospitals run by the Royal Free NHS Trust.

Source: Lawsuit prepped against Google for using Brit patients’ data • The Register

New GriftHorse malware has infected more than 10 million Android phones

Security researchers have found a massive malware operation that has infected more than 10 million Android smartphones across more than 70 countries since at least November 2020 and is making millions of dollars for its operators on a monthly basis.

Discovered by mobile security firm Zimperium, the new GriftHorse malware has been distributed via benign-looking apps uploaded on the official Google Play Store and on third-party Android app stores.

Malware subscribes users to premium SMS services

If users install any of these malicious apps, GriftHorse starts peppering them with popups and notifications touting various prizes and special offers.

Users who tap on these notifications are redirected to an online page where they are asked to confirm their phone number in order to access the offer. In reality, however, they are subscribing themselves to premium SMS services that charge over €30 ($35) per month, money that is later funnelled into the GriftHorse operators’ pockets.

[…]

The two Zimperium researchers said that, beyond the sheer numbers, the GriftHorse coders also invested in their malware’s code quality, using a wide spectrum of websites, malicious apps, and developer personas to infect users and avoid detection for as long as possible.

“The level of sophistication, use of novel techniques, and determination displayed by the threat actors allowed them to stay undetected for several months,” Yaswant and Gupta explained.

“In addition to a large number of applications, the distribution of the applications was extremely well-planned, spreading their apps across multiple, varied categories, widening the range of potential victims.”

[Chart: GriftHorse app categories. Image: Zimperium]

GriftHorse is making millions in monthly profits

Based on what they’ve seen until now, the researchers estimated that the GriftHorse gang is currently making between €1.2 million and €3.5 million per month from their scheme ($1.5 million to $4 million per month).
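
A back-of-envelope check on those figures, using only numbers quoted above (the €30-plus monthly charge and the estimated monthly revenue); the implied number of active subscriptions is an inference, not something the researchers state:

```python
# Rough sanity check using figures quoted in the article; the implied
# count of active subscriptions is an inference, not Zimperium data.
monthly_revenue_eur = (1_200_000, 3_500_000)  # estimated revenue range per month
charge_per_victim_eur = 30                    # "over €30 ($35) per month"

for revenue in monthly_revenue_eur:
    print(f"€{revenue:,}/month ≈ {revenue // charge_per_victim_eur:,} active subscriptions")
# => roughly 40,000 to 116,000 paying victims at any one time
```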

[…]

Source: New GriftHorse malware has infected more than 10 million Android phones – The Record by Recorded Future

Unpatched flaw creates ‘weaponised’ Apple AirTags

[…]

Should your AirTag-equipped thing not be where you thought it was, you can enable Lost Mode. When in Lost Mode, an AirTag scanned via NFC provides a unique URL which lets the finder get in contact with the loser – and it’s this page where security researcher Bobby Rauch discovered a concerning vulnerability.

“An attacker can carry out Stored XSS on this https://found.apple.com page by injecting a malicious payload into the AirTag ‘Lost Mode’ phone number field,” Rauch wrote in an analysis of the issue. “A victim will believe they are being asked to sign into iCloud so they can get in contact with the owner of the AirTag, when in fact, the attacker has redirected them to a credential hijacking page.

“Other XSS exploits can be carried out as well like session token hijacking, clickjacking, and more. An attacker can create weaponised AirTags and leave them around, victimising innocent people who are simply trying to help a person find their lost AirTag.”
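
Apple has not published how the Lost Mode page handled the field, so the snippet below is only a generic illustration of the bug class Rauch describes: rendering a user-supplied value (here, the phone-number field) into HTML without escaping enables stored XSS, while escaping neutralises the payload.

```python
# Generic illustration of the stored-XSS class described above; this is
# not Apple's code, and the page/field handling here is an assumption.
import html

def render_lost_mode_page(phone_number: str, escape: bool) -> str:
    value = html.escape(phone_number) if escape else phone_number
    return f"<p>Contact the owner: {value}</p>"

payload = '<script>location="https://attacker.example/fake-icloud-login"</script>'
print(render_lost_mode_page(payload, escape=False))  # script survives -> stored XSS
print(render_lost_mode_page(payload, escape=True))   # rendered as inert text
```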

Apple has not commented publicly on the vulnerability, nor does it seem to be taking the issue particularly seriously. Speaking to Brian Krebs, Rauch claimed that Apple sat on the flaw for three months – and that while it confirmed it planned to resolve the vulnerability in a future update, the company has not yet done so. Apple also refused to confirm whether Rauch’s discovery would qualify for its bug bounty programme and a potential cash payout – a final insult which led to him publicly disclosing the flaw.

It’s not the first time Apple has stood accused of failing to respond to security researchers. Earlier this month a pseudonymous researcher known as “IllusionOfChaos” dropped three zero-day vulnerabilities affecting Apple’s iOS 15 – six months after originally reporting them to the company. A fourth flaw had been fixed in an earlier iOS release, the researcher noted, “but Apple decided to cover it up and not list it on the security content page.”

The company has also been experiencing a few problems with the patches it does release. An update meant to fix a vulnerability in the company’s Finder file manager – a flaw capable of bypassing the Quarantine and Gatekeeper security functions built into macOS – only worked for lowercase URLs, although emergency patches released two weeks ago appear to have had better luck.
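
The exact check Apple shipped is not public, but the reported behaviour (catching the all-lowercase scheme while missing mixed-case variants) is consistent with a case-sensitive prefix comparison; the sketch below assumes a file:// blocklist purely for illustration.

```python
# Illustration only: why a case-sensitive scheme check is an incomplete fix.
# The actual logic in Apple's patch is not public; the "file://" scheme and
# blocklist approach here are assumptions for the example.

def blocked_case_sensitive(url: str) -> bool:
    return url.startswith("file://")

def blocked_case_insensitive(url: str) -> bool:
    return url.lower().startswith("file://")

for url in ("file:///etc/passwd", "File:///etc/passwd", "FiLe:///etc/passwd"):
    print(url, blocked_case_sensitive(url), blocked_case_insensitive(url))
# Only the all-lowercase variant trips the case-sensitive check;
# normalising the case first catches all three.
```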

[…]

Source: Unpatched flaw creates ‘weaponised’ Apple AirTags • The Register

CRISPR Gene-Editing Experiment Using Direct Injection Partly Restores Vision In Legally Blind Patients

Carlene Knight’s vision was so bad that she couldn’t even maneuver around the call center where she works using her cane. But that’s changed as a result of volunteering for a landmark medical experiment. Her vision has improved enough for her to make out doorways, navigate hallways, spot objects and even see colors. Knight is one of seven patients with a rare eye disease who volunteered to let doctors modify their DNA by injecting the revolutionary gene-editing tool CRISPR directly into cells that are still in their bodies. Knight and [another volunteer in the experiment, Michael Kalberer] gave NPR exclusive interviews about their experience. This is the first time researchers worked with CRISPR this way. Earlier experiments had removed cells from patients’ bodies, edited them in the lab and then infused the modified cells back into the patients. […]

CRISPR is already showing promise for treating devastating blood disorders such as sickle cell disease and beta thalassemia. And doctors are trying to use it to treat cancer. But those experiments involve taking cells out of the body, editing them in the lab, and then infusing them back into patients. That’s impossible for diseases like [Leber congenital amaurosis, or LCA], because cells from the retina can’t be removed and then put back into the eye. So doctors genetically modified a harmless virus to ferry the CRISPR gene editor and infused billions of the modified viruses into the retinas of Knight’s left eye and Kalberer’s right eye, as well as one eye of five other patients. The procedure was done on only one eye just in case something went wrong. The doctors hope to treat the patients’ other eye after the research is complete. Once the CRISPR was inside the cells of the retinas, the hope was that it would cut out the genetic mutation causing the disease, restoring vision by reactivating the dormant cells.

The procedure didn’t work for all of the patients, who have been followed for between three and nine months. The reasons it didn’t work might have been because their dose was too low or perhaps because their vision was too damaged. But Kalberer, who got the lowest dose, and one volunteer who got a higher dose, began reporting improvement starting at about four to six weeks after the procedure. Knight and one other patient who received a higher dose improved enough to show improvement on a battery of tests that included navigating a maze. For two others, it’s too soon to tell. None of the patients have regained normal vision — far from it. But the improvements are already making a difference to patients, the researchers say. And no significant side effects have occurred. Many more patients will have to be treated and followed for much longer to make sure the treatment is safe and know just how much this might be helping.

Source: CRISPR Gene-Editing Experiment Partly Restores Vision In Legally Blind Patients – Slashdot

China to have insight into and regulate web giants’ algorithms using governance model

China’s authorities have called for internet companies to create a governance system for their algorithms.

A set of guiding opinions on algorithms, issued overnight by nine government agencies, explains that algorithms play a big role in disseminating information online and enabling growth of the digital economy. But the guiding opinions also point out that algorithms employed online can impact society and financial markets.

[…]

To achieve its aims, Beijing expects that algo-wielding organisations will create algorithm governance teams to assess their code and detect any security or ethical flaws. Self-regulation is expected, as is continuous revision and self-improvement.

Chinese authorities will watch those efforts and will be unsparing when they find either harmful algorithms, or less-than-comprehensive compliance efforts. Citizen reports of erroneous algos will inform some regulatory actions.

Organisations have been given three years to get this done, with further guidance to come from Beijing.

[…]

Requiring oversight of algorithms suggests that Beijing is worried on two fronts. First, it’s concerned about how automation is already playing out on China’s internet. Second, it has observed that western web giants have used algorithms to increase user engagement in ways that amplify misinformation and that have clearly caused considerable real-world harm.

The new regulations are further evidence that Beijing wants to exercise control over what Chinese citizens can see online. That desire has already seen China crack down on depictions of effeminate men, warn fan clubs not to turn mean, ban racy online content aimed at kids, and crack down on computer games – including those that aren’t historically accurate – and even advise on what songs make for acceptable karaoke.

Source: China to regulate – may censor – web giants’ algorithms • The Register