How NSO Group’s Zero-Click iPhone-Hacking Exploit Works

[…] researchers managed to technically deconstruct just how one of the company’s notorious “zero-click” attacks works. Indeed, researchers with Google’s Project Zero published a detailed breakdown that shows how an NSO exploit, dubbed “FORCEDENTRY,” can swiftly and silently take over a phone.

[…]

Initial details about it were captured by Citizen Lab, a research unit at the University of Toronto that has frequently published research related to NSO’s activities. Citizen Lab researchers managed to get ahold of phones that had been subjected to the company’s “zero-click” attacks and, in September, published initial research about how they worked. Around the same time, Apple announced it was suing NSO and also published security updates to patch the problems associated with the exploit.

Citizen Lab ultimately shared its findings with Google’s researchers who, as of last week, finally published their analysis of the attacks. As you might expect, it’s pretty incredible—and frightening—stuff.

[…]

Probably the most terrifying thing about FORCEDENTRY is that, according to Google’s researchers, the only thing an attacker needed to compromise a target was their phone number or their Apple ID username.

Using one of those identifiers, the wielder of NSO’s exploit could quite easily compromise any device they wished. The attack process was simple: What appeared to be a GIF was texted to the victim’s phone via iMessage. However, the image in question was not actually a GIF; instead, it was a malicious PDF that had been dressed up with a .gif extension. Within the file was a highly sophisticated malicious payload that could exploit a vulnerability in Apple’s image-processing software and use it to quickly take over valuable resources within the targeted device.
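The “dressed up” disguise works because many parsers identify a file by its leading “magic” bytes rather than its name. A minimal illustrative sketch (not NSO’s code, and not Apple’s actual parsing logic) of why a .gif extension proves nothing:

```python
# Illustrative sketch: a content sniffer identifies files by magic bytes,
# so a PDF renamed to .gif is still treated as a PDF.
MAGIC_SIGNATURES = {
    b"GIF87a": "gif",
    b"GIF89a": "gif",
    b"%PDF-": "pdf",
    b"\x89PNG\r\n\x1a\n": "png",
}

def sniff_format(data: bytes) -> str:
    """Guess a file's real format from its leading bytes, ignoring its name."""
    for magic, fmt in MAGIC_SIGNATURES.items():
        if data.startswith(magic):
            return fmt
    return "unknown"

# A "GIF" attachment whose bytes are actually a (hypothetical) PDF:
payload = b"%PDF-1.4\n...rest of file..."
print(sniff_format(payload))  # -> "pdf", no matter the .gif filename
```

Any pipeline that dispatches on sniffed content rather than filename would hand this attachment straight to its PDF-handling code path.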

[…]

What FORCEDENTRY did was exploit a zero-day vulnerability within Apple’s image rendering library, CoreGraphics—the software that iOS uses to process on-device imagery and media. That vulnerability, officially tracked as CVE-2021-30860, is associated with an old piece of free, open-source code that iOS was apparently leveraging to encode and decode PDF files—the Xpdf implementation of JBIG2.
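Per Project Zero’s write-up, the underlying flaw was an integer overflow in the JBIG2 decoder’s symbol counting, which leads to an undersized allocation followed by out-of-bounds writes. A simplified Python sketch of that bug class (the real code is C; the names and counts here are illustrative, with 32-bit wraparound simulated explicitly):

```python
# Simplified sketch of the bug *class* behind CVE-2021-30860: an
# attacker-controlled symbol count overflows a 32-bit integer, so the
# buffer allocated is far smaller than the data later written into it.
U32 = 0xFFFFFFFF  # simulate 32-bit unsigned arithmetic, like a C Guint

def buggy_allocation_size(counts):
    """Sum per-segment symbol counts the way overflowing C code would."""
    total = 0
    for c in counts:
        total = (total + c) & U32   # silently wraps around
    return total

# Large per-segment counts supplied by the attacker:
counts = [0x80000000, 0x80000000, 0x40]
print(hex(buggy_allocation_size(counts)))  # -> 0x40: tiny buffer, huge write
```

Everything written beyond that tiny buffer lands in adjacent memory, which is the corruption primitive the rest of the exploit builds on.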

Here’s where the attack gets really wild, though. By exploiting the image processing vulnerability, FORCEDENTRY was able to get inside the targeted device and use the phone’s own memory to build a rudimentary virtual machine, basically a “computer within a computer.” From there, the machine could “bootstrap” NSO’s Pegasus malware from within, ultimately relaying data back to whoever had deployed the exploit.
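The trick, as Project Zero described it, is that JBIG2 has no scripting engine, but its segment commands can combine memory regions with logical operators such as AND, OR, XOR, and XNOR. Composed carefully, those operators form logic gates, and enough gates form a (very slow) computer. A toy sketch of the principle, with Python integers standing in for bitmap regions:

```python
# Sketch of the principle only: JBIG2-style bitwise region operations
# composed into logic circuitry. 64-bit ints stand in for bitmap memory.
MASK = 0xFFFFFFFFFFFFFFFF

def AND(a, b):  return a & b
def OR(a, b):   return a | b
def XOR(a, b):  return a ^ b
def XNOR(a, b): return ~(a ^ b) & MASK

# A one-bit full adder built purely from those primitives -- the kind of
# building block that can be chained into adders, comparators, and
# ultimately a small CPU operating on the phone's own memory.
def full_adder(a, b, carry_in):
    s1 = XOR(a, b)
    total = XOR(s1, carry_in)
    carry_out = OR(AND(a, b), AND(s1, carry_in))
    return total, carry_out

print(full_adder(1, 1, 0))  # -> (0, 1): 1 + 1 = binary 10
```

NSO’s actual circuit was vastly larger, of course, but this is the sense in which a file format with no code-execution features can still host a “computer within a computer.”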

[…]

The vulnerability related to this exploit was fixed in Apple’s iOS 14.8 update (issued in September), though some security researchers have warned that if a person’s phone was compromised by Pegasus prior to the update, a patch may not do all that much to keep intruders out.

[…]

Source: How NSO Group’s iPhone-Hacking Exploit Works

Tesla Is Selling 2021 Model 3s With Degraded Batteries From 2017

When someone buys a new car, they generally expect to be getting a vehicle that’s fully up-to-date, not one built with leftover parts. Tesla customers who don’t read the fine print, though, could accidentally end up paying the price for a “new” Model 3 with a years-old battery, one which Tesla acknowledges may have already lost almost an eighth of its total capacity.

Use of older batteries in new Model 3s was first observed on Twitter, where user William Hummel shared images of a disclaimer on Tesla’s website that notes up to 12 percent reduced range stemming from the cars’ use of batteries built as far back as 2017. The screen captures came not from Tesla’s online configurator, as Hummel’s mention of a “new car” might lead one to believe, but from Tesla’s inventory page, where “new” Model 3s are indeed listed for sale with the range disclaimer shown, along with a partial explanation accessible via the “Learn More” button.

[…]

Source: Tesla Is Selling 2021 Model 3s With Degraded Batteries From 2017

DARPA Open Sources Resources to Aid Evaluation of Adversarial AI Defenses

[…] DARPA’s Guaranteeing AI Robustness against Deception (GARD) program […] focuses on a few core objectives, one of which is the development of a testbed for characterizing ML defenses and assessing the scope of their applicability […]

Ensuring that emerging defenses are keeping pace with – or surpassing – the capabilities of known attacks is critical to establishing trust in the technology and ensuring its eventual use. To support this objective, GARD researchers developed a number of resources and virtual tools to help bolster the community’s efforts to evaluate and verify the effectiveness of existing and emerging ML models and defenses against adversarial attacks.

“Other technical communities – like cryptography – have embraced transparency and found that if you are open to letting people take a run at things, the technology will improve,” said Bruce Draper, the program manager leading GARD.

[…]

GARD researchers from Two Six Technologies, IBM, MITRE, University of Chicago, and Google Research have collaboratively generated a virtual testbed, toolbox, benchmarking dataset, and training materials to enable this effort. Further, they have made these assets available to the broader research community via a public repository.

[…]

Central to the asset list is a virtual platform called Armory that enables repeatable, scalable, and robust evaluations of adversarial defenses. The Armory “testbed” gives researchers a way to pit their defenses against known attacks and relevant scenarios. It also lets them alter those scenarios, ensuring that the defenses can deliver repeatable results across a range of attacks.

Armory utilizes a Python library for ML security called Adversarial Robustness Toolbox, or ART. ART provides tools that enable developers and researchers to defend and evaluate their ML models and applications against a number of adversarial threats, such as evasion, poisoning, extraction, and inference. The toolbox was originally developed outside of the GARD program as an academic-to-academic sharing platform.
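To make the “evasion” threat concrete, here is a minimal, self-contained sketch of the fast gradient sign method, one of the classic evasion attacks that libraries like ART package, run against a toy logistic classifier. The weights and inputs are invented for illustration; ART’s real API wraps full ML frameworks rather than hand-rolled models like this.

```python
# Toy evasion attack (fast gradient sign method) against a hand-built
# logistic classifier. Illustrative of the attack class only.
import math

w = [1.5, -2.0, 0.5]   # toy model weights (assumed, for illustration)
b = 0.1

def predict(x):
    """Probability the model assigns to class 1."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

x = [0.2, -0.4, 0.9]   # a clean input the model classifies confidently
y = 1.0                # its true label
eps = 0.3              # L-infinity perturbation budget

# Gradient of the logistic loss w.r.t. the input, then one signed step:
# nudge every feature in whichever direction increases the loss.
grad = [(predict(x) - y) * wi for wi in w]
x_adv = [xi + eps * (1 if g > 0 else -1) for xi, g in zip(x, grad)]

print(round(predict(x), 3), round(predict(x_adv), 3))  # confidence drops
```

A defense evaluated in a testbed like Armory has to hold up against many such perturbation strategies, not just this simplest one.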

[…]

The Adversarial Patches Rearranged In COnText, or APRICOT, benchmark dataset is also available via the repository. APRICOT was created to enable reproducible research on the real-world effectiveness of physical adversarial patch attacks on object-detection systems. Uniquely among these resources, the dataset lets users project patches in 3D so they can more easily replicate and defeat physical attacks. “Essentially, we’re making it easier for researchers to test their defenses and ensure they are actually solving the problems they are designed to address,” said Draper.
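For readers unfamiliar with patch attacks: unlike the pixel-level perturbation above, a patch is a small, fixed pattern placed in the scene (or printed and photographed, as in APRICOT) that is crafted to make a detector misfire. A toy sketch of the digital paste step such research simulates; the image, patch, and placement here are invented for illustration:

```python
# Illustrative only: pasting a fixed "patch" into a scene, the digital
# analogue of the printed patches photographed for the APRICOT dataset.
def apply_patch(image, patch, top, left):
    """Return a copy of `image` (a 2D grid of pixel values) with `patch`
    pasted at (top, left). Real attacks would also warp the patch to
    match viewpoint and lighting."""
    out = [row[:] for row in image]
    for i, patch_row in enumerate(patch):
        for j, val in enumerate(patch_row):
            out[top + i][left + j] = val
    return out

image = [[0] * 6 for _ in range(4)]   # blank 4x6 "scene"
patch = [[9, 9], [9, 9]]              # 2x2 adversarial pattern
patched = apply_patch(image, patch, 1, 2)
print(patched[1])  # -> [0, 0, 9, 9, 0, 0]
```

The hard part, which APRICOT’s photographed examples capture, is that a physical patch must keep fooling the detector across angles, distances, and lighting, not just in this idealized overlay.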

[…]

Often, researchers and developers believe something will work across a spectrum of attacks, only to realize it lacks robustness against even minor deviations. To help address this challenge, Google Research has made its Self-Study repository available via the GARD evaluation toolkit. The repository contains “test dummies” – defenses that aren’t meant to be state of the art but represent a common idea or approach used to build defenses. The “dummies” are known to be broken, but they offer a way for researchers to dive in and go through the process of properly evaluating a defense’s faults.

[…]

The GARD program’s Holistic Evaluation of Adversarial Defenses repository is available at https://www.gardproject.org/. Interested researchers are encouraged to take advantage of these resources and check back often for updates.

Source: DARPA Open Sources Resources to Aid Evaluation of Adversarial AI Defenses