Researchers Say They’ve Found a ‘Master Face’ to Bypass Face Rec Tech

[…]

computer scientists at Tel Aviv University in Israel say they have discovered a way to bypass a large percentage of facial recognition systems by basically faking your face. The team calls this method the “master face” (like a “master key,” harhar), which uses artificial intelligence technologies to create a facial template—one that can consistently juke and unlock identity verification systems.

“Our results imply that face-based authentication is extremely vulnerable, even if there is no information on the target identity,” researchers write in their study. “In order to provide a more secure solution for face recognition systems, anti-spoofing methods are usually applied. Our method might be combined with additional existing methods to bypass such defenses,” they add.

According to the study, the vulnerability being exploited here is the fact that facial recognition systems use broad sets of markers to identify specific individuals. By creating facial templates that match many of those markers, a sort of omni-face can be created that is capable of fooling a high percentage of security systems. In essence, the attack is successful because it generates “faces that are similar to a large portion of the population.”
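
To make that mechanism concrete: most face-recognition systems compare fixed-length feature vectors ("embeddings") and accept anything that lands close enough to the enrolled one. Here is a minimal sketch of that check, with the embedding model simulated by random vectors since the study's actual models aren't specified here:

```python
# Minimal sketch of embedding-based face verification, the mechanism the
# "master face" attack exploits. embed() is simulated with random vectors;
# in a real system it is a face-recognition network mapping an aligned face
# image to a fixed-length feature vector (the "markers").
import numpy as np

MATCH_THRESHOLD = 0.6      # illustrative value; real systems tune this per model
rng = np.random.default_rng(0)

def embed(face_image) -> np.ndarray:
    # Stand-in for a real embedding model (e.g. a 128- or 512-dim vector).
    return rng.normal(size=128)

def is_match(probe: np.ndarray, enrolled: np.ndarray) -> bool:
    # Accept when the probe lands close enough to the enrolled embedding.
    return float(np.linalg.norm(probe - enrolled)) < MATCH_THRESHOLD
```

The attack works because this check only asks for closeness: a single embedding that happens to sit near many people's enrolled vectors gets accepted by many accounts at once.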

This face-of-all-faces is created by running a dedicated search algorithm against StyleGAN, a widely used generative model that produces photorealistic images of human faces that aren’t real. The team tested their face imprint on a large, open-source repository of 13,000 facial images operated by the University of Massachusetts and claim that it could unlock “more than 20% of the identities” within the database. Other tests showed even higher rates of success.
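
The search objective is easier to see in code than in prose. The sketch below is purely an illustration of the idea: `generator` (a StyleGAN model) and `embed` (a face-recognition network) are assumed to be supplied by the caller, and the paper itself uses an evolutionary optimizer rather than this naive random search:

```python
# Rough sketch of the search objective: find a generator latent whose face is
# "close enough" to as many enrolled identities as possible. Not the authors'
# actual pipeline; generator and embed are placeholders.
import numpy as np

MATCH_THRESHOLD = 0.6        # same kind of verification threshold as above

def coverage(z, dataset_embeddings, generator, embed) -> float:
    """Fraction of enrolled identities the generated face would unlock."""
    candidate = embed(generator(z))
    dists = np.linalg.norm(dataset_embeddings - candidate, axis=1)
    return float(np.mean(dists < MATCH_THRESHOLD))

def search_master_face(dataset_embeddings, generator, embed,
                       latent_dim=512, iters=1000, rng=np.random.default_rng()):
    best_z, best_cov = None, -1.0
    for _ in range(iters):                     # naive random search, for illustration
        z = rng.normal(size=latent_dim)        # StyleGAN-style latent vector
        cov = coverage(z, dataset_embeddings, generator, embed)
        if cov > best_cov:
            best_z, best_cov = z, cov
    return best_z, best_cov
```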

Furthermore, the researchers write that the face construct could hypothetically be paired with deepfake technologies, which would “animate” it, thus fooling “liveness detection methods” designed to check whether the subject in front of the camera is a real, live person.

Source: Researchers Say They’ve Found a ‘Master Face’ to Bypass Face Rec Tech

Apple confirms it will begin scanning your iCloud Photos

[…] Apple told TechCrunch that the detection of child sexual abuse material (CSAM) is one of several new features aimed at better protecting the children who use its services from online harm, including filters to block potentially sexually explicit photos sent and received through a child’s iMessage account. Another feature will intervene when a user tries to search for CSAM-related terms through Siri and Search.

Most cloud services — Dropbox, Google, and Microsoft to name a few — already scan user files for content that might violate their terms of service or be potentially illegal, like CSAM. But Apple has long resisted scanning users’ files in the cloud by giving users the option to encrypt their data before it ever reaches Apple’s iCloud servers.

Apple said its new CSAM detection technology — NeuralHash — instead works on a user’s device, and can identify if a user uploads known child abuse imagery to iCloud without decrypting the images until a threshold is met and a sequence of checks to verify the content is cleared.
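
Roughly, the shape of the check being described looks like the sketch below: an on-device perceptual hash compared against a database of known hashes, with nothing escalated until enough matches accumulate. This is emphatically not Apple’s actual NeuralHash / private-set-intersection construction; `perceptual_hash()` and the threshold value are placeholders:

```python
# Simplified illustration of threshold-gated hash matching, not Apple's protocol.
from typing import Iterable, Set

REPORT_THRESHOLD = 5    # illustrative only; the real threshold is Apple's to set

def perceptual_hash(image_bytes: bytes) -> bytes:
    """Stand-in for a neural/perceptual hash that survives resizing, recompression, etc."""
    raise NotImplementedError("placeholder for an actual perceptual-hash model")

def count_matches(photos: Iterable[bytes], known_hashes: Set[bytes]) -> int:
    """How many photos queued for iCloud upload hash-match the known database."""
    return sum(perceptual_hash(p) in known_hashes for p in photos)

def should_escalate(photos: Iterable[bytes], known_hashes: Set[bytes]) -> bool:
    # Nothing is surfaced for human review until the match count clears the threshold.
    return count_matches(photos, known_hashes) >= REPORT_THRESHOLD
```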

News of Apple’s effort leaked Wednesday when Matthew Green, a cryptography professor at Johns Hopkins University, revealed the existence of the new technology in a series of tweets. The news was met with resistance from security experts and privacy advocates, as well as from users accustomed to Apple’s approach to security and privacy, one that most other companies don’t offer.

Apple is trying to calm fears by baking in privacy through multiple layers of encryption, fashioned so that multiple steps must be cleared before anything ever reaches Apple’s final manual review.

[…]

Source: Apple confirms it will begin scanning iCloud Photos for child abuse images | TechCrunch

No matter what the cause, they have no right to be scanning your stuff at all, for any reason, at any time.

Apple is about to start scanning iPhone users’ photos

Apple is about to announce a new technology for scanning individual users’ iPhones for banned content. While it will be billed as a tool for detecting child abuse imagery, its potential for misuse is vast, based on the details entering the public domain.

The neural network-based tool will scan individual users’ iDevices for child sexual abuse material (CSAM), respected cryptography professor Matthew Green told The Register today.

Rather than using age-old hash-matching technology, however, Apple’s new tool – due to be announced today along with a technical whitepaper, we are told – will use machine learning techniques to identify images of abused children.

[…] Indiscriminately scanning end-user devices for CSAM is a new step in the ongoing global fight against this type of criminal content. In the UK, the Internet Watch Foundation’s hash list of prohibited content is shared with ISPs, who then block the material at source. Using machine learning to intrusively scan end-user devices is new, however – and may shake public confidence in Apple’s privacy-focused marketing.
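
For contrast, the “age-old” hash-matching approach reduces to something like this: an exact cryptographic digest of a file checked against a shared list of known items (IWF-style). The entry below is a placeholder; real lists are distributed privately to ISPs and platforms:

```python
# Core of classic hash-list blocking; the blocklist entry is a dummy value.
import hashlib

BLOCKLIST = {"0" * 64}   # placeholder SHA-256 hex digests of known content

def is_blocked(content: bytes) -> bool:
    return hashlib.sha256(content).hexdigest() in BLOCKLIST
```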

[…]

Governments in the West and authoritarian regions alike will be delighted by this initiative, Green feared. What’s to stop China (or some other censorious regime such as Russia or the UK) from feeding images of wanted fugitives into this technology and using that to physically locate them?

[…]

“Apple will hold the unencrypted database of photos (really the training data for the neural matching function) and your phone will hold the photos themselves. The two will communicate to scan the photos on your phone. Alerts will be sent to Apple if *multiple* photos in your library match, it can’t just be a single one.”
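
For what it’s worth, the “only if *multiple* photos match” property Green describes maps onto a well-known cryptographic building block: threshold secret sharing. The sketch below is a textbook Shamir-style illustration of that building block, not Apple’s actual protocol; the prime, threshold, and secret are arbitrary example values:

```python
# Shamir-style threshold secret sharing: the secret (think "decryption key")
# is unrecoverable until at least THRESHOLD shares exist. Illustration only.
import random

PRIME = 2**127 - 1           # a Mersenne prime, big enough for a demo secret
THRESHOLD = 3                # shares needed before the secret can be rebuilt

def make_shares(secret: int, n_shares: int, t: int = THRESHOLD):
    """Split `secret` into points on a random degree-(t-1) polynomial."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def recover(shares) -> int:
    """Lagrange interpolation at x = 0; only correct with >= THRESHOLD shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

if __name__ == "__main__":
    secret = 123456789                       # stands in for a decryption key
    shares = make_shares(secret, n_shares=10)
    print(recover(shares[:2]) == secret)     # False: below the threshold
    print(recover(shares[:3]) == secret)     # True: threshold reached
```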

The privacy-busting scanning tech will be deployed against America-based iThing users first, with the idea being to gradually expand it around the world as time passes. Green said it would be initially deployed against photos backed up in iCloud before expanding to full handset scanning.

[…]

Source: Apple is about to start scanning iPhone users’ devices for banned content, warns professor • The Register

Wow, no matter what the pretext (and the pretext of catching sex offenders is very often the first step down a much longer road, because hey, who can be against bringing sex offenders to justice, right?), Apple has just basically said that they think they have the right to read whatever they like on your phone. So much for privacy! So what will be next? Your emails? Text messages? Location history (again)?

As a user, you actually bought this hardware – anyone you don’t explicitly give consent to (and consent extracted by coercion, e.g. by limiting functionality, doesn’t count) should stay out of it!