Zoom to pay $85M for lying about encryption and sending data to Facebook and Google

Zoom has agreed to pay $85 million to settle claims that it lied about offering end-to-end encryption and gave user data to Facebook and Google without the consent of users. The settlement between Zoom and the filers of a class-action lawsuit also covers security problems that led to rampant “Zoombombings.”

The proposed settlement would generally give Zoom users $15 or $25 each and was filed Saturday at US District Court for the Northern District of California. It came nine months after Zoom agreed to security improvements and a “prohibition on privacy and security misrepresentations” in a settlement with the Federal Trade Commission, but the FTC settlement didn’t include compensation for users.

As we wrote in November, the FTC said that Zoom claimed it offers end-to-end encryption in its June 2016 and July 2017 HIPAA compliance guides, in a January 2019 white paper, in an April 2017 blog post, and in direct responses to inquiries from customers and potential customers. In reality, “Zoom did not provide end-to-end encryption for any Zoom Meeting that was conducted outside of Zoom’s ‘Connecter’ product (which are hosted on a customer’s own servers), because Zoom’s servers—including some located in China—maintain the cryptographic keys that would allow Zoom to access the content of its customers’ Zoom Meetings,” the FTC said. In real end-to-end encryption, only the users themselves have access to the keys needed to decrypt content.
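To make the distinction concrete, here's a minimal sketch of what genuine end-to-end encryption looks like, using the PyNaCl library: the relaying server only ever sees ciphertext and public keys, and the private keys needed to decrypt never leave the endpoints. This illustrates the principle only, not Zoom's (or anyone's) actual meeting protocol; the names and message are made up.

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each endpoint generates its own keypair; private keys never leave the device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Only the *public* keys are exchanged, possibly via the service's servers.
alice_box = Box(alice_private, bob_private.public_key)
bob_box = Box(bob_private, alice_private.public_key)

# Alice encrypts; any server relaying this ciphertext cannot read it,
# because it never holds either private key.
ciphertext = alice_box.encrypt(b"meeting audio/video frame")

# Only Bob, holding his own private key, can decrypt.
assert bob_box.decrypt(ciphertext) == b"meeting audio/video frame"
```

A provider that keeps the decryption keys on its own servers — as the FTC says Zoom did — can read the content regardless of what the marketing says.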

[…]

Source: Zoom to pay $85M for lying about encryption and sending data to Facebook and Google | Ars Technica

Stop using Zoom, Hamburg’s DPA warns state government – The US does not safeguard EU citizen data

Hamburg’s state government has been formally warned against using Zoom over data protection concerns.

The German state’s data protection agency (DPA) took the step of issuing a public warning yesterday, writing in a press release that the Senate Chancellery’s use of the popular videoconferencing tool violates the European Union’s General Data Protection Regulation (GDPR) since user data is transferred to the U.S. for processing.

The DPA’s concern follows a landmark ruling (Schrems II) by Europe’s top court last summer which invalidated a flagship data transfer arrangement between the EU and the U.S. (Privacy Shield), finding U.S. surveillance law to be incompatible with EU privacy rights.

The fallout from Schrems II has been slow to manifest — beyond an instant blanket of legal uncertainty. However, a number of European DPAs are now investigating the use of U.S.-based digital services because of the data transfer issue, in some instances publicly warning against the use of mainstream U.S. tools like Facebook and Zoom because user data cannot be adequately safeguarded when it’s taken over the pond.

German agencies are among the most proactive in this respect. But the EU’s data protection supervisor is also investigating the bloc’s use of cloud services from U.S. giants Amazon and Microsoft over the same data transfer concern.

[…]

The agency asserts that use of Zoom by the public body does not comply with the GDPR’s requirement for a valid legal basis for processing personal data, writing: “The documents submitted by the Senate Chancellery on the use of Zoom show that [GDPR] standards are not being adhered to.”

The DPA initiated a formal procedure earlier, via a hearing, on June 17, 2021, but says the Senate Chancellery failed to stop using the videoconferencing tool. Nor did it provide any additional documents or arguments to demonstrate compliant usage. Hence the DPA took the step of issuing a formal warning under Article 58(2)(a) of the GDPR.

[…]

Source: Stop using Zoom, Hamburg’s DPA warns state government | TechCrunch

How to Limit Spotify From Tracking You, Because It Knows Too Much – and sells it

Most Spotify users are likely aware the streaming service tracks their listening activity, search history, playlists, and the songs they like or skip—that’s all part of helping the algorithm figure out what you like, right? However, some users may be less OK with how much other data Spotify and its partners are logging.

According to Spotify’s privacy policy, the company tracks:

  • Your name
  • Email address
  • Phone number
  • Date of birth
  • Gender
  • Street address, country, and GPS location data
  • Login info
  • Billing info
  • Website cookies
  • IP address
  • Facebook user ID, login information, likes, and other data.
  • Device information like accelerometer or gyroscope data, operating system, model, browser, and even some data from other devices on your wifi network.

This information helps Spotify tailor song and artist recommendations to your tastes and is used to improve the in-app user experience, sure. However, the company also uses it to attract advertising partners, who can create personalized ads based on your information. And that doesn’t even touch on the third-party cross-site trackers that are eagerly eyeing your Spotify activity too.

Treating people and their data like a consumable resource is scummy, but it’s common practice for most companies and websites these days, and the general public’s response is typically a shrug (never mind that a survey of US adults revealed we place a high value on our personal data). However, it’s still a security risk. As we’ve seen repeatedly over the years, all it takes is one poorly secured server or an unusually skilled hacker to compromise the personal data that companies like Spotify hold onto.

And to top things off, almost all of your Spotify profile’s information is public by default—so anyone else with a Spotify account can easily look you up unless you go out of your way to change your settings.

Luckily, you can limit some of the data Spotify and connected third-party apps collect, and can review the personal information the app has stored. Spotify doesn’t offer that many data privacy options, and many of them are spread out across its web, desktop, and mobile apps, but we’ll show you where to find them all and which ones you should enable for the most private Spotify listening experience possible. You know, relatively.

How to change your Spotify account’s privacy settings

The web player is where to start if you want to tune up your Spotify privacy. Almost all of Spotify’s data privacy settings are found there, rather than in the mobile or desktop apps.

We’ll start by cutting down on how much personal data you share with Spotify.

  1. Log in to Spotify’s web player on desktop.
  2. Click your user icon then go to Account > Edit profile.
  3. Remove or edit any personal info that you’re able to.
  4. Uncheck “Share my registration data with Spotify’s content providers for marketing purposes.”
  5. Click “Save Changes.”

Next, let’s limit how Spotify uses your personal data for advertising.

  1. Go to Account > Privacy settings.
  2. Turn off “Process my personal data for tailored ads.” Note that you’ll still get just as many ads—and Spotify will still track you—but your personal data will no longer be used to deliver you targeted ads.
  3. Turn off “Process my Facebook data.” This will stop Spotify from using your Facebook account data to further refine the ads you hear.

Lastly, go to Account > Apps to review all the external apps linked to your Spotify account and see a list of all devices you’re logged in to. Remove any you don’t need or use anymore.

How to review your Spotify account data

You can also see how much of your personal data Spotify has collected. At the bottom of the Privacy Settings page, there’s an option to download your Spotify data for review. While you can’t remove this data from your account, it shows you a selection of personal information, your listening and search history, and other data the company has collected. Click “Request” to begin the process. Note that it can take up to 30 days for Spotify to get your data ready for download.
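Once the archive arrives it's mostly JSON, so you can poke at it yourself. A minimal sketch, assuming the export contains streaming-history files named like StreamingHistory0.json with an artistName field (file and field names vary between export versions, so adjust to what you actually receive):

```python
import glob
import json
from collections import Counter

# Tally which artists Spotify has logged you playing most often,
# based on the downloaded export (file/field names assumed, not guaranteed).
plays = Counter()
for path in glob.glob("my_spotify_data/StreamingHistory*.json"):
    with open(path, encoding="utf-8") as f:
        for entry in json.load(f):
            plays[entry.get("artistName", "unknown")] += 1

for artist, count in plays.most_common(10):
    print(f"{count:5d}  {artist}")
```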

How to hide public playlists and listening activity on Spotify

Your Spotify playlists and listening activity are public by default, but you can quickly turn them off or even block certain listening activity in Spotify’s web and desktop apps. While this doesn’t affect Spotify’s data tracking, it’s still a good idea to keep some info hidden if you’re trying to make Spotify as private as possible.

How to turn off Spotify listening activity

Desktop

  1. Click your profile image and go to Settings > Social
  2. Turn off “Make my new playlists public.”
  3. Turn off “Share my listening activity on Spotify.”

Mobile

  1. Tap the settings icon in the upper-right of the app.
  2. Scroll down to “Social.”
  3. Disable “Listening Activity.”

How to hide Spotify Playlists

Don’t forget to hide previously created playlists, which are made public by default. This can be done from the desktop, web, and mobile apps.

Mobile

  1. Open the “Your Library” tab.
  2. Select a playlist.
  3. Tap the three-dot icon in the upper-right of the screen.
  4. Select “Make Secret.”

Desktop app and web player

  1. Open a playlist from the library bar on the left.
  2. Click the three-dot icon by the Playlist’s name.
  3. Select “Make Secret.”

How to use Private Listening mode on Spotify

Spotify’s Private Listening mode also hides your listening activity, but you need to enable it manually each time you want to use it.

Mobile

  1. In the app, go to Settings > Social.
  2. Tap “Enable private session.”

Desktop app and web player

There are three ways to enable a Private session on desktop:

  • Click your profile picture then select “Private session.”
  • Or, click the “…” icon in the upper-left and go to File > Private session.
  • Or, go to Settings > Social and toggle “Start a private session to listen anonymously.”

Note that Private sessions only affect what other users see (or don’t see, rather). They don’t stop Spotify from tracking your activity—though as Wired points out, Spotify’s Privacy Policy vaguely implies Private Mode “may not influence” your recommendations, so it’s possible some data isn’t tracked while this mode is turned on. It’s better to use the privacy controls outlined in the sections above if you want to change how Spotify collects data.

How to limit third-party cookie tracking in Spotify

Turning on the privacy settings above will help reduce how much data Spotify tracks and uses for advertising and keep some of your Spotify listening history hidden from other users, but you should also take steps to limit how other apps and websites track your Spotify activity.


The desktop app has built-in cookie blocking controls that can do this:

  1. In the desktop app, click your username in the top right corner.
  2. Go to Settings > Show advanced settings.
  3. Scroll down to “Privacy” and turn on “Block all cookies for this installation of the Spotify desktop app.”
  4. Close and restart the app for the change to take effect.

For iOS and iPad users, you can disable app tracking in your device’s settings. Android users have a similar option, though it’s not as aggressive. And for those listening on the Spotify web player, use browsers with strict privacy controls like Safari, Firefox, or Brave.

The last resort: Delete your Spotify account

Even with all possible privacy settings turned on and Private Listening sessions enabled at all times, Spotify is still tracking your data. If that is absolutely unacceptable to you, the only real option is to delete your account. This will remove all your Spotify data for good—just make sure you download and back up any data you want to import to other services before you go through with it.

  1. Go to the Contact Spotify Support web page and sign in with your Spotify account.
  2. Select the “Account” section.
  3. Click “I want to close my account” from the list of options.
  4. Scroll down to the bottom of the page and click “Close Account.”
  5. Follow the on-screen prompts, clicking “Continue” each time to move forward.
  6. After the final confirmation, Spotify will send you an email with the cancellation link. Click the “Close My Account” button to verify you want to delete your account (this link is only active for 24 hours).

To be clear, we’re not advocating everyone go out and delete their Spotify accounts over the company’s privacy policy and advertising practices, but it’s always important to know how—and why—the apps and websites we use are tracking us. As we said at the top, even companies with the best intentions can fumble your data, unwittingly delivering it into the wrong hands.

Even if you’re cool with Spotify tracking you and don’t feel like enabling the options we’ve outlined in this guide, take a moment to tune up your account’s privacy with a strong password and two-factor sign-in, and remove any unnecessary info from your profile. These extra steps will help keep you safe if there’s ever an unexpected security breach.

Source: How to Limit Spotify From Tracking You, Because It Knows Too Much

Apple’s iPhone computer vision has the potential to preserve privacy but also break it completely

[…]

an AI on your phone will scan all those you have sent and will send to iPhotos. It will generate fingerprints that purportedly identify pictures, even if highly modified, that will be checked against fingerprints of known CSAM material. Too many of these – there’s a threshold – and Apple’s systems will let Apple staff investigate. They won’t get the pictures, but rather a voucher containing a version of the picture. But that’s not the picture, OK? If it all looks too dodgy, Apple will inform the authorities.
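The flow described there — per-image fingerprints compared against a database of known fingerprints, with review only after a threshold of matches — boils down to something like the sketch below. To be clear, this is a simplified illustration of the described pipeline, not Apple's NeuralHash or its threshold scheme; the hash function and threshold value are placeholders.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Placeholder: a real system uses a perceptual hash that survives
    # resizing and re-encoding, not a cryptographic hash of the raw bytes.
    return hashlib.sha256(image_bytes).hexdigest()

KNOWN_FINGERPRINTS = {"fingerprints supplied by a clearinghouse go here"}
MATCH_THRESHOLD = 30  # arbitrary illustrative value

def library_triggers_review(images: list[bytes]) -> bool:
    """True if enough uploads match the known database to trigger human review."""
    matches = sum(1 for img in images if fingerprint(img) in KNOWN_FINGERPRINTS)
    return matches >= MATCH_THRESHOLD
```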

[…]

In a blog post “Recognizing People in Photos Through Private On-Device Machine Learning” last month, Apple plumped itself up and strutted its funky stuff on how good its new person recognition process is. Obscured, oddly lit, accessorised, madly angled and other bizarrely presented faces are no problemo, squire.

By dint of extreme cleverness and lots of on-chip AI, Apple says it can efficiently recognise everyone in a gallery of photos. It even has a Hawking-grade equation, just to show how serious it is, as proof that “finally, we rescale the obtained features by s and use it as logit to compute the softmax cross-entropy loss based on the equation below.” Go, look. It’s awfully science-y.

The post is 3,500 words long, complex, and a very detailed paper on computer vision, one of the two tags Apple has given it. The other tag, Privacy, can be entirely summarised in six words: it’s on-device, therefore it’s private. No equation.

That would be more comforting if Apple hadn’t said days later how on-device analysis is going to be a key component in informing law enforcement agencies about things they disapprove of. Put the two together, and there’s a whole new and much darker angle to the fact, sold as a major consumer benefit, that Apple has been cramming in as much AI as it can so it can look at pictures as you take and after you’ve stored them.

We’ve all been worried about how mobile phones are stuffed with sensors that can watch what we watch, hear what we hear, track where we go and note what we do. The evolving world of personal data privacy is based around these not being stored in the vast vaults of big data, keeping them from being grist to the mill of manipulating our digital personas.

But what happens if the phone itself grinds that corn? It may never share a single photograph without your permission, but what if it can look at that photograph and generate precise metadata about what, who, how, when, and where it depicts?

This is an aspect of edge computing that is ahead of the regulators, even those of the EU who want to heavily control things like facial recognition. By the time any such regulation is produced, countless millions of devices will be using it to ostensibly provide safe, private, friendly on-device services that make taking and keeping photographs so much more convenient and fun.

It’s going to be very hard to turn that off, and very easy to argue for exemptions that weaken the regs to the point of pointlessness. Especially if the police and security services lobby hard as well, which they will as soon as they realise that this defeats end-to-end encryption without even touching end-to-end encryption.

So yes, Apple’s anti-CSAM model is capable of being used without impacting the privacy of the innocent, if it is run exactly as version 1.0 is described. It is also capable of working with the advances elsewhere in technology to break that privacy utterly, without setting off the tripwires of personal protection we’re putting in place right now.

[…]

Source: Apple’s iPhone computer vision has the potential to preserve privacy but also break it completely • The Register

Senators ask Amazon how it will use palm print data from its stores

If you’re concerned that Amazon might misuse palm print data from its One service, you’re not alone. TechCrunch reports that Senators Amy Klobuchar, Bill Cassidy and Jon Ossoff have sent a letter to new Amazon chief Andy Jassy asking him to explain how the company might expand use of One’s palm print system beyond stores like Amazon Go and Whole Foods. They’re also worried the biometric payment data might be used for more than payments, such as for ads and tracking.

The politicians are concerned that Amazon One reportedly uploads palm print data to the cloud, creating “unique” security issues. The move also casts doubt on Amazon’s “respect” for user privacy, the senators said.

In addition to asking about expansion plans, the senators wanted Jassy to outline the number of third-party One clients, the privacy protections for those clients and their customers and the size of the One user base. The trio gave Amazon until August 26th to provide an answer.

[…]

The company has offered $10 in credit to potential One users, raising questions about its eagerness to collect palm print data. This also isn’t the first time Amazon has clashed with government

[…]

Amazon declined to comment, but pointed to an earlier blog post where it said One palm images were never stored on-device and were sent encrypted to a “highly secure” cloud space devoted just to One content.

Source: Senators ask Amazon how it will use palm print data from its stores (updated) | Engadget

Basically, keeping all these palm prints in the cloud is an incredibly insecure way to store biometric data that people can’t ever change, short of burning their palms off.

Boffins propose Pretty Good Phone Privacy to end pretty invasive location data harvesting by telcos

[…] In “Pretty Good Phone Privacy,” [PDF] a paper scheduled to be presented on Thursday at the Usenix Security Symposium, Schmitt and Barath Raghavan, assistant professor of computer science at the University of Southern California, describe a way to re-engineer the mobile network software stack so that it doesn’t betray the location of mobile network customers.

“It’s always been thought that since cell towers need to talk to phones then all users have to accept the status quo in which mobile operators track our every movement and sell the data to data brokers (as has been extensively reported),” said Schmitt. “We show how it’s possible to protect users’ mobile privacy while at the same time providing normal connectivity, and to do so without changing any of the hardware in mobile networks.”

In recent years, mobile carriers have been routinely selling and leaking location data, to the detriment of customer privacy. Efforts to alter the status quo have been hampered by an uneven regulatory landscape, the resistance of data brokers that profit from the status quo, and the assumption that cellular network architecture requires knowing where customers are located.

[…]

The purpose of Pretty Good Phone Privacy (PGPP) is to avoid using a unique identifier for authenticating customers and granting access to the network. It’s a technology that allows a Mobile Virtual Network Operator (MVNO) to issue SIM cards with identical SUPIs for every subscriber because the SUPI is only used to assess the validity of the SIM card. The PGPP network can then assign an IP address and a GUTI (Globally Unique Temporary Identifier) that can change in subsequent sessions, without telling the MVNO where the customer is located.

“We decouple network connectivity from authentication and billing, which allows the carrier to run Next Generation Core (NGC) services that are unaware of the identity or location of their users but while still authenticating them for network use,” the paper explains. “Our architectural change allows us to nullify the value of the user’s SUPI, an often targeted identifier in the cellular ecosystem, as a unique identifier.”
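A rough sketch of what that decoupling means in practice: every PGPP SIM presents the same SUPI, so passing the check proves only “this is a valid, paid-up SIM”, and the temporary identifier handed back is random and changes each session. This illustrates the idea in the quoted passage, not the paper’s actual protocol — read the PDF for that.

```python
import secrets

SHARED_SUPI = "999990000000001"  # illustrative: every PGPP SIM carries the same value

def attach(presented_supi: str) -> str | None:
    """Grant network access if the SIM is valid, without learning who the user is."""
    if presented_supi != SHARED_SUPI:
        return None  # not a valid SIM for this operator
    # Hand out a fresh temporary identifier (GUTI-like) for this session only;
    # it links to no named subscriber and changes on the next attach.
    return f"GUTI-{secrets.token_hex(8)}"

print(attach(SHARED_SUPI))  # e.g. GUTI-3f9c1a2b7d4e5f60, different every time
```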

[…]

Its primary focus is defending against the surreptitious sale of location data by network providers.

[…]

Schmitt argues PGPP will help mobile operators comply with current and emerging data privacy regulations in US states like California, Colorado, and Virginia, and post-GDPR rules in Europe.

Source: Boffins propose Pretty Good Phone Privacy to end pretty invasive location data harvesting by telcos • The Register

China stops networked vehicle data going offshore under new infosec rules

China has drafted new rules required of its autonomous and networked vehicle builders.

Data security is front and centre in the rules, with manufacturers required to store data generated by cars – and describing their drivers – within China. Data is allowed to go offshore, but only after government scrutiny.

Manufacturers are also required to name a chief of network security, who gets the job of ensuring autonomous vehicles can’t fall victim to cyber attacks. Made-in-China auto-autos are also required to be monitored to detect security issues.

Over-the-air upgrades are another requirement, with vehicle owners to be offered verbose information about the purpose of software updates, the time required to install them, and the status of upgrades.

Behind the wheel, drivers must be informed about the vehicle’s capabilities and the responsibilities that rest on their human shoulders. All autonomous vehicles will be required to detect when a driver’s hands leave the wheel, and to detect when it’s best to cede control to a human.

If an autonomous vehicle’s guidance systems fail, it must be able to hand back control.

[…]

Source: China stops networked vehicle data going offshore under new infosec rules • The Register

And again China is doing what the EU and US should be doing to a certain extent.

Have you made sure you have changed these Google Pay privacy settings?

Google Pay is an online payment system and digital wallet that makes it easy to buy anything on your mobile device or with your mobile device. But if you’re concerned about what Google is doing with all your data (which you probably should be), Google doesn’t make it easy to manage your settings: Google Pay keeps some of them effectively hidden.

 

A report from Bleeping Computer shows that privacy settings aren’t available through the main Google Pay setting page that is accessible through the navigation sidebar.

The URL for that settings page is:

https://pay.google.com/payments/u/0/home#settings

 

On that page, users can change general settings like address and payment methods.

But if users want to change privacy settings, they have to go to a separate page:

https://pay.google.com/payments/u/0/home?page=privacySettings#privacySettings

 

On that screen, users can adjust all the same settings available on the other settings page, but they can also address three additional privacy settings—controlling whether Google Pay is allowed to share account information, personal information, and creditworthiness.

Here’s the full language of those three options:

  • Allow Google Payment Corporation to share third party creditworthiness information about you with other companies owned and controlled by Google LLC for their everyday business purposes.

  • Allow your personal information to be used by other companies owned and controlled by Google LLC to market to you. Opting out here does not impact whether other companies owned and controlled by Google LLC can market to you based on information you provide to them outside of Google Payment Corporation.

  • Allow Google LLC or its affiliates to inform a third party merchant, whose site or app you visit, whether you have a Google Payments account that can be used for payment to that merchant. Opting out may impact your ability to use Google Payments to transact with certain third party merchants.

 

According to Bleeping Computer, the default of Google Pay is to enable all the above settings. In order to opt out, users have to go to the special URL that is not accessible through the navigation bar.

As the Reddit post that inspired the Bleeping Computer report claims, this discrepancy makes it appear that Google Pay is hiding its privacy options. “Google is not walking the talk when it claims to make it easy for their users to control the privacy and use of their own data,” the Redditor surmised.

A Google spokesperson told Gizmodo they’re working to make the privacy settings more accessible. “The different settings views described here are an issue resulting from a previous software update and we are working to fix this right away so that these privacy settings are always visible on pay.google.com,” the spokesperson told Gizmodo.

“All users are currently able to access these privacy settings via the ‘Google Payments privacy settings page’ link in the Google Pay privacy notice.”

In the meantime, here’s that link again for the privacy settings. Go ahead and uncheck those three boxes, if you feel so inclined.

Source: How To Find Google Pay’s Hidden Privacy Settings

Here’s hoping that my bank can set up its own version of Google Pay instead of integrating with it. I definitely don’t want Google or Apple getting their grubby little paws on my financial data.

Create virtual cards to pay online with Privacy

Protect your card details and your money by creating virtual cards at each place you spend online, or for each purchase

Create single-use cards that close themselves automatically

browser extension to create and auto-fill card numbers at checkout

Privacy Cards put the control in your hands when you make a purchase online. Business or personal, one-time or subscription, now you decide who can charge your card, how much, how often, and you can close a card any time

Source: Privacy – Smarter Payments

WhatsApp head says Apple’s child safety update is a ‘surveillance system’

One day after Apple confirmed plans for new software that will allow it to detect images of child abuse on users’ iCloud photos, Facebook’s head of WhatsApp says he is “concerned” by the plans.

In a thread on Twitter, Will Cathcart called it an “Apple built and operated surveillance system that could very easily be used to scan private content for anything they or a government decides it wants to control.” He also raised questions about how such a system may be exploited in China or other countries, or abused by spyware companies.

[…]

Source: WhatsApp head says Apple’s child safety update is a ‘surveillance system’ | Engadget

Pots and kettles – but he’s right, though. This is a very serious privacy lapse on Apple’s part.

Apple confirms it will begin scanning your iCloud Photos

[…] Apple told TechCrunch that the detection of child sexual abuse material (CSAM) is one of several new features aimed at better protecting the children who use its services from online harm, including filters to block potentially sexually explicit photos sent and received through a child’s iMessage account. Another feature will intervene when a user tries to search for CSAM-related terms through Siri and Search.

Most cloud services — Dropbox, Google, and Microsoft to name a few — already scan user files for content that might violate their terms of service or be potentially illegal, like CSAM. But Apple has long resisted scanning users’ files in the cloud by giving users the option to encrypt their data before it ever reaches Apple’s iCloud servers.

Apple said its new CSAM detection technology — NeuralHash — instead works on a user’s device, and can identify if a user uploads known child abuse imagery to iCloud without decrypting the images until a threshold is met and a sequence of checks to verify the content are cleared.

News of Apple’s effort leaked Wednesday when Matthew Green, a cryptography professor at Johns Hopkins University, revealed the existence of the new technology in a series of tweets. The news was met with resistance from some security experts and privacy advocates, but also from users accustomed to Apple’s approach to security and privacy, which most other companies don’t match.

Apple is trying to calm fears by baking in privacy through multiple layers of encryption, fashioned in a way that requires multiple steps before it ever makes it into the hands of Apple’s final manual review.

[…]

Source: Apple confirms it will begin scanning iCloud Photos for child abuse images | TechCrunch

No matter what the cause, they have no right to be scanning your stuff at all, for any reason, at any time.

Apple is about to start scanning iPhone users’ photos

Apple is about to announce a new technology for scanning individual users’ iPhones for banned content. While it will be billed as a tool for detecting child abuse imagery, its potential for misuse is vast based on details entering the public domain.

The neural network-based tool will scan individual users’ iDevices for child sexual abuse material (CSAM), respected cryptography professor Matthew Green told The Register today.

Rather than using age-old hash-matching technology, however, Apple’s new tool – due to be announced today along with a technical whitepaper, we are told – will use machine learning techniques to identify images of abused children.

[…] Indiscriminately scanning end-user devices for CSAM is a new step in the ongoing global fight against this type of criminal content. In the UK the Internet Watch Foundation’s hash list of prohibited content is shared with ISPs who then block the material at source. Using machine learning to intrusively scan end user devices is new, however – and may shake public confidence in Apple’s privacy-focused marketing.

[…]

Governments in the West and authoritarian regimes alike will be delighted by this initiative, Green feared. What’s to stop China (or some other censorious regime such as Russia or the UK) from feeding images of wanted fugitives into this technology and using that to physically locate them?

[…]

“Apple will hold the unencrypted database of photos (really the training data for the neural matching function) and your phone will hold the photos themselves. The two will communicate to scan the photos on your phone. Alerts will be sent to Apple if *multiple* photos in your library match, it can’t just be a single one.”

The privacy-busting scanning tech will be deployed against America-based iThing users first, with the idea being to gradually expand it around the world as time passes. Green said it would be initially deployed against photos backed up in iCloud before expanding to full handset scanning.

[…]

Source: Apple is about to start scanning iPhone users’ devices for banned content, warns professor • The Register

Wow, no matter what the pretext (and the pretext of catching sex offenders is very often the first step taken on a much longer road, because hey, who can be against bringing sex offenders to justice, right?), Apple has just basically said that it thinks it has the right to read whatever it likes on your phone. So much for privacy! So what will be next? Your emails? Text messages? Location history (again)?

As a user, you actually bought this hardware – anyone you don’t explicitly give consent to (and that means consent not coerced by, for example, limiting functionality) should stay out of it!

Amazon hit with $887 million fine by European privacy watchdog

Amazon has been issued with a fine of 746 million euros ($887 million) by a European privacy watchdog for breaching the bloc’s data protection laws.

The fine, disclosed by Amazon on Friday in a securities filing, was issued two weeks ago by Luxembourg’s privacy regulator.

The Luxembourg National Commission for Data Protection said Amazon’s processing of personal data did not comply with the EU’s General Data Protection Regulation.

[…]

Source: Amazon hit with $887 million fine by European privacy watchdog

Pretty massively strange that they don’t tell us what exactly they are fining Amazon for…

QR Menu Codes Are Tracking You More Than You Think

If you’ve returned to the restaurants and bars that have reopened in your neighborhood lately, you might have noticed a new addition to the post-quarantine decor: QR codes. Everywhere. And as they’ve become more ubiquitous on the dining scene, so has the quiet tracking and targeting that they do.

That’s according to a new analysis by the New York Times, which found these QR codes can collect customer data—enough to create what Jay Stanley, a senior policy analyst at the American Civil Liberties Union, called an “entire apparatus of online tracking” that remembers who you are every time you sit down for a meal. While the data itself contains pretty uninteresting information, like your order history or contact information, it turns out there’s nothing stopping that data from being passed to whomever the establishment wants.

[…]

But as the Times piece points out, these little pieces of tech aren’t as innocuous as they might initially seem. Aside from storing data like menus or drink options, QR codes are often designed to transmit certain data about the person who scanned them in the first place—like their phone number or email address, along with how often the user might be scanning the code in question. This data collection comes with a few perks for the restaurants that use the codes (they know who their repeat customers are and what they might order). The only problem is that we don’t actually know where that data goes.

Source: QR Menu Codes Are Tracking You More Than You Think

Note for ant fuckers: the QR code does not in fact “transmit” anything – the server behind it detects that you have visited it (if you follow a URL in the code) and then collects data based on what you do on the server, but also on the initial connection (e.g. location through IP address, URL parameters which can include location information, OS, browser type, etc.)
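In other words, the tracking is ordinary server-side logging. A minimal sketch of what the endpoint behind a menu QR code can record from nothing more than your visit (Flask; the field names and the table parameter are illustrative):

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/menu")
def menu():
    # Everything below arrives with an ordinary page load -- no special
    # permission from the phone is needed.
    visit = {
        "ip": request.remote_addr,                        # coarse location via geo-IP
        "user_agent": request.headers.get("User-Agent"),  # device / OS / browser
        "table": request.args.get("table"),               # parameters baked into the QR URL
        "campaign": request.args.get("utm_source"),
    }
    print(visit)  # a real deployment would ship this to analytics storage
    return "Here is the menu."

if __name__ == "__main__":
    app.run()
```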

Inside the Industry That Unmasks People at Scale: yup your mobile advertising ID isn’t anonymous either

Tech companies have repeatedly reassured the public that trackers used to follow smartphone users through apps are anonymous or at least pseudonymous, not directly identifying the person using the phone. But what they don’t mention is that an entire overlooked industry exists to purposefully and explicitly shatter that anonymity.

They do this by linking mobile advertising IDs (MAIDs) collected by apps to a person’s full name, physical address, and other personal identifiable information (PII). Motherboard confirmed this by posing as a potential customer to a company that offers linking MAIDs to PII.

“If shady data brokers are selling this information, it makes a mockery of advertisers’ claims that the truckloads of data about Americans that they collect and sell is anonymous,” Senator Ron Wyden told Motherboard in a statement.

“We have one of the largest repositories of current, fresh MAIDS<>PII in the USA,” Brad Mack, CEO of data broker BIGDBM told us when we asked about the capabilities of the product while posing as a customer. “All BIGDBM USA data assets are connected to each other,” Mack added, explaining that MAIDs are linked to full name, physical address, and their phone, email address, and IP address if available. The dataset also includes other information, “too numerous to list here,” Mack wrote.

A MAID is a unique identifier a phone’s operating system assigns to an individual device. For Apple, that is the IDFA, which Apple has recently moved to largely phase out. For Google, that is the AAID, or Android Advertising ID. Apps often grab a user’s MAID and provide it to a host of third parties. In one leaked dataset from a location tracking firm called Predicio previously obtained by Motherboard, the data included the precise locations of users of a Muslim prayer app. That data was somewhat pseudonymized, because it didn’t contain the specific users’ names, but it did contain their MAIDs. Because of firms like BIGDBM, another company that buys the sort of data Predicio had could take that or similar data and attempt to unmask the people in the dataset simply by paying a fee.
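The “unmasking” itself is nothing more exotic than a database join on the MAID. A minimal sketch with made-up records and column names:

```python
import pandas as pd

# "Anonymous" app/location events, keyed only by the advertising ID.
events = pd.DataFrame([
    {"maid": "38400000-8cf0-11bd-b23e-10b96e40000d",
     "lat": 52.37, "lon": 4.90, "app": "prayer_times"},
])

# A broker's MAID-to-PII lookup table (illustrative).
pii = pd.DataFrame([
    {"maid": "38400000-8cf0-11bd-b23e-10b96e40000d",
     "name": "Jane Doe", "address": "1 Example St", "email": "jane@example.com"},
])

# One merge and the "pseudonymous" events are tied to a named person.
unmasked = events.merge(pii, on="maid", how="left")
print(unmasked[["name", "address", "app", "lat", "lon"]])
```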

[…]

“This real-world research proves that the current ad tech bid stream, which reveals mobile IDs within them, is a pseudonymous data flow, and therefore not-compliant with GDPR,” Edwards told Motherboard in an online chat.

“It’s an anonymous identifier, but has been used extensively to report on user behaviour and enable marketing techniques like remarketing,” a post on the website of the Internet Advertising Bureau, a trade group for the ad tech industry, reads, referring to MAIDs.

In April Apple launched iOS 14.5, which introduced sweeping changes to how apps can track phone users by making each app explicitly ask for permission to track them. That move has resulted in a dramatic dip in the amount of data available to third parties, with just 4 percent of U.S. users opting-in. Google said it plans to implement a similar opt-in measure broadly across the Android ecosystem in early 2022.

[…]

Source: Inside the Industry That Unmasks People at Scale

Sam Altman’s New Startup Wants to Give You Crypto for Eyeball Scans – yes, this is a terrible Dr. Evil-style plan

You should probably sit down for this one. Sam Altman, the former CEO of famed startup incubator Y Combinator, is reportedly working on a new cryptocurrency that’ll be distributed to everyone on Earth. Once you agree to scan your eyeballs.

Yes, you read correctly.

You can thank Bloomberg for inflicting this cursed news on the rest of us. In its report, Bloomberg says Altman’s forthcoming cryptocurrency and the company behind it, both dubbed Worldcoin, recently raised $25 million from investors. The company is purportedly backed by Andreessen Horowitz, LinkedIn founder Reid Hoffman, and Day One Ventures.

“I’ve been very interested in things like universal basic income and what’s going to happen to global wealth redistribution and how we can do that better,” Altman told Bloomberg, explaining what fever dream inspired this.
[…]

What supposedly makes Worldcoin different is it adds a hardware component to cryptocurrency in a bid to “ensur[e] both humanness and uniqueness of everybody signing up, while maintaining their privacy and the overall transparency of a permissionless blockchain.” Specifically, Bloomberg says the gadget is a portable “silver-colored spherical gizmo the size of a basketball” that’s used to scan people’s irises. It’s undergoing testing in some cities, and since Worldcoin is not yet ready for distribution, the company is giving volunteers other cryptocurrencies like Bitcoin in exchange for participating. There are supposedly fewer than 20 prototypes of this eyeball scanning orb, and currently, each reportedly costs $5,000 to make.

Supposedly the whole iris scanning thing is “essential” as it would generate a “unique numerical code” for each person, thereby discouraging scammers from signing up multiple times. As for the whole privacy problem, Worldcoin says the scanned image is deleted afterward and the company purportedly plans to be “as transparent as possible.”
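Deduplicating people by a biometric “code” is harder than it sounds, because two scans of the same eye never match bit-for-bit. Classic iris systems (following Daugman) therefore compare binary iris codes by Hamming distance against a threshold rather than by exact equality. A minimal sketch of that general approach — not Worldcoin’s actual scheme, which hasn’t been published; the code length and threshold are illustrative:

```python
import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Fraction of differing bits between two binary iris codes."""
    return np.count_nonzero(code_a != code_b) / code_a.size

# Illustrative 2048-bit codes; a fresh scan of the *same* eye differs slightly,
# so deduplication uses a distance threshold instead of exact matching.
enrolled = np.random.randint(0, 2, 2048, dtype=np.uint8)
new_scan = enrolled.copy()
new_scan[:100] ^= 1  # simulate sensor noise on ~5% of bits

THRESHOLD = 0.32  # ballpark figure from the iris literature; illustrative only
if hamming_distance(enrolled, new_scan) < THRESHOLD:
    print("Same eye already enrolled -- reject the duplicate signup.")
```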

Source: Sam Altman’s New Startup Wants to Give You Crypto for Eyeball Scans

Advertisers Are Selling Americans’ Data to Hundreds of Shady Foreign Businesses

Senator Ron Wyden has released a list of hundreds of secretive, foreign-owned companies that are buying up Americans’ data. Some of the customers include companies based in states that are ostensibly “unfriendly” to the U.S., like Russia and China.

First reported by Motherboard, the news comes after recent information requests made by a bipartisan coalition of Senators, who asked prominent advertising exchanges to provide a transparent list of any “foreign-headquartered or foreign-majority owned” firms to whom they sell consumer “bidstream data.” Such data is typically collected, bought, and sold amidst the intricate advertising ecosystem, which uses “real-time bidding” to monetize consumer preferences and interests.

Wyden, who helped lead the effort, has expressed concerns that Americans’ data could fall into the hands of foreign intelligence agencies to “supercharge hacking, blackmail, and influence campaigns,” as a previous letter from him and other Senators puts it.

“Few Americans realize that some auction participants are siphoning off and storing ‘bidstream’ data to compile exhaustive dossiers about them. In turn, these dossiers are being openly sold to anyone with a credit card, including to hedge funds, political campaigns, and even to governments,” the letter states.

In response to the information requests, most companies seem to have responded with vague, evasive answers. However, advertising firm Magnite has provided a list of over 150 different companies it sells to while declining to note which countries they are based in. Wyden’s staff spent time researching the companies and Motherboard reports that the list includes the likes of Adfalcon—a large ad firm based in Dubai that calls itself the “first mobile advertising network in the Middle East”—as well as Chinese companies like Adtiming and Mobvista International.

Magnite’s response further shows that the kinds of data it provides to these companies may include all sorts of user information—including age, name, and the site names and domains they visit, device identifiers, IP address, and other information that would help any discerning observer piece together a fairly comprehensive picture of who you are, where you’re located, and what you’re interested in.

You can peruse the full list of companies that Magnite works with and, foreign ownership aside, they just naturally sound creepy. With confidence-inspiring names like “12Mnkys,” “Freakout,” “CyberAgent Dynalst,” and “Zucks,” these firms—many of which you’d be hard-pressed to even find an accessible website for—are doing God knows what with the data they procure.

The question naturally arises: How is it that these companies that we know literally nothing about seem to have access to so much of our personal information? Also: Where are the federal regulations when you need them?

Source: Advertisers Are Selling Americans’ Data to Hundreds of Shady Foreign Businesses

And that’s why Europe has GDPR

Windows Users Surprised by Windows 11’s Short List of Supported CPUs – and front-facing camera requirements

While a lot of focus has been on the TPM requirements for Windows 11, Microsoft has since updated its documentation to provide a complete list of supported processors. At present the list includes only Intel 8th Generation Core processors or newer, and AMD Ryzen Zen+ processors or newer, effectively limiting Windows 11 to PCs less than 4-5 years old.

Notably absent from the list is the Intel Core i7-7820HQ, the processor used in Microsoft’s current flagship $3500+ Surface Studio 2. This has prompted many threads on Reddit from users angry that their (in some cases very new) Surface PC is failing the Windows 11 upgrade check.
The Verge confirms: Windows 11 will only support 8th Gen and newer Intel Core processors, alongside [Intel’s 2016-era] Apollo Lake and newer Pentium and Celeron processors. That immediately rules out millions of existing Windows 10 devices from upgrading to Windows 11… Windows 11 will also only support AMD Ryzen 2000 and newer processors, and 2nd Gen or newer [AMD] EPYC chips. You can find the full list of supported processors on Microsoft’s site…

Originally, Microsoft noted that CPU generation requirements are a “soft floor” limit for the Windows 11 installer, which should have allowed some older CPUs to install Windows 11 with a warning, but hours after we published this story, the company updated that page to explicitly require the list of chips above.

Many Windows 10 users have been downloading Microsoft’s PC Health App (available here) to see whether Windows 11 works on their systems, only to find it fails the check… This is the first significant shift in Windows hardware requirements since the release of Windows 8 back in 2012, and the CPU changes are understandably catching people by surprise.
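If you just want to see what the checker is looking at, the CPU’s marketing name is easy to read out and compare against Microsoft’s published list. A rough sketch for Windows (the allow-list below is a tiny, deliberately loose illustrative subset, not Microsoft’s real list — use the PC Health App or the official CPU lists for the actual answer):

```python
import re
import winreg  # Windows only

# Read the CPU's marketing name from the registry.
key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                     r"HARDWARE\DESCRIPTION\System\CentralProcessor\0")
cpu_name, _ = winreg.QueryValueEx(key, "ProcessorNameString")
print("CPU:", cpu_name)

# Tiny illustrative subset of supported-CPU patterns -- NOT the real list.
SUPPORTED_PATTERNS = [
    r"Core\(TM\) i[3579]-(8|9|10|11)\d{3}",  # Intel 8th gen and newer (rough)
    r"Ryzen [3579] [2-5]\d{3}",              # AMD Ryzen 2000 and newer (rough)
]
if any(re.search(p, cpu_name) for p in SUPPORTED_PATTERNS):
    print("CPU appears on the (illustrative) supported list.")
else:
    print("CPU not matched -- check Microsoft's official list.")
```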

Microsoft is also requiring a front-facing camera for all Windows 11 devices except desktop PCs from January 2023 onwards.
“In order to run Windows 11, devices must meet the hardware specifications,” explains Microsoft’s official compatibility page for Windows 11.

“Devices that do not meet the hardware requirements cannot be upgraded to Windows 11.”

Source: Windows Users Surprised by Windows 11’s Short List of Supported CPUs – Slashdot

Why on earth should Microsoft require that it can look at you?!

Amazon is blocking Google’s FLoC

Most of Amazon’s properties including Amazon.com, WholeFoods.com and Zappos.com are preventing Google’s tracking system FLoC — or Federated Learning of Cohorts — from gathering valuable data reflecting the products people research in Amazon’s vast e-commerce universe, according to website code analyzed by Digiday and three technology experts who helped Digiday review the code.

Amazon declined to comment on this story.

As Google’s system gathers data about people’s web travels to inform how it categorizes them, Amazon’s under-the-radar move could not only be a significant blow to Google’s mission to guide the future of digital ad tracking after cookies die — it could give Amazon a leg up in its own efforts to sell advertising across what’s left of the open web.

[…]

Digiday watched last week as Amazon added code to its digital properties to block FLoC from tracking visitors using Google’s Chrome browser. For example, while earlier in the week WholeFoods.com and Woot.com did not include code to block FLoC, by Thursday Digiday saw that those sites did feature code telling Google’s system not to include activities of their visitors to inform cohorts or assign IDs. But Amazon’s blocking appears scattered.
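The opt-out mechanism itself is public and simple: a site can exclude visits from cohort calculation by sending the Permissions-Policy: interest-cohort=() response header. A minimal sketch of setting it on every response in a Flask app (how Amazon wired it up on its own properties isn’t public):

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def opt_out_of_floc(response):
    # Tells Chrome not to include visits to this site when computing
    # the visitor's FLoC cohort.
    response.headers["Permissions-Policy"] = "interest-cohort=()"
    return response

@app.route("/")
def index():
    return "This site opts out of FLoC."

if __name__ == "__main__":
    app.run()
```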

[..]

Source: Amazon is blocking Google’s FLoC — and that could seriously weaken the system

Apple settles with student after authorized repair workers leaked her naked pics to her Facebook page. Apple blocks Right to Repair, citing the danger posed by unauthorised parties. Hmm.

Apple has paid a multimillion-dollar settlement to an unnamed Oregon college student after one of its outsourced repair facilities posted explicit pictures and videos of her to her Facebook page.

According to legal documents obtained by The Telegraph, the incident occurred in 2016 at a Pegatron-owned repair centre in Sacramento, California. The student had mailed in her device to have an unspecified fault fixed.

While it was at the facility, two technicians published a series of photographs showing the complainant unclothed to her Facebook account, as well as a “sex video.” The complaint said the post was made in a way that impersonated the victim, and was only removed after friends informed her of its existence.

The two men responsible were fired after an investigation. It is not known if the culprits faced criminal charges.

Much of the details of the case, as well as the exact size of the settlement, were sealed. Lawyers for the plaintiff sought a $5m payout. The settlement included non-disclosure provisions that prevented the student from revealing details about the case, or the exact size of the compensation.

Counsel for the victim threatened to sue for infliction of emotional distress, as well as invasion of privacy. The filings show they warned Apple that any lawsuit would result in inevitable negative publicity for the company.

Pegatron settled with the victim separately, per the filings.

In its fight against the right to repair, Apple has argued that allowing independent third-party businesses to service its computers and smartphones would present an unacceptable risk to user privacy and security.

This incident, which occurred at the facilities of an authorised contractor, has undercut that argument somewhat.

It follows a similar incident in November 2019, where a Genius Bar employee texted himself an explicit image taken from an iPhone he was repairing. After the victim complained, the employee was fired.

[…]

Source: Apple settles with student after authorized repair workers leaked her naked pics to her Facebook page • The Register

Google, Facebook, Chaos Computer Club join forces to oppose German state spyware

Plans by the German government to allow the police to deploy malware on any target’s devices, and force the tech world to help them, has run into some opposition, funnily enough.

In an open letter this month, the Chaos Computer Club – along with Google, Facebook, and others – said they are against proposals to dramatically expand the use of so-called state trojans, aka government-made spyware, in Germany. Under planned legislation, even people not suspected of committing a crime can be infected, and service providers will be forced to help. Plus all German spy agencies will be allowed to infiltrate people’s electronics and communications.

The proposals bypass the whole issue of backdooring or weakening encryption that American politicians seem fixated on. Once you have root access on a person’s computer or handheld, the device can be an open book, encryption or not.

“The proposals are so absurd that all of the experts invited to the committee hearing in the Bundestag sharply criticized the ideas,” the CCC said.

“Even Facebook and Google – so far not positively recognized as pioneers of privacy – speak out vehemently against the project. Protect security and trust online – against an unlimited expansion of surveillance and for the protection of encryption.”

Source: Google, Facebook, Chaos Computer Club join forces to oppose German state spyware • The Register

Google reportedly made it harder to find Android privacy settings

Google’s approach to Android privacy is coming under fire following revelations from Arizona’s antitrust lawsuit over phone tracking. As Insider reports, freshly unredacted documents in the case suggest Google made Android privacy settings harder to find. When Google tested OS releases that surfaced privacy features, the company reportedly saw greater use of those features as a “problem” and aimed to put them deeper into the menu system.

The tech giant also “successfully pressured” phone brands like LG to bury location settings as they were popular, according to Arizona’s attorneys. Google personnel further acknowledged that it was difficult to stop the company from determining your home and work locations, and complained that there was “no way” to give third-party apps your location without also handing them to Google.

[…]

Source: Google reportedly made it harder to find Android privacy settings | Engadget

WhatsApp Won’t Limit Functionality if You Refuse Privacy Policy – for now. But it will pester you about it.

WhatsApp initially threatened to revoke core functions for users that refused to accept its controversial new privacy policy, only to walk back the severity of those consequences earlier this month amid international backlash, and now, it’s doing away with them altogether (for the time being, at least).

In a reversal, the company clarified on Friday that it won’t restrict any functionality even if you haven’t accepted the app’s updated privacy policy yet, TNW reports.

“Given recent discussions with various authorities and privacy experts, we want to make clear that we will not limit the functionality of how WhatsApp works for those who have not yet accepted the update,” a WhatsApp spokesperson said in a statement to the Verge. They added that this is the plan moving forward indefinitely.

In an update to the company’s FAQ page, WhatsApp clarifies that no users will have their accounts deleted or lose functionality if they don’t accept the new policies. That being said, WhatsApp will still send these users reminders to update “from time to time,” WhatsApp told the Verge. On its support page, WhatsApp claims that the majority of users who have seen the update have accepted.

Source: WhatsApp Won’t Limit Functionality if You Refuse Privacy Policy

Creepy Social Media Face Stealing firm Clearview hit with complaints in France, Austria, Italy, Greece and the UK

Data rights groups have filed complaints in the UK, France, Austria, Greece and Italy against Clearview AI, claiming its scraped and searchable database of biometric profiles breaches both the EU and UK General Data Protection Regulation (GDPR).

The facial recognition company, which is based in the US, claims to have “the largest known database of 3+ billion facial images”. Clearview AI’s facial recognition tool is trained on images harvested from YouTube, Facebook and Twitter, and attempts to match faces fed into its machine learning software with results from its multi-billion-picture database. The business then provides a link to the place where it found the “match”.
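The underlying technique — turn each face into an embedding, then look for nearest matches in a database that also stores where each photo came from — is available off the shelf. A minimal sketch using the open-source face_recognition library, purely to illustrate the general approach; Clearview’s own models, thresholds and index are not public:

```python
import face_recognition

# Toy "database": one embedding per scraped photo, stored with its source URL.
scraped = face_recognition.load_image_file("scraped_profile_photo.jpg")
db = [(face_recognition.face_encodings(scraped)[0], "https://example.com/profile/123")]

# Probe image submitted by a client.
probe = face_recognition.load_image_file("probe.jpg")
probe_encoding = face_recognition.face_encodings(probe)[0]

# Compare against every stored embedding and return the source links of matches.
for encoding, source_url in db:
    if face_recognition.compare_faces([encoding], probe_encoding, tolerance=0.6)[0]:
        print("Possible match:", source_url)
```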

Google, Twitter, Facebook and even Venmo all sent cease and desist letters to Clearview AI last year asking that it stop scraping people’s photos from their websites. The firm’s CEO defended its business model at the time by saying: “Google can pull in information from all different websites. So if it’s public and it’s out there and could be inside Google’s search engine, it can be inside ours as well.”

The US firm was sued last year by the American Civil Liberties Union. The ACLU also sued the US Department of Homeland Security and its law enforcement agencies last month for failing to respond to Freedom of Information Act requests about their use of Clearview’s tech.

[…]

Back in January this year [PDF], Chaos Computer Club member Matthias Marx managed to get Clearview to delete the hash value representing his biometric profile – although not the actual images or metadata – after filing a complaint with the Hamburg data protection authorities.

The decision by the Hamburg DPA was that Clearview AI had added his biometric profile to its searchable database without his knowledge or consent. It did not order the deletion of the photographs, however.

“It is long known that Clearview AI has not only me, but many, probably thousands of Europeans in its illegal face database. An order by the European data protection authorities to remove the faces of all Europeans is long overdue,” Marx told The Reg via email. “It is not a solution that every person has to file [their] own complaint.”

[…]

 

Source: Facial recog firm Clearview hit with complaints in France, Austria, Italy, Greece and the UK • The Register

The New Sonos One SL Reminds Us That Smart Devices Have a Shelf Life – forces you onto the spying S2 update

[…]

if you’re thinking of buying a new One SL, you ought to keep in mind that it’ll only work with the newer Sonos S2 app.

This won’t be a problem for every Sonos owner, especially if you bought all your Sonos devices in the past year or two. It might be an issue, however, if you’re still operating a mix of newer and older Sonos hardware. Namely, the “legacy” Sonos products that were “killed off” last year. Those legacy gadgets will only work with the S1 app, and although Sonos committed to providing updates for these devices, controlling a mix of legacy and current Sonos gadgets isn’t possible on the S2 app.

[…]

Source: The New Sonos One SL Reminds Us That Smart Devices Have a Shelf Life

You can’t roll back from the new update, which basically only seems to add rounded corners to backgrounds and break dark mode – except that with S2 you also allow Sonos to spy on you through the built-in microphone.