Client-Side Scanning Is An Insecure Nightmare Just Waiting To Be Exploited By Governments or Companies. Apple is basically installing spyware under a nice name.

In August, Apple declared that combating the spread of CSAM (child sexual abuse material) was more important than protecting millions of users who’ve never used their devices to store or share illegal material. While encryption would still protect users’ data and communications (in transit and at rest), Apple had given itself permission to inspect data residing on people’s devices before allowing it to be sent to others.

This is not a backdoor in a traditional sense. But it can be exploited just like an encryption backdoor if government agencies want access to devices’ contents or mandate companies like Apple do more to halt the spread of other content governments have declared troublesome or illegal.

Apple may have implemented its client-side scanning carefully after weighing the pros and cons of introducing a security flaw, but there’s simply no way to engage in this sort of scanning without creating a very large and slippery slope capable of accommodating plenty of unwanted (and unwarranted) government intercession.

Apple has put this program on hold for the time being, citing concerns raised by pretty much everyone who knows anything about client-side scanning and encryption. The conclusions that prompted Apple to step away from the precipice of this slope (at least momentarily) have been compiled in a report [PDF] on the negative side effects of client-side scanning, written by a large group of cybersecurity and encryption experts.

[…]

Only policy decisions prevent the scanning expanding from illegal abuse images to other material of interest to governments; and only the lack of a software update prevents the scanning expanding from static images to content stored in other formats, such as voice, text, or video.

And if people don’t think governments will demand more than Apple’s proactive CSAM efforts, they haven’t been paying attention. CSAM is only the beginning of the list of content governments would like to see tech companies target and control.

While the Five Eyes governments and Apple have been talking about child sex-abuse material (CSAM) —specifically images— in their push for CSS, the European Union has included terrorism and organized crime along with sex abuse. In the EU’s view, targeted content extends from still images through videos to text, as text can be used for both sexual solicitation and terrorist recruitment. We cannot talk merely of “illegal” content, because proposed UK laws would require the blocking online of speech that is legal but that some actors find upsetting.

Once capabilities are built, reasons will be found to make use of them. Once there are mechanisms to perform on-device censorship at scale, court orders may require blocking of nonconsensual intimate imagery, also known as revenge porn. Then copyright owners may bring suit to block allegedly infringing material.

That’s just the policy and law side. And that’s only a very brief overview of clearly foreseeable expansions of CSS to cover other content, which also brings with it concerns about it being used as a tool for government censorship. Apple has already made concessions to notoriously censorial governments like China’s in order to continue to sell products and services there.

[…]

CSS is at odds with the least-privilege principle. Even if it runs in middleware, its scope depends on multiple parties in the targeting chain, so it cannot be claimed to use least-privilege in terms of the scanning scope. If the CSS system is a component used by many apps, then this also violates the least-privilege principle in terms of scope. If it runs at the OS level, things are worse still, as it can completely compromise any user’s device, accessing all their data, performing live intercept, and even turning the device into a room bug.

CSS has difficulty meeting the open-design principle, particularly when the CSS is for CSAM, which has secrecy requirements for the targeted content. As a result, it is not possible to publicly establish what the system actually does, or to be sure that fixes done in response to attacks are comprehensive. Even a meaningful audit must trust that the targeted content is what it purports to be, and so cannot completely test the system and all its failure modes.

Finally, CSS breaks the psychological-acceptability principle by introducing a spy in the owner’s private digital space. A tool that they thought was theirs alone, an intimate device to guard and curate their private life, is suddenly doing surveillance on behalf of the police. At the very least, this takes the chilling effect of surveillance and brings it directly to the owner’s fingertips and very thoughts.

[…]

Despite this comprehensive report warning against the implementation of client-side scanning, there’s a chance Apple may still roll its version out. And once it does, the pressure will be on other companies to do at least as much as Apple is doing to combat CSAM.

Source: Report: Client-Side Scanning Is An Insecure Nightmare Just Waiting To Be Exploited By Governments | Techdirt

Client-side scanning is like installing listening software on a device. Once anyone has access to install whatever they like, there is nothing stopping them from listening in on everything. Despite the technically interesting name, CSAM scanning basically means the manufacturer installing spyware on your device.

Facial recognition scheme in place in some British schools – more to come

Facial recognition technology is being employed in more UK schools to allow pupils to pay for their meals, according to reports today.

In North Ayrshire Council, a Scottish authority encompassing the Isle of Arran, nine schools are set to begin processing meal payments for school lunches using facial scanning technology.

The authority and the company implementing the technology, CRB Cunninghams, claim the system will help reduce queues and is less likely to spread COVID-19 than card payments and fingerprint scanners, according to the Financial Times.

Speaking to the publication, David Swanston, the MD of supplier CRB Cunninghams, said the cameras verify the child’s identity against “encrypted faceprint templates”, which will be held on servers on-site at the 65 schools that have so far signed up.

[…]

North Ayrshire council said 97 per cent of parents had given their consent for the new system, although some said they were unsure whether their children had been given enough information to make their decision.

Seemingly unaware of the controversy surrounding facial recognition, education solutions provider CRB Cunninghams announced its introduction of the technology in schools in June as the “next step in cashless catering.”

[…]

Privacy campaigners voiced concerns that moving the technology into schools merely for payment was needlessly normalising facial recognition.

“No child should have to go through border style identity checks just to get a school meal,” Silkie Carlo of the campaign group Big Brother Watch told The Reg.

“We are supposed to live in a democracy, not a security state. This is highly sensitive, personal data that children should be taught to protect, not to give away on a whim. This biometrics company has refused to disclose who else children’s personal information could be shared with and there are some red flags here for us.”

“Facial recognition technology typically suffers from inaccuracy, particularly for females and people of colour, and we’re extremely concerned about how this invasive and discriminatory system will impact children.”

[…]

Those concerned about the security of schools systems now storing children’s biometric data will not be assured by the fact that educational establishments have become targets for cyber-attacks.

In March, the Harris Federation, a not-for-profit charity responsible for running 50 primary and secondary academies in London and Essex, became the latest UK education body to fall victim to ransomware. The institution said it was “at least” the fourth multi-academy trust targeted just that month alone. Meanwhile, South and City College Birmingham earlier this year told 13,000 students that all lectures would be delivered via the web because a ransomware attack had disabled its core IT systems.

[…]

Source: Facial recognition scheme in place in some British schools • The Register

The students probably gave their consent because if they didn’t, they wouldn’t get any lunch. The problem with biometrics is that they don’t change. So if someone steals yours, then it’s stolen forever. It’s not a password you can reset.

Why does Dutch supermarket Albert Heijn have cameras looking at you at the self-checkout?

The Party for the Animals (PvdD) wants clarity from outgoing Minister for Legal Protection Dekker about a camera on Albert Heijn’s self-scanner. It concerns the PS20 from manufacturer Zebra. According to the company, the camera on the self-scanner supports facial recognition to automatically identify customers. PvdD MPs Van Raan and Wassenberg want to know whether facial recognition is used in Albert Heijn stores in any way. The minister must also explain what legal basis Albert Heijn or other supermarket chains could rely on if they decide to use facial recognition. Finally, the PvdD MPs want to know what Minister Dekker can do to prevent supermarkets from using facial recognition now or in the future.

Source: PvdD wil opheldering over camera op zelfscanner van Albert Heijn – Emerce

Moscow metro launches facial recognition payment system despite privacy concerns

More than 240 metro stations across Moscow now allow passengers to pay for a ride by looking at a camera. The Moscow metro has launched what authorities say is the first mass-scale deployment of a facial recognition payment system. According to The Guardian, passengers can access the payment option called FacePay by linking their photo, bank card and metro card to the system via the Mosmetro app. “Now all passengers will be able to pay for travel without taking out their phone, Troika or bank card,” Moscow mayor Sergey Sobyanin tweeted.

In the official Moscow website’s announcement, the city’s Department of Transport said all Face Pay information will be encrypted. The cameras at the designated turnstiles will read a passenger’s biometric key only, and authorities said information collected for the system will be stored in data centers that can only be accessed by interior ministry staff. Moscow’s Department of Information Technology has also assured users that photographs submitted to the system won’t be handed over to the cops.

Still, privacy advocates are concerned over the growing use of facial recognition in the city. Back in 2017, officials added facial recognition tech to the city’s 170,000 security cameras as part of its efforts to ID criminals on the street. Activists filed a case against Moscow’s Department of Technology a few years later in hopes of convincing the courts to ban the use of the technology. However, a court in Moscow sided with the city, deciding that its use of facial recognition does not violate the privacy of citizens. Reuters reported earlier this year, though, that those cameras were also used to identify protesters who attended rallies.

Stanislav Shakirov, the founder of Roskomsvoboda, a group that aims to protect Russians’ digital rights, said in a statement:

“We are moving closer to authoritarian countries like China that have mastered facial technology. The Moscow metro is a government institution and all the data can end up in the hands of the security services.”

Meanwhile, the European Parliament called on lawmakers in the EU earlier this month to ban automated facial recognition in public spaces. It cited evidence that facial recognition AI can still misidentify PoCs, members of the LGBTI+ community, seniors and women at higher rates. In the US, local governments are banning the use of the technology in public spaces, including statewide bans by Massachusetts and Maine. Four Democratic lawmakers also proposed a bill to ban the federal government from using facial recognition.

Source: Moscow metro launches facial recognition payment system despite privacy concerns | Engadget

Of course one of the huge problems with biometrics is that you can’t change them. Once you are compromised, you can’t go and change the password.

Tesla’s Bringing Car Insurance to Texas W/ New ‘Safety Score’ by eating and selling your location data

After two years of offering car insurance to drivers across California, Tesla’s officially bringing a similar offering to clientele in its new home state of Texas. As Electrek first reported, the big difference between the two is how drivers’ premiums are calculated: in California, the prices were largely determined by statistical evaluations. In Texas, your insurance costs will be calculated in real-time, based on your driving behavior.

Tesla says it grades this behavior using the “Safety Score” feature—the in-house metric designed by the company in order to estimate a driver’s chance of future collision. These scores were recently rolled out in order to screen drivers who were interested in testing out Tesla’s “Full Self Driving” software, which, like the Safety Score itself, is currently in beta. And while the self-driving software release date is, um, kind of up in the air for now, Tesla drivers in the Lone Star State can use their safety score to apply for quotes on Tesla’s website as of today.

As Tesla points out in its own documents, relying on a single score makes the company a bit of an outlier in the car insurance market. Most traditional insurers round up a driver’s costs based on a number of factors that are wholly unrelated to their actual driving: depending on the state, this can include age, gender, occupation, and credit score, all playing a part in defining how much a person’s insurance might cost.

Tesla, on the other hand, relies on a single score, which the company says gets tallied up based on five different factors: the number of forward-collision warnings you get every 1,000 miles, the number of times you “hard brake,” how often you take too-fast turns, how closely you drive behind other drivers, and how often you take your hands off the wheel when Autopilot is engaged.
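As an illustrative aside (not from the article): the sketch below is a toy example of folding several per-1,000-mile driving factors into a single 0-100 score. The weights, scaling and factor names are invented for illustration; this is not Tesla's published Safety Score formula.

```python
# Toy illustration only: fold five driving factors into one 0-100 score.
# The weights and scaling below are invented; this is NOT Tesla's formula.
def toy_safety_score(fcw_per_1000mi: float,
                     hard_braking_pct: float,
                     aggressive_turning_pct: float,
                     unsafe_following_pct: float,
                     forced_autopilot_disengagements: int) -> float:
    penalty = (1.5 * fcw_per_1000mi          # forward-collision warnings per 1,000 miles
               + 2.0 * hard_braking_pct      # share of braking events that are "hard"
               + 1.0 * aggressive_turning_pct
               + 1.2 * unsafe_following_pct  # share of time following too closely
               + 5.0 * forced_autopilot_disengagements)
    return max(0.0, min(100.0, 100.0 - penalty))

print(toy_safety_score(fcw_per_1000mi=1.2, hard_braking_pct=3.0,
                       aggressive_turning_pct=2.5, unsafe_following_pct=10.0,
                       forced_autopilot_disengagements=0))  # prints roughly 77.7
```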

[…]

Source: Tesla’s Bringing Car Insurance to Texas W/ New ‘Safety Score’

The idea sounds reasonable – but giving Tesla my location data and allowing them to process and sell that doesn’t.

Researchers show Facebook’s ad tools can target a single specific user

A new research paper written by a team of academics and computer scientists from Spain and Austria has demonstrated that it’s possible to use Facebook’s targeting tools to deliver an ad exclusively to a single individual if you know enough about the interests Facebook’s platform assigns them.

The paper — entitled “Unique on Facebook: Formulation and Evidence of (Nano)targeting Individual Users with non-PII Data” — describes a “data-driven model” that defines a metric showing the probability a Facebook user can be uniquely identified based on interests attached to them by the ad platform.

The researchers demonstrate that they were able to use Facebook’s Ads manager tool to target a number of ads in such a way that each ad only reached a single, intended Facebook user.
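As an illustrative aside (not part of the quoted article): the toy simulation below shows why a combination of just a handful of interests can single out one person in a large population. The population size, interest-taxonomy size and uniform random assignment are all assumptions; the paper's actual model is more sophisticated and works on real, skewed interest data.

```python
# Illustrative simulation, not the paper's model: how often does a random
# combination of one user's ad interests match exactly one person?
import random

random.seed(0)

NUM_USERS = 50_000          # assumed population size
NUM_INTERESTS = 2_000       # assumed interest taxonomy size
INTERESTS_PER_USER = 12     # assumed interests assigned per user
COMBO_SIZE = 4              # interests an advertiser targets at once

users = [frozenset(random.sample(range(NUM_INTERESTS), INTERESTS_PER_USER))
         for _ in range(NUM_USERS)]

def combo_is_unique(target: frozenset, combo_size: int) -> bool:
    """Pick a random combo of the target's interests; does it match only them?"""
    combo = frozenset(random.sample(sorted(target), combo_size))
    return sum(1 for u in users if combo <= u) == 1

trials = 100
hits = sum(combo_is_unique(random.choice(users), COMBO_SIZE) for _ in range(trials))
print(f"{hits}/{trials} random {COMBO_SIZE}-interest combinations matched exactly one user")
```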

[…]

Source: Researchers show Facebook’s ad tools can target a single user | TechCrunch

Study reveals Android phones constantly snoop on their users

A new study by a team of university researchers in the UK has unveiled a host of privacy issues that arise from using Android smartphones.

The researchers have focused on Samsung, Xiaomi, Realme, and Huawei Android devices, and LineageOS and /e/OS, two forks of Android that aim to offer long-term support and a de-Googled experience.

The conclusion of the study is worrying for the vast majority of Android users.

With the notable exception of /e/OS, even when minimally configured and the handset is idle these vendor-customized Android variants transmit substantial amounts of information to the OS developer and also to third parties (Google, Microsoft, LinkedIn, Facebook, etc.) that have pre-installed system apps. – Researchers.

As the summary table indicates, sensitive user data like persistent identifiers, app usage details, and telemetry information are not only shared with the device vendors, but also go to various third parties, such as Microsoft, LinkedIn, and Facebook.

Summary of collected data (Source: Trinity College Dublin)

And to make matters worse, Google appears at the receiving end of all collected data almost across the entire table.

No way to “turn it off”

It is important to note that this concerns the collection of data for which there’s no option to opt out, so Android users are powerless against this type of telemetry.

This is particularly concerning when smartphone vendors include third-party apps that are silently collecting data even if they’re not used by the device owner, and which cannot be uninstalled.

For some of the built-in system apps like miui.analytics (Xiaomi), Heytap (Realme), and Hicloud (Huawei), the researchers found that the encrypted data can sometimes be decoded, putting the data at risk from man-in-the-middle (MitM) attacks.

Volume of data (KB/h) transmitted by each vendor (Source: Trinity College Dublin)

As the study points out, even if the user resets the advertising identifiers for their Google Account on Android, the data-collection system can trivially re-link the new ID back to the same device and append it to the original tracking history.

The deanonymisation of users takes place using various methods, such as looking at the SIM, IMEI, location data history, IP address, network SSID, or a combination of these.
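As an illustrative aside (not from the study): the sketch below shows why resetting an advertising ID achieves little once a collector also receives persistent hardware identifiers. The identifier values and field names are made up for illustration.

```python
# Minimal sketch of the re-linking described above: identifiers that survive an
# ad-ID reset (IMEI, SIM serial, Wi-Fi SSID, ...) give the collector a stable
# device key. Field names and values here are made up.
import hashlib

device_history = {}   # device key -> advertising IDs seen for that device

def device_key(imei: str, sim_serial: str, wifi_ssid: str) -> str:
    """Stable key derived from identifiers the user cannot easily reset."""
    return hashlib.sha256(f"{imei}|{sim_serial}|{wifi_ssid}".encode()).hexdigest()

def record_telemetry(imei: str, sim_serial: str, wifi_ssid: str, ad_id: str) -> None:
    device_history.setdefault(device_key(imei, sim_serial, wifi_ssid), []).append(ad_id)

# Same phone, before and after the user resets the advertising ID:
record_telemetry("356938035643809", "8931440300000000000", "HomeNet", "ad-id-OLD")
record_telemetry("356938035643809", "8931440300000000000", "HomeNet", "ad-id-NEW")

for key, ids in device_history.items():
    print(f"device {key[:12]}...  linked advertising IDs: {ids}")
```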

Potential cross-linking data collection points (Source: Trinity College Dublin)

Privacy-conscious Android forks like /e/OS are getting more traction as increasing numbers of users realize that they have no means to disable the unwanted functionality in vanilla Android and seek more privacy on their devices.

However, the majority of Android users remain locked into a never-ending stream of data collection, which is where regulators and consumer protection organizations need to step in and put an end to this.

Gael Duval, the creator of /e/OS, has told BleepingComputer:

Today, more people understand that the advertising model that is fueling the mobile OS business is based on the industrial capture of personal data at a scale that has never been seen in history, at the world level. This has negative impacts on many aspects of our lives, and can even threaten democracy as seen in recent cases. I think regulation is needed more than ever regarding personal data protection. It has started with the GDPR, but it’s not enough and we need to switch to a “privacy by default” model instead of “privacy as an option”.

Update – A Google spokesperson has provided BleepingComputer the following comment on the findings of the study:

While we appreciate the work of the researchers, we disagree that this behavior is unexpected – this is how modern smartphones work. As explained in our Google Play Services Help Center article, this data is essential for core device services such as push notifications and software updates across a diverse ecosystem of devices and software builds. For example, Google Play services uses data on certified Android devices to support core device features. Collection of limited basic information, such as a device’s IMEI, is necessary to deliver critical updates reliably across Android devices and apps.

Source: Study reveals Android phones constantly snoop on their users

England’s Data Guardian warns of plans to grant police access to patient data

England’s National Data Guardian has warned that government plans to allow data sharing between NHS bodies and the police could “erode trust and confidence” in doctors and other healthcare providers.

Speaking to the Independent newspaper, Dr Nicola Byrne said she had raised concerns with the government over clauses in the Police, Crime, Sentencing and Courts Bill.

The bill, set to go through the House of Lords this month, could force NHS bodies such as commissioning groups to share data with police and other specified authorities to prevent and reduce serious violence in their local areas.

Dr Byrne said the proposed law could “erode trust and confidence, and deter people from sharing information, and even from presenting for clinical care.”

Meanwhile, the bill [PDF] did not detail what information it would cover, she said. “The case isn’t made as to why that is necessary. These things need to be debated openly and in public.”

In a blog published last week, Dr Byrne said the bill imposes a duty on clinical groups in the NHS to disclose information to police without breaching any obligation of patient confidentiality.

“Whilst tackling serious violence is important, it is essential that the risks and harms that this new duty pose to patient confidentiality, and thereby public trust, are engaged with and addressed,” she said.

[…]

Source: England’s Data Guardian warns of plans to grant police access to patient data • The Register

MEPs support drastically curbing police use of facial recognition and border biometric data trawling

Police should be banned from using blanket facial-recognition surveillance to identify people not suspected of crimes. Certain private databases of people’s faces for identification systems ought to be outlawed, too.

That’s the feeling of the majority of members in the European Parliament this week. In a vote on Wednesday, 377 MEPs backed a resolution restricting law enforcement’s use of facial recognition, 248 voted against, and 62 abstained.

“AI-based identification systems already misidentify minority ethnic groups, LGBTI people, seniors and women at higher rates, which is particularly concerning in the context of law enforcement and the judiciary,” reads a statement from the parliament.

“To ensure that fundamental rights are upheld when using these technologies, algorithms should be transparent, traceable and sufficiently documented, MEPs ask. Where possible, public authorities should use open-source software in order to be more transparent.”

As well as this, most of the representatives believe facial-recognition tech should not be used by the police in automatic mass surveillance of people in public, and monitoring should be restricted to only those thought to have broken the law. Datasets amassed by private companies, such as Clearview AI, for identifying citizens should also be prohibited along with systems that allow cops to predict crime from people’s behavior and backgrounds.

[…]

The vote is non-binding, meaning it cannot directly lead to any legislative change. Instead, it was cast to reveal whether members might be supportive of upcoming bills like the AI Act, a spokesperson for the EU parliament told The Register.

“The resolution is a non-exhaustive list of AI uses that MEPs within the home affairs field find problematic. They ask for a moratorium on deploying new facial recognition systems for law enforcement, and a ban on the narrower category of private facial recognition databases,” the spokesperson added.

The resolution also called for border control systems to stop using biometric data to track travelers across the EU.

Source: MEPs support curbing police use of facial recognition • The Register

There’s a Murky Multibillion-Dollar Market for Your Phone’s Location Data

Companies that you likely have never heard of are hawking access to the location history on your mobile phone. An estimated $12 billion market, the location data industry has many players: collectors, aggregators, marketplaces, and location intelligence firms, all of which boast about the scale and precision of the data that they’ve amassed.

Location firm Near describes itself as “The World’s Largest Dataset of People’s Behavior in the Real-World,” with data representing “1.6B people across 44 countries.” Mobilewalla boasts “40+ Countries, 1.9B+ Devices, 50B Mobile Signals Daily, 5+ Years of Data.” X-Mode’s website claims its data covers “25%+ of the Adult U.S. population monthly.”

In an effort to shed light on this little-monitored industry, The Markup has identified 47 companies that harvest, sell, or trade in mobile phone location data. While hardly comprehensive, the list begins to paint a picture of the interconnected players that do everything from providing code to app developers to monetize user data to offering analytics from “1.9 billion devices” and access to datasets on hundreds of millions of people. Six companies claimed more than a billion devices in their data, and at least four claimed their data was the “most accurate” in the industry.

The Location Data Industry: Collectors, Buyers, Sellers, and Aggregators

The Markup identified 47 players in the location data industry

1010Data, Acxiom, AdSquare, ADVAN, Airsage, Amass Insights, Alqami, Amazon AWS Data Exchange, Anomaly 6, Babel Street, Blis, Complementics, Cuebiq, Datarade, Foursquare, Gimbal, Gravy Analytics, GroundTruth, Huq Industries, InMarket / NinthDecimal, Irys, Kochava Collective, Lifesight, Mobilewalla (“40+ Countries, 1.9B+ Devices, 50B Mobile Signals Daily, 5+ Years of Data”), Narrative, Near (“The World’s Largest Dataset of People’s Behavior in the Real-World”), Onemata, Oracle, Phunware, PlaceIQ, Placer.ai, Predicio, Predik Data-Driven, Quadrant, QueXopa, Reveal Mobile, SafeGraph, Snowflake, start.io, Stirista, Tamoco, THASOS, Unacast, Venntel, Venpath, Veraset, X-Mode (Outlogic).

Created by Joel Eastwood and Gabe Hongsdusit. Source: The Markup. (See our data, including extended company responses, here.)

“There isn’t a lot of transparency and there is a really, really complex shadowy web of interactions between these companies that’s hard to untangle,” Justin Sherman, a cyber policy fellow at the Duke Tech Policy Lab, said. “They operate on the fact that the general public and people in Washington and other regulatory centers aren’t paying attention to what they’re doing.”

Occasionally, stories illuminate just how invasive this industry can be. In 2020, Motherboard reported that X-Mode, a company that collects location data through apps, was collecting data from Muslim prayer apps and selling it to military contractors. The Wall Street Journal also reported in 2020 that Venntel, a location data provider, was selling location data to federal agencies for immigration enforcement.

A Catholic news outlet also used location data from a data vendor to out a priest who had frequented gay bars, though it’s still unknown what company sold that information.

Many firms promise that privacy is at the center of their businesses and that they’re careful to never sell information that can be traced back to a person. But researchers studying anonymized location data have shown just how misleading that claim can be.

[…]

Most times, the location data pipeline starts off in your hands, when an app sends a notification asking for permission to access your location data.

Apps have all kinds of reasons for using your location. Map apps need to know where you are in order to give you directions to where you’re going. A weather, waves, or wind app checks your location to give you relevant meteorological information. A video streaming app checks where you are to ensure you’re in a country where it’s licensed to stream certain shows.

But unbeknownst to most users, some of those apps sell or share location data about their users with companies that analyze the data and sell their insights, like Advan Research. Other companies, like Adsquare, buy or obtain location data from apps for the purpose of aggregating it with other data sources.
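As an illustrative aside (not from the article): roughly what it looks like when a bundled analytics SDK forwards a location fix from inside an app. The endpoint URL and payload fields are hypothetical; real SDKs vary and usually batch their events.

```python
# Hypothetical sketch of an analytics SDK forwarding a location fix. The endpoint
# URL and payload fields are made up; real SDKs differ and batch their events.
import json
import urllib.request

def send_location_event(lat: float, lon: float, ad_id: str) -> None:
    payload = {
        "ad_id": ad_id,              # resettable advertising identifier
        "lat": lat,
        "lon": lon,
        "accuracy_m": 12,
        "app": "example.weather.app",
    }
    req = urllib.request.Request(
        "https://collector.example-broker.test/v1/events",   # hypothetical endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)

# Example (commented out because the endpoint is fictional):
# send_location_event(52.3702, 4.8952, "38400000-8cf0-11bd-b23e-10b96e40000d")
```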

[…]

Companies like Adsquare and Cuebiq told The Markup that they don’t publicly disclose what apps they get location data from to keep a competitive advantage but maintained that their process of obtaining location data was transparent and with clear consent from app users.

[…]

Yiannis Tsiounis, the CEO of the location analytics firm Advan Research, said his company buys from location data aggregators, who collect the data from thousands of apps—but would not say which ones.

[…]

Into the Location Data Marketplace 

Once a person’s location data has been collected from an app and it has entered the location data marketplace, it can be sold over and over again, from the data providers to an aggregator that resells data from multiple sources. It could end up in the hands of a “location intelligence” firm that uses the raw data to analyze foot traffic for retail shopping areas and the demographics associated with its visitors. Or with a hedge fund that wants insights on how many people are going to a certain store.

“There are the data aggregators that collect the data from multiple applications and sell in bulk. And then there are analytics companies which buy data either from aggregators or from applications and perform the analytics,” said Tsiounis of Advan Research. “And everybody sells to everybody else.”

Some data marketplaces are part of well-known companies, like Amazon’s AWS Data Exchange, or Oracle’s Data Marketplace, which sell all types of data, not just location data.

[…]

Other companies, like Narrative, say they are simply connecting data buyers and sellers by providing a platform. Narrative’s website, for instance, lists location data providers like SafeGraph and Complementics among its 17 providers with more than two billion mobile advertising IDs to buy from.

[…]

To give a sense of how massive the industry is, Amass Insights has 320 location data providers listed on its directory, Jordan Hauer, the company’s CEO, said. While the company doesn’t directly collect or sell any of the data, hedge funds will pay it to guide them through the myriad of location data companies, he said.

[…]

Oh, the Places Your Data Will Go

There are a whole slew of potential buyers for location data: investors looking for intel on market trends or what their competitors are up to, political campaigns, stores keeping tabs on customers, and law enforcement agencies, among others.

Data from location intelligence firm Thasos Group has been used to measure the number of workers pulling extra shifts at Tesla plants. Political campaigns on both sides of the aisle have also used location data from people who were at rallies for targeted advertising.

Fast food restaurants and other businesses have been known to buy location data for advertising purposes down to a person’s steps. For example, in 2018, Burger King ran a promotion in which, if a customer’s phone was within 600 feet of a McDonalds, the Burger King app would let the user buy a Whopper for one cent.

The Wall Street Journal and Motherboard have also written extensively about how federal agencies including the Internal Revenue Service, Customs and Border Protection, and the U.S. military bought location data from companies tracking phones.

[…]

Outlogic (formerly known as X-Mode) offers a license for a location dataset titled “Cyber Security Location data” on Datarade for $240,000 per year. The listing says “Outlogic’s accurate and granular location data is collected directly from a mobile device’s GPS.”

At the moment, there are few if any rules limiting who can buy your data.

Sherman, of the Duke Tech Policy Lab, published a report in August finding that data brokers were advertising location information on people based on their political beliefs, as well as data on U.S. government employees and military personnel.

“There is virtually nothing in U.S. law preventing an American company from selling data on two million service members, let’s say, to some Russian company that’s just a front for the Russian government,” Sherman said.

Existing privacy laws in the U.S., like California’s Consumer Privacy Act, do not limit who can purchase data, though California residents can request that their data not be “sold”—which can be a tricky definition. Instead, the law focuses on allowing people to opt out of sharing their location in the first place.

[…]

“We know in practice that consumers don’t take action,” he said. “It’s incredibly taxing to opt out of hundreds of data brokers you’ve never even heard of.”

[…]

 

Source: There’s a Multibillion-Dollar Market for Your Phone’s Location Data – The Markup

Apple’s App Tracking Transparency Feature Doesn’t Stop Tracking

In 2014, some very pervy creeps stole some very personal iCloud photos from some very high-profile celebs and put them on the open web, creating one very specific PR crisis for Apple’s CEO, Tim Cook. The company was about to roll out Apple Pay as part of its latest software update, a process that took more than a decade bringing high-profile payment processors and retailers on board. The only issue was that nobody seemed to want their credit card details in the hands of the same company whose service had been used to steal dozens of nude photos of Jennifer Lawrence just a week earlier.

Apple desperately needed a rebrand, and that’s exactly what we got. Within days, the company rolled out a polished promotional campaign—complete with a brand new website and an open letter from Cook himself—explaining the company’s beefed-up privacy prowess, and the safeguards adopted in the wake of that leak. Apple wasn’t only a company you could trust, Cook said, it was arguably the company—unlike the other guys (*cough* Facebook *cough*) who built their Silicon Valley empires off of pawning your data to marketing companies, Apple’s business model is built off of “selling great products,” no data-mining needed.

That ad campaign’s been playing out for the last seven years, and by all accounts, it’s worked. It’s worked well enough that in 2021, we trust Apple with our credit card info, our personal health information, and most of what’s inside our homes. And when Tim Cook decried things like the “data-industrial complex” in interviews earlier this year and then rolled out a slew of iOS updates meant to give users the power they deserved, we updated our iPhones and felt a tiny bit safer.

The App Tracking Transparency (ATT) settings that came bundled in an iOS 14 update gave iPhone users everywhere the power to tell their favorite apps (and Facebook) to knock off the whole tracking thing. Saying no, Apple promised, would stop these apps from tracking you as you browse the web, and through other apps on your phone. Well, it turns out that wasn’t quite the case. The Washington Post was first to report on a research study that put Apple’s ATT feature to the test, and found the setting… pretty much useless. As the researchers put it:

In our tests of ten top-ranked apps, we found no meaningful difference in third-party tracking activity when choosing App Tracking Transparency’s “Ask App Not To Track.” The number of active third-party trackers was identical regardless of a user’s ATT choice, and the number of tracking attempts was only slightly (~13%) lower when the user chose “Ask App Not To Track”.

So, what the hell happened? In short, ATT addresses one specific (and powerful) piece of digital data that advertisers use to identify your specific device—and your specific identity—across multiple sites and services: the so-called ID for Advertisers, or IDFA. Telling an app not to track severs their access to this identifier, which is why companies like Facebook lost their minds over these changes. Without the IDFA, Facebook had no way to know whether, say, an Instagram ad translated into a sale on some third-party platform, or whether you downloaded an app because of an ad you saw in your news feed.

Luckily for said companies (but unluckily for us), tracking doesn’t start and end with the IDFA. Fingerprinting—or cobbling together a bunch of disparate bits of mobile data to uniquely identify your device—has come up as a pretty popular alternative for some major digital ad companies, which eventually led Apple to tell them to knock that shit off. But because “fingerprinting” encompasses so many different kinds of data in so many different contexts (and can go by many different names), nobody knocked anything off. And outside of one or two banned apps, Apple really didn’t seem to care.
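As an illustrative aside (not from the article): the sketch below shows the basic mechanics of fingerprinting. The attribute list is an assumption chosen for illustration; real trackers combine many more (and subtler) signals, which is precisely why the practice is hard to police.

```python
# Illustrative sketch of fingerprinting: derive a stable pseudo-identifier from
# attributes an app can read even when the IDFA is denied. The attribute list is
# hypothetical; real trackers combine far more signals.
import hashlib
import json

def fingerprint(device_attrs: dict) -> str:
    """Hash a canonical view of device attributes into a pseudo-identifier."""
    canonical = json.dumps(device_attrs, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:16]

attrs = {
    "model": "iPhone13,2",
    "os_version": "15.0.1",
    "locale": "en_US",
    "timezone": "America/Los_Angeles",
    "screen": "1170x2532@3x",
    "free_disk_gb": 41,    # quasi-stable values like this sharpen the fingerprint
}

print("fingerprint:", fingerprint(attrs))   # same attributes -> same ID, no IDFA needed
```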

[…]

Some Apple critics in the marketing world have been raising red flags for months about potential antitrust issues with Apple’s ATT rollout, and it’s not hard to see why. It gave Apple exclusive access to a particularly powerful piece of intel on all of its customers, the IDFA, while leaving competing tech firms scrambling for whatever scraps of data they can find. If all of those scraps become Apple’s sole property, too, that’s practically begging for even more antitrust scrutiny to be thrown its way. What Apple seems to be doing here is what any of us would likely do in its situation: picking its battles.

Source: Apple’s App Tracking Transparency Feature Doesn’t Stop Tracking

WhatsApp fined over $260M for EU privacy violations, failing to explain how data is shared with Facebook

WhatsApp didn’t fully explain to Europeans how it uses their data as called for by EU privacy law, Ireland’s Data Protection Commission said on Thursday. The regulator hit the messaging app with a fine of 225 million euros, about $267 million.

Partly at issue is how WhatsApp shares information with parent company Facebook, according to the commission. The decision brings an end to a GDPR inquiry the privacy regulator started in December 2018.

[…]

Source: WhatsApp fined over $260M for EU privacy violations – CNET

Sky Broadband sends subscribers’ browsing data to the Premier League without user knowledge or consent

UK ISP Sky Broadband is monitoring the IP addresses of servers suspected of streaming pirated content to subscribers and supplying that data to an anti-piracy company working with the Premier League. That inside knowledge is then processed and used to create blocklists used by the country’s leading ISPs, to prevent subscribers from watching pirated events.

[…]

In recent weeks, an anonymous source shared a small trove of information relating to the systems used to find, positively identify, and then ultimately block pirate streams at ISPs. According to the documents, the module related to the Premier League work is codenamed ‘RedBeard’.

The activity appears to start during the week football matches or PPV events take place. A set of scripts at anti-piracy company Friend MTS are tasked with producing lists of IP addresses that are suspected of being connected to copyright infringement. These addresses are subsequently dumped to Amazon S3 buckets and the data is used by ISPs to block access to infringing video streams, the documents indicate.

During actual event scanning, content is either manually or fingerprint matched, with IP addresses extracted from DNS information related to hostnames in media URLs, load balancers, and servers hosting Electronic Program Guides (EPG), all of which are used by unlicensed IPTV services.
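As an illustrative aside (not from the leaked documents): the sketch below shows the kind of hostname-to-IP extraction described above, using only Python's standard library. The media URLs and hostnames are made up.

```python
# Hypothetical sketch of extracting blockable IP addresses from hostnames found
# in media URLs. The URLs and hostnames below are invented.
import socket
from urllib.parse import urlparse

media_urls = [
    "http://example-iptv-host.test/live/stream123.m3u8",   # hypothetical stream URL
    "http://epg.example-service.test/guide.xml",            # hypothetical EPG server
]

blocklist = set()
for url in media_urls:
    host = urlparse(url).hostname
    try:
        # Collect every IPv4 address the hostname currently resolves to.
        for info in socket.getaddrinfo(host, None, family=socket.AF_INET):
            blocklist.add(info[4][0])
    except socket.gaierror:
        pass   # hostname did not resolve

print(sorted(blocklist))   # in the real pipeline these IPs would be pushed to ISPs
```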

Confirmed: Sky is Supplying Traffic Data to Assist IPTV Blocking

The big question then is how the Premier League’s anti-piracy partner discovers the initial server IP addresses that it subsequently puts forward for ISP blocking.

According to documents reviewed by TF, information comes from three sources – the anti-piracy company’s regular monitoring (which identifies IP addresses and their /24 range), manually entered IP addresses (IP addresses and ports), and a third, potentially more intriguing source – ISPs themselves.

“ISPs provide lists of Top Talker IP addresses, these are the IP addresses that they see on their network which many consumers are receiving a large sum of bandwidth from,” one of the documents reveals.

“The IP addresses are the uploading IP address which host information which the ISP’s customers are downloading information from. They are not the IP addresses of the ISP’s customer’s home internet connections.”

The document revealing this information is not dated but other documents in the batch reference dates in 2021. As of publication, the document indicates that ISP cooperation is limited to Sky Broadband only. TorrentFreak asked Friend MTS if that remains the case or whether additional ISPs are now involved.

[…]

Source: Sky Subscribers’ Piracy Habits Directly Help Premier League Block Illegal Streams * TorrentFreak

Apple stalls CSAM auto-scan on devices after ‘feedback’ from everyone on Earth – will still scan all your pics at some point

Apple on Friday said it intends to delay the introduction of its plan to commandeer customers’ own devices to scan their iCloud-bound photos for illegal child exploitation imagery, a concession to the broad backlash that followed from the initiative.

“Previously we announced plans for features intended to help protect children from predators who use communication tools to recruit and exploit them and to help limit the spread of Child Sexual Abuse Material,” the company said in a statement posted to its child safety webpage.

“Based on feedback from customers, advocacy groups, researchers and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”

[…]

Apple – rather than actually engaging with the security community and the public – published a list of Frequently Asked Questions and responses to address the concern that censorious governments will demand access to the CSAM scanning system to look for politically objectionable images.

“Could governments force Apple to add non-CSAM images to the hash list?” the company asked in its interview of itself, and then responded, “No. Apple would refuse such demands and our system has been designed to prevent that from happening.”

Apple however has not refused government demands in China with regard to VPNs or censorship. Nor has it refused government demands in Russia, with regard to its 2019 law requiring pre-installed Russian apps.

Tech companies uniformly say they comply with all local laws. So if China, Russia, or the US were to pass a law requiring on-device scanning to be adapted to address “national security concerns” or some other plausible cause, Apple’s choice would be to comply or face the consequences – it would no longer be able to say, “We can’t do on-device scanning.”

Source: Apple stalls CSAM auto-scan on devices after ‘feedback’ from everyone on Earth • The Register

Facebook used facial recognition without consent 200,000 times, says South Korea’s data watchdog. Netflix fined too and Google scolded.

Facebook, Netflix and Google have all received reprimands or fines, and orders to take corrective action, from South Korea’s government data protection watchdog, the Personal Information Protection Commission (PIPC).

The PIPC announced a privacy audit last year and has revealed that three companies – Facebook, Netflix and Google – had violated privacy laws and had insufficient privacy protections.

Facebook alone was ordered to pay 6.46 billion won (US$5.5M) for creating and storing facial recognition templates of 200,000 local users without proper consent between April 2018 and September 2019.

Another 26 million won (US$22,000) penalty was issued for illegally collecting social security numbers, not issuing notifications regarding personal information management changes, and other missteps.

Facebook has been ordered to destroy facial information collected without consent or obtain consent, and was prohibited from processing identity numbers without legal basis. It was also ordered to destroy collected data and disclose contents related to foreign migration of personal information. Zuck’s brainchild was then told to make it easier for users to check legal notices regarding personal information.

[…]

Netflix’s fine was a paltry 220 million won (US$188,000), with that sum imposed for collecting data from five million people without their consent, plus another 3.2 million won (US$2,700) for not disclosing international transfer of the data.

Google got off the easiest, with just a “recommendation” to improve its personal data handling processes and make legal notices more precise.

The PIPC said it is not done investigating the methods overseas businesses use to collect personal information and will continue with a legal review.

[…]

Source: Facebook used facial recognition without consent 200,000 times, says South Korea’s data watchdog • The Register

China puts continuous consent at the center of data protection law

[…] The new “Personal Information Protection Law of the People’s Republic of China” comes into effect on November 1st, 2021, and comprises eight chapters and 74 articles.

[…]

The Cyberspace Administration of China (CAC) said, as translated from Mandarin using automated tools:

On the basis of relevant laws, the law further refines and perfects the principles and personal information processing rules to be followed in the protection of personal information, clarifies the boundaries of rights and obligations in personal information processing activities, and improves the work systems and mechanisms for personal information protection.

The document outlines standardized data-handling processes, defines rules on big data and large-scale operations, regulates those processing data, addresses data that flows across borders, and outlines legal enforcement of its provisions. It also clarifies that state agencies are not immune from these measures.

The CAC asserts that consenting to collection of data is at the core of China’s laws and the new legislation requires continual up-to-date fully informed advance consent of the individual. Parties gathering data cannot require excessive information nor refuse products or services if the individual disapproves. The individual whose data is collected can withdraw consent, and death doesn’t end the information collector’s responsibilities or the individual’s rights – it only passes down the right to control the data to the deceased subject’s family.

Information processors must also take “necessary measures to ensure the security of the personal information processed” and are required to set up compliance management systems and internal audits.

To collect sensitive data, like biometrics, religious beliefs, and medical, health and financial accounts, information needs to be necessary, for a specific purpose and protected. Prior to collection, there must be an impact assessment, and the individual should be informed of the collected data’s necessity and impact on personal rights.

Interestingly, the law seeks to prevent companies from using big data to prey on consumers – for example setting transaction prices – or mislead or defraud consumers based on individual characteristics or habits. Furthermore, large-scale network platforms must establish compliance systems, publicly self-report their efforts, and outsource data-protective measures.

And if data flows across borders, the data collectors must establish a specialized agency in China or appoint a representative to be responsible. Organizations are required to offer clarity on how data is protected and its security assessed.

Storing data overseas does not exempt a person or company from compliance to any of the Personal Information Protection Laws.

In the end, supervision and law enforcement falls to the Cyberspace Administration and relevant departments of the State Council.

[…]

Source: China puts continuous consent at the center of data protection law • The Register

It looks like China has had a good look at the EU Cybersecurity Act and improved on it. All this looks very good, and of course it’s even better that Chinese governmental agencies are also mandated to follow it, but is it true? With all the governmental AI systems, cameras and facial recognition systems tracking ethnic minorities (such as the Uyghurs) and assigning good-behaviour scores, how will these be affected? Somehow I doubt they will dismantle the pervasive surveillance apparatus they have. So even if the laws sound excellent, the proof is in the pudding.

Sensitive Data On Afghan Allies Collected By The US Military Is Now In The Hands Of The Taliban

The problem with harvesting reams of sensitive data is that it presents a very tempting target for malicious hackers, enemy governments, and other wrongdoers. That hasn’t prevented anyone from collecting and storing all of this data, secure only in the knowledge that this security will ultimately be breached.

[…]

The Taliban is getting everything we left behind. It’s not just guns, gear, and aircraft. It’s the massive biometric collections we amassed while serving as armed ambassadors of goodwill. The databases the US government compiled to track its allies are now handy repositories that will allow the Taliban to hunt down its enemies. Ken Klippenstein and Sara Sirota have more details for The Intercept.

The devices, known as HIIDE, for Handheld Interagency Identity Detection Equipment, were seized last week during the Taliban’s offensive, according to a Joint Special Operations Command official and three former U.S. military personnel, all of whom worried that sensitive data they contain could be used by the Taliban. HIIDE devices contain identifying biometric data such as iris scans and fingerprints, as well as biographical information, and are used to access large centralized databases. It’s unclear how much of the U.S. military’s biometric database on the Afghan population has been compromised.

At first, it might seem that this will only allow the Taliban to high-five each other for making the US government’s shit list. But it wasn’t just used to track terrorists. It was used to track allies.

While billed by the U.S. military as a means of tracking terrorists and other insurgents, biometric data on Afghans who assisted the U.S. was also widely collected and used in identification cards, sources said.

[…]

Source: Sensitive Data On Afghan Allies Collected By The US Military Is Now In The Hands Of The Taliban | Techdirt

Apple’s Not Digging Itself Out of This One: scanning your pictures is dangerous and flawed

Online researchers say they have found flaws in Apple’s new child abuse detection tool that could allow bad actors to target iOS users. However, Apple has denied these claims, arguing that it has intentionally built safeguards against such exploitation.

It’s just the latest bump in the road for the rollout of the company’s new features, which have been roundly criticized by privacy and civil liberties advocates since they were initially announced two weeks ago. Many critics view the updates—which are built to scour iPhones and other iOS products for signs of child sexual abuse material (CSAM)—as a slippery slope towards broader surveillance.

The most recent criticism centers around allegations that Apple’s “NeuralHash” technology—which scans for the bad images—can be exploited and tricked to potentially target users. This started because online researchers dug up and subsequently shared code for NeuralHash as a way to better understand it. One Github user, AsuharietYgvar, claims to have reverse-engineered the scanning tech’s algorithm and published the code to his page. Ygvar wrote in a Reddit post that the algorithm was basically available in iOS 14.3 as obfuscated code and that he had taken the code and rebuilt it in a Python script to assemble a clearer picture of how it worked.

Problematically, within a couple of hours, another researcher said they were able to use the posted code to trick the system into misidentifying an image, creating what is called a “hash collision.”

[…]

However, “hash collisions” involve a situation in which two totally different images produce the same “hash” or signature. In the context of Apple’s new tools, this has the potential to create a false-positive, potentially implicating an innocent person for having child porn, critics claim. The false-positive could be accidental or intentionally triggered by a malicious actor.
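As an illustrative aside (not from the article): Apple's NeuralHash is a neural-network-based perceptual hash, but the idea of a perceptual hash, and of a collision between visually different images, can be shown with a much cruder "average hash". The sketch below assumes Pillow is installed and uses two synthetic images; crafting collisions against NeuralHash is harder, but, as the researchers showed, feasible.

```python
# Illustration only: a crude "average hash" perceptual hash. NeuralHash is a
# neural-network hash, but the collision concept is the same: two images that
# look different can still produce (near-)identical hashes. Requires Pillow.
from PIL import Image

def average_hash(img: Image.Image, size: int = 8) -> int:
    """Downscale to size x size grayscale; each bit = pixel brighter than the mean."""
    pixels = list(img.convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    return sum((1 if p > mean else 0) << i for i, p in enumerate(pixels))

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Image A: left half solid black, right half white.
img_a = Image.new("L", (256, 256), 255)
img_a.paste(0, (0, 0, 128, 256))

# Image B: left half a fine checkerboard (reads as grey to the eye), right half white.
img_b = Image.new("L", (256, 256), 255)
for x in range(128):
    for y in range(256):
        if (x + y) % 2 == 0:
            img_b.putpixel((x, y), 0)

ha, hb = average_hash(img_a), average_hash(img_b)
# Both halves land on the same side of each image's own mean, so the hashes
# should collide (distance 0 or very close) even though the images differ.
print(f"hash A: {ha:016x}\nhash B: {hb:016x}\ndistance: {hamming(ha, hb)} bits")
```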

[…]

Most alarmingly, researchers noted that it could be easily co-opted by a government or other powerful entity, which might repurpose its surveillance tech to look for other kinds of content. “Our system could easily be repurposed for surveillance and censorship,” write Mayer and his research partner, Anunay Kulshrestha, in an op-ed in the Washington Post. “The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.”

The researchers were “so disturbed” by their findings that they subsequently declared the system dangerous, and warned that it shouldn’t be adopted by a company or organization until more research could be done to curtail the potential dangers it presented. However, not long afterward, Apple announced its plans to roll out a nearly identical system to over 1.5 billion devices in an effort to scan for CSAM. The op-ed ultimately notes that Apple is “gambling with security, privacy and free speech worldwide” by implementing a similar system in such a hasty, slapdash way.

[…]

Apple’s decision to launch such an invasive technology so swiftly and unthinkingly is a major liability for consumers. The fact that Apple says it has built safety nets around this feature is not comforting at all, he added.

“You can always build safety nets underneath a broken system,” said Green, noting that it doesn’t ultimately fix the problem. “I have a lot of issues with this [new system]. I don’t think it’s something that we should be jumping into—this idea that local files on your device will be scanned.” Green further affirmed the idea that Apple had rushed this experimental system into production, comparing it to an untested airplane whose engines are held together via duct tape. “It’s like Apple has decided we’re all going to go on this airplane and we’re going to fly. Don’t worry [they say], the airplane has parachutes,” he said.

[…]

Source: Apple’s Not Digging Itself Out of This One

Your Credit Score Should Be Based On Your Web History, IMF Says

In a new blog post for the International Monetary Fund, four researchers presented their findings from a working paper that examines the current relationship between finance and tech as well as its potential future. Gazing into their crystal ball, the researchers see the possibility of using the data from your browsing, search, and purchase history to create a more accurate mechanism for determining the credit rating of an individual or business. They believe that this approach could result in greater lending to borrowers who would potentially be denied by traditional financial institutions. At its heart, the paper is trying to wrestle with the dawning notion that the institutional banking system is facing a serious threat from tech companies like Google, Facebook, and Apple. The researchers identify two key areas in which this is true: Tech companies have greater access to soft-information, and messaging platforms can take the place of the physical locations that banks rely on for meeting with customers.

The concept of using your web history to inform credit ratings is framed around the notion that lenders rely on hard-data that might obscure the worthiness of a borrower or paint an unnecessarily dire picture during hard times. Citing soft-data points like “the type of browser and hardware used to access the internet, the history of online searches and purchases” that could be incorporated into evaluating a borrower, the researchers believe that when a lender has a more intimate relationship with the potential client’s history, they might be more willing to cut them some slack. […] But how would all this data be incorporated into credit ratings? Machine learning, of course. It’s black boxes all the way down. The researchers acknowledge that there will be privacy and policy concerns related to incorporating this kind of soft-data into credit analysis. And they do little to explain how this might work in practice.
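As an illustrative aside (not from the IMF paper, which does not specify a model): the sketch below shows what "machine learning on soft data" might look like in its most simplistic form. The feature names, training data and choice of logistic regression are all invented for illustration; a real system would use far more data and a far more opaque model.

```python
# Deliberately simplistic sketch: a logistic regression over invented "soft data"
# features. Feature names, data and model choice are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [payday-loan searches, late-night browsing share, device age (years), prior defaults]
X = np.array([
    [0, 0.10, 1.0, 0],
    [5, 0.60, 4.0, 1],
    [1, 0.20, 2.0, 0],
    [7, 0.70, 5.0, 1],
    [0, 0.15, 0.5, 0],
    [4, 0.55, 3.5, 1],
])
y = np.array([0, 1, 0, 1, 0, 1])   # 1 = borrower later defaulted

model = LogisticRegression().fit(X, y)

applicant = np.array([[2, 0.30, 2.5, 0]])
print("estimated default probability:", round(model.predict_proba(applicant)[0, 1], 3))
```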

Source: Your Credit Score Should Be Based On Your Web History, IMF Says – Slashdot

So now the banks want your browsing history. They don’t want to miss out on the surveillance economy.

How to Stop Apple From Scanning Your iPhone Photos Before iOS 15 Arrives – disable photo backups. No alternative offered, sorry.

Photos that are sent in messaging apps like WhatsApp or Telegram aren’t scanned by Apple. Still, if you don’t want Apple to do this scanning at all, your only option is to disable iCloud Photos. To do that, open the “Settings” app on your iPhone or iPad, go to the “Photos” section, and disable the “iCloud Photos” feature. From the popup, choose the “Download Photos & Videos” option to download the photos from your iCloud Photos library.


You can also use the iCloud website to download all photos to your computer. Your iPhone will now stop uploading new photos to iCloud, and Apple won’t scan any of your photos now.

Looking for an alternative? There really isn’t one. All major cloud-backup providers have the same scanning feature; they just do it completely in the cloud (while Apple uses a mix of on-device and cloud scanning). If you don’t want this kind of photo scanning, use local backups, NAS, or a backup service that is completely end-to-end encrypted.

Source: How to Stop Apple From Scanning Your iPhone Photos Before iOS 15 Arrives

Zoom to pay $85M for lying about encryption and sending data to Facebook and Google

Zoom has agreed to pay $85 million to settle claims that it lied about offering end-to-end encryption and gave user data to Facebook and Google without the consent of users. The settlement between Zoom and the filers of a class-action lawsuit also covers security problems that led to rampant “Zoombombings.”

The proposed settlement would generally give Zoom users $15 or $25 each and was filed Saturday at US District Court for the Northern District of California. It came nine months after Zoom agreed to security improvements and a “prohibition on privacy and security misrepresentations” in a settlement with the Federal Trade Commission, but the FTC settlement didn’t include compensation for users.

As we wrote in November, the FTC said that Zoom claimed it offers end-to-end encryption in its June 2016 and July 2017 HIPAA compliance guides, in a January 2019 white paper, in an April 2017 blog post, and in direct responses to inquiries from customers and potential customers. In reality, “Zoom did not provide end-to-end encryption for any Zoom Meeting that was conducted outside of Zoom’s ‘Connecter’ product (which are hosted on a customer’s own servers), because Zoom’s servers—including some located in China—maintain the cryptographic keys that would allow Zoom to access the content of its customers’ Zoom Meetings,” the FTC said. In real end-to-end encryption, only the users themselves have access to the keys needed to decrypt content.
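
The distinction is easy to make concrete. In genuine end-to-end encryption the relaying server only ever handles ciphertext, because the private keys never leave the endpoints. A minimal sketch using the PyNaCl library (illustrative only, obviously not Zoom's actual protocol):

    # End-to-end encryption in miniature (PyNaCl / libsodium).
    from nacl.public import PrivateKey, Box

    alice_sk = PrivateKey.generate()
    bob_sk = PrivateKey.generate()

    # Public keys can travel through the server; private keys never do.
    ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"meeting audio frame")

    # The server can store or forward the ciphertext but cannot read it.
    # Only Bob, holding his own private key, can:
    assert Box(bob_sk, alice_sk.public_key).decrypt(ciphertext) == b"meeting audio frame"

Zoom's setup failed that test by definition: its own servers held the meeting keys, so "encrypted in transit" was the honest description, not "end-to-end".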

[…]

Source: Zoom to pay $85M for lying about encryption and sending data to Facebook and Google | Ars Technica

Stop using Zoom, Hamburg’s DPA warns state government – The US does not safeguard EU citizen data

Hamburg’s state government has been formally warned against using Zoom over data protection concerns.

The German state’s data protection agency (DPA) took the step of issuing a public warning yesterday, writing in a press release that the Senate Chancellery’s use of the popular videoconferencing tool violates the European Union’s General Data Protection Regulation (GDPR) since user data is transferred to the U.S. for processing.

The DPA’s concern follows a landmark ruling (Schrems II) by Europe’s top court last summer which invalidated a flagship data transfer arrangement between the EU and the U.S. (Privacy Shield), finding U.S. surveillance law to be incompatible with EU privacy rights.

The fallout from Schrems II has been slow to manifest — beyond an instant blanket of legal uncertainty. However, a number of European DPAs are now investigating the use of U.S.-based digital services because of the data transfer issue, in some instances publicly warning against the use of mainstream U.S. tools like Facebook and Zoom because user data cannot be adequately safeguarded when it’s taken over the pond.

German agencies are among the most proactive in this respect. But the EU’s data protection supervisor is also investigating the bloc’s use of cloud services from U.S. giants Amazon and Microsoft over the same data transfer concern.

[…]

The agency asserts that use of Zoom by the public body does not comply with the GDPR’s requirement for a valid legal basis for processing personal data, writing: “The documents submitted by the Senate Chancellery on the use of Zoom show that [GDPR] standards are not being adhered to.”

The DPA initiated a formal procedure earlier, via a hearing, on June 17, 2021, but says the Senate Chancellery failed to stop using the videoconferencing tool. Nor did it provide any additional documents or arguments demonstrating compliant usage. Hence the DPA took the step of issuing a formal warning, under Article 58(2)(a) of the GDPR.

[…]

Source: Stop using Zoom, Hamburg’s DPA warns state government | TechCrunch

How to Limit Spotify From Tracking You, Because It Knows Too Much – and sells it

Most Spotify users are likely aware the streaming service tracks their listening activity, search history, playlists, and the songs they like or skip—that’s all part of helping the algorithm figure out what you like, right? However, some users may be less OK with how much other data Spotify and its partners are logging.

According to Spotify’s privacy policy, the company tracks:

  • Your name
  • Email address
  • Phone number
  • Date of birth
  • Gender
  • Street address, country, and other GPS location data
  • Login info
  • Billing info
  • Website cookies
  • IP address
  • Facebook user ID, login information, likes, and other data.
  • Device information like accelerometer or gyroscope data, operating system, model, browser, and even some data from other devices on your wifi network.

This information helps Spotify tailor song and artist recommendations to your tastes and is used to improve the in-app user experience, sure. However, the company also uses it to attract advertising partners, who can create personalized ads based on your information. And that doesn’t even touch on the third-party cross-site trackers that are eagerly eyeing your Spotify activity too.

Treating people and their data like a consumable resource is scummy, but it’s common practice for most companies and websites these days, and the common response from the general public is typically a shrug (never mind that a survey of US adults revealed we place a high value on our personal data). However, it’s still a security risk. As we’ve seen repeatedly over the years, all it takes is one poorly-secured server or an unusually skilled hacker to compromise the personal data that companies like Spotify hold onto.

And to top things off, almost all of your Spotify profile’s information is public by default—so anyone else with a Spotify account can easily look you up unless you go out of your way to change your settings.

Luckily, you can limit some of the data Spotify and connected third-party apps collect, and can review the personal information the app has stored. Spotify doesn’t offer that many data privacy options, and many of them are spread out across its web, desktop, and mobile apps, but we’ll show you where to find them all and which ones you should enable for the most private Spotify listening experience possible. You know, relatively.

How to change your Spotify account’s privacy settings

The web player is where to start if you want to tune up your Spotify privacy. Almost all of Spotify’s data privacy settings are found there, rather than in the mobile or desktop apps.

We’ll start by cutting down on how much personal data you share with Spotify.

  1. Log in to Spotify’s web player on desktop.
  2. Click your user icon then go to Account > Edit profile.
  3. Remove or edit any personal info that you’re able to.
  4. Uncheck “Share my registration data with Spotify’s content providers for marketing purposes.”
  5. Click “Save Changes.”

Next, let’s limit how Spotify uses your personal data for advertising.

  1. Go to Account > Privacy settings.
  2. Turn off “Process my personal data for tailored ads.” Note that you’ll still get just as many ads—and Spotify will still track you—but your personal data will no longer be used to deliver you targeted ads.
  3. Turn off “Process my Facebook data.” This will stop Spotify from using your Facebook account data to further refine the ads you hear.

Lastly, go to Account > Apps to review all the external apps linked to your Spotify account and see a list of all devices you’re logged in to. Remove any you don’t need or use anymore.

How to review your Spotify account data

You can also see how much of your personal data Spotify has collected. At the bottom of the Privacy Settings page, there’s an option to download your Spotify data for review. While you can’t remove this data from your account, it shows you a selection of personal information, your listening and search history, and other data the company has collected. Click “Request” to begin the process. Note that it can take up to 30 days for Spotify to get your data ready for download.

How to hide public playlists and listening activity on Spotify

Your Spotify playlists and listening activity are public by default, but you can quickly turn them off or even block certain listening activity in Spotify’s web and desktop apps. While this doesn’t affect Spotify’s data tracking, it’s still a good idea to keep some info hidden if you’re trying to make Spotify as private as possible.

How to turn off Spotify listening activity

Desktop

  1. Click your profile image and go to Settings > Social
  2. Turn off “Make my new playlists public.”
  3. Turn off “Share my listening activity on Spotify.”

Mobile

  1. Tap the settings icon in the upper-right of the app.
  2. Scroll down to “Social.”
  3. Disable “Listening Activity.”

How to hide Spotify Playlists

Don’t forget to hide previously created playlists, which are made public by default. This can be done from the desktop, web, and mobile apps.

Mobile

  1. Open the “Your Library” tab.
  2. Select a playlist.
  3. Tap the three-dot icon in the upper-right of the screen.
  4. Select “Make Secret.”

Desktop app and web player

  1. Open a playlist from the library bar on the left.
  2. Click the three-dot icon by the Playlist’s name.
  3. Select “Make Secret.”

How to use Private Listening mode on Spotify

Spotify’s Private Listening mode also hides your listening activity, but you need to enable it manually each time you want to use it.

Mobile

  1. In the app, go to Settings > Social.
  2. Tap “Enable private session.”

Desktop app and web player

There are three ways to enable a Private session on desktop:

  • Click your profile picture then select “Private session.”
  • Or, click the “…” icon in the upper-left and go to File > Private session.
  • Or, go to Settings > Social and toggle “Start a private session to listen anonymously.”

Note that Private sessions only affect what other users see (or don’t see, rather). They don’t stop Spotify from tracking your activity—though as Wired points out, Spotify’s Privacy Policy vaguely implies Private Mode “may not influence” your recommendations, so it’s possible some data isn’t tracked while this mode is turned on. It’s better to use the privacy controls outlined in the sections above if you want to change how Spotify collects data.

How to limit third-party cookie tracking in Spotify

Turning on the privacy settings above will help reduce how much data Spotify tracks and uses for advertising and keep some of your Spotify listening history hidden from other users, but you should also take steps to limit how other apps and websites track your Spotify activity.


The desktop app has built-in cookie blocking controls that can do this:

  1. In the desktop app, click your username in the top right corner.
  2. Go to Settings > Show advanced settings.
  3. Scroll down to “Privacy” and turn on “Block all cookies for this installation of the Spotify desktop app.”
  4. Close and restart the app for the change to take effect.

For iOS and iPad users, you can disable app tracking in your device’s settings. Android users have a similar option, though it’s not as aggressive. And for those listening on the Spotify web player, use browsers with strict privacy controls like Safari, Firefox, or Brave.

The last resort: Delete your Spotify account

Even with all possible privacy settings turned on and Private Listening sessions enabled at all times, Spotify is still tracking your data. If that is absolutely unacceptable to you, the only real option is to delete your account. This will remove all your Spotify data for good—just make sure you download and back up any data you want to import to other services before you go through with it.

  1. Go to the Contact Spotify Support web page and sign in with your Spotify account.
  2. Select the “Account” section.
  3. Click “I want to close my account” from the list of options.
  4. Scroll down to the bottom of the page and click “Close Account.”
  5. Follow the on-screen prompts, clicking “Continue” each time to move forward.
  6. After the final confirmation, Spotify will send you an email with the cancellation link. Click the “Close My Account” button to verify you want to delete your account (this link is only active for 24 hours).

To be clear, we’re not advocating everyone go out and delete their Spotify accounts over the company’s privacy policy and advertising practices, but it’s always important to know how—and why—the apps and websites we use are tracking us. As we said at the top, even companies with the best intentions can fumble your data, unwittingly delivering it into the wrong hands.

Even if you’re cool with Spotify tracking you and don’t feel like enabling the options we’ve outlined in this guide, take a moment to tune up your account’s privacy with a strong password and two-factor sign-in, and remove any unnecessary info from your profile. These extra steps will help keep you safe if there’s ever an unexpected security breach.

Source: How to Limit Spotify From Tracking You, Because It Knows Too Much

Apple’s iPhone computer vision has the potential to preserve privacy but also break it completely

[…]

An AI on your phone will scan all those you have sent and will send to iCloud Photos. It will generate fingerprints that purportedly identify pictures, even if highly modified, that will be checked against fingerprints of known CSAM material. Too many of these – there’s a threshold – and Apple’s systems will let Apple staff investigate. They won’t get the pictures, but rather a voucher containing a version of the picture. But that’s not the picture, OK? If it all looks too dodgy, Apple will inform the authorities.
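
Stripped of the cryptography, the gating logic is simple. Apple's real design wraps it in private set intersection and threshold secret sharing so that below-threshold matches are supposed to stay unreadable even to Apple; the sketch below only shows the thresholding idea, with an illustrative cutoff (Apple has publicly spoken of an initial figure of around 30 matches).

    # Toy illustration of threshold-gated reporting. NOT Apple's protocol;
    # just the "too many of these and humans get involved" logic.
    THRESHOLD = 30  # illustrative value

    def scan_library(photo_fingerprints, known_csam_fingerprints,
                     threshold=THRESHOLD):
        matches = [fp for fp in photo_fingerprints
                   if fp in known_csam_fingerprints]
        if len(matches) >= threshold:
            return "flag account: vouchers become readable for human review"
        return "nothing reported"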

[…]

In a blog post “Recognizing People in Photos Through Private On-Device Machine Learning” last month, Apple plumped itself up and strutted its funky stuff on how good its new person recognition process is. Obscured, oddly lit, accessorised, madly angled and other bizarrely presented faces are no problemo, squire.

By dint of extreme cleverness and lots of on-chip AI, Apple says it can efficiently recognise everyone in a gallery of photos. It even has a Hawking-grade equation, just to show how serious it is, as proof that “finally, we rescale the obtained features by s and use it as logit to compute the softmax cross-entropy loss based on the equation below.” Go, look. It’s awfully science-y.
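
For what it's worth, the sentence being quoted describes a standard trick in modern face-recognition training rather than anything exotic: cosine similarities between a normalised embedding and per-identity weights are rescaled by a factor s and used as the logits of an ordinary softmax cross-entropy loss. A generic sketch of that loss (this is the common CosFace/ArcFace-style formulation; Apple's exact variant may differ):

    # Generic scaled-logit softmax cross-entropy, as used in metric-learning
    # face recognisers. Not Apple's exact equation.
    import math

    def scaled_softmax_ce(cosines, target, s=30.0):
        # cosines: cos(theta_j) between the embedding and each identity's
        # weight vector; target: index of the true identity.
        logits = [s * c for c in cosines]
        m = max(logits)  # subtract the max for numerical stability
        log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
        return log_sum - logits[target]  # equals -log softmax(target)

    print(scaled_softmax_ce([0.9, 0.1, -0.3], target=0))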

The post is 3,500 words long, complex, and a very detailed paper on computer vision, one of the two tags Apple has given it. The other tag, Privacy, can be entirely summarised in six words: it’s on-device, therefore it’s private. No equation.

That would be more comforting if Apple hadn’t said days later how on-device analysis is going to be a key component in informing law enforcement agencies about things they disapprove of. Put the two together, and there’s a whole new and much darker angle to the fact, sold as a major consumer benefit, that Apple has been cramming in as much AI as it can so it can look at pictures as you take and after you’ve stored them.

We’ve all been worried about how mobile phones are stuffed with sensors that can watch what we watch, hear what we hear, track where we go and note what we do. The evolving world of personal data privacy is based around these not being stored in the vast vaults of big data, keeping them from being grist to the mill of manipulating our digital personas.

But what happens if the phone itself grinds that corn? It may never share a single photograph without your permission, but what if it can look at that photograph and generate precise metadata about what, who, how, when, and where it depicts?

This is an aspect of edge computing that is ahead of the regulators, even those of the EU who want to heavily control things like facial recognition. By the time any such regulation is produced, countless millions of devices will be using it to ostensibly provide safe, private, friendly on-device services that make taking and keeping photographs so much more convenient and fun.

It’s going to be very hard to turn that off, and very easy to argue for exemptions that weaken the regs to the point of pointlessness. Especially if the police and security services lobby hard as well, which they will as soon as they realise that this defeats end-to-end encryption without even touching end-to-end encryption.

So yes, Apple’s anti-CSAM model is capable of being used without impacting the privacy of the innocent, if it is run exactly as version 1.0 is described. It is also capable of working with the advances elsewhere in technology to break that privacy utterly, without setting off the tripwires of personal protection we’re putting in place right now.

[…]

Source: Apple’s iPhone computer vision has the potential to preserve privacy but also break it completely • The Register

Senators ask Amazon how it will use palm print data from its stores

If you’re concerned that Amazon might misuse palm print data from its One service, you’re not alone. TechCrunch reports that Senators Amy Klobuchar, Bill Cassidy and Jon Ossoff have sent a letter to new Amazon chief Andy Jassy asking him to explain how the company might expand use of One’s palm print system beyond stores like Amazon Go and Whole Foods. They’re also worried the biometric payment data might be used for more than payments, such as for ads and tracking.

The politicians are concerned that Amazon One reportedly uploads palm print data to the cloud, creating “unique” security issues. The move also casts doubt on Amazon’s “respect” for user privacy, the senators said.

In addition to asking about expansion plans, the senators wanted Jassy to outline the number of third-party One clients, the privacy protections for those clients and their customers and the size of the One user base. The trio gave Amazon until August 26th to provide an answer.

[…]

The company has offered $10 in credit to potential One users, raising questions about its eagerness to collect palm print data. This also isn’t the first time Amazon has clashed with government

[…]

Amazon declined to comment, but pointed to an earlier blog post where it said One palm images were never stored on-device and were sent encrypted to a “highly secure” cloud space devoted just to One content.

Source: Senators ask Amazon how it will use palm print data from its stores (updated) | Engadget

Basically, keeping all these palm prints in the cloud is an incredibly insecure way to store biometric data that people can never change, short of burning their palms off.