Boffins propose Pretty Good Phone Privacy to end pretty invasive location data harvesting by telcos

[…] In “Pretty Good Phone Privacy,” [PDF] a paper scheduled to be presented on Thursday at the Usenix Security Symposium, Schmitt and Barath Raghavan, assistant professor of computer science at the University of Southern California, describe a way to re-engineer the mobile network software stack so that it doesn’t betray the location of mobile network customers.

“It’s always been thought that since cell towers need to talk to phones then all users have to accept the status quo in which mobile operators track our every movement and sell the data to data brokers (as has been extensively reported),” said Schmitt. “We show how it’s possible to protect users’ mobile privacy while at the same time providing normal connectivity, and to do so without changing any of the hardware in mobile networks.”

In recent years, mobile carriers have been routinely selling and leaking location data, to the detriment of customer privacy. Efforts to alter the status quo have been hampered by an uneven regulatory landscape, the resistance of data brokers that profit from the status quo, and the assumption that cellular network architecture requires knowing where customers are located.

[…]

The purpose of Pretty Good Phone Privacy (PGPP) is to avoid using a unique identifier for authenticating customers and granting access to the network. It’s a technology that allows a Mobile Virtual Network Operator (MVNO) to issue SIM cards with identical SUPIs for every subscriber because the SUPI is only used to assess the validity of the SIM card. The PGPP network can then assign an IP address and a GUTI (Globally Unique Temporary Identifier) that can change in subsequent sessions, without telling the MVNO where the customer is located.

“We decouple network connectivity from authentication and billing, which allows the carrier to run Next Generation Core (NGC) services that are unaware of the identity or location of their users but while still authenticating them for network use,” the paper explains. “Our architectural change allows us to nullify the value of the user’s SUPI, an often targeted identifier in the cellular ecosystem, as a unique identifier.”
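The quoted design decouples "is this a paying customer?" from "who and where is this customer?". The paper's full protocol is more involved (as I understand it, it uses cryptographic tokens so that even the issuer can't link network usage back to a subscriber), but the core idea can be sketched in a few lines of Python. Everything below is a simplified illustration under those assumptions, not PGPP's actual code:

```python
# Conceptual sketch only -- not the PGPP implementation. A billing service hands out
# single-use access tokens that carry no identity; the core network merely checks that
# a token is valid and unspent before assigning a temporary identifier (GUTI/IP).
# In the real design the tokens would be blindly signed so the issuer can't link a
# spent token to the subscriber it was sold to; an HMAC stands in for that here.
import hashlib
import hmac
import secrets

BILLING_KEY = secrets.token_bytes(32)   # held by the billing side
SPENT: set[str] = set()                 # tokens already redeemed

def issue_access_token() -> tuple[str, str]:
    """Billing: the subscriber proves they paid and receives an anonymous, single-use token."""
    token = secrets.token_hex(16)
    tag = hmac.new(BILLING_KEY, token.encode(), hashlib.sha256).hexdigest()
    return token, tag

def grant_connectivity(token: str, tag: str) -> bool:
    """Core network: verify the token, never learning which subscriber presented it."""
    expected = hmac.new(BILLING_KEY, token.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag) or token in SPENT:
        return False
    SPENT.add(token)                    # prevent replay
    return True

tok, tag = issue_access_token()
print(grant_connectivity(tok, tag))     # True  -- connectivity granted, identity unknown
print(grant_connectivity(tok, tag))     # False -- single use
```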

[…]

Its primary focus is defending against the surreptitious sale of location data by network providers.

[…]

Schmitt argues PGPP will help mobile operators comply with current and emerging data privacy regulations in US states like California, Colorado, and Virginia, and with post-GDPR rules in Europe.

Source: Boffins propose Pretty Good Phone Privacy to end pretty invasive location data harvesting by telcos • The Register

China stops networked vehicle data going offshore under new infosec rules

China has drafted new rules required of its autonomous and networked vehicle builders.

Data security is front and centre in the rules, with manufacturers required to store data generated by cars – and describing their drivers – within China. Data is allowed to go offshore, but only after government scrutiny.

Manufacturers are also required to name a chief of network security, who gets the job of ensuring autonomous vehicles can’t fall victim to cyber attacks. Made-in-China auto-autos are also required to be monitored to detect security issues.

Over-the-air upgrades are another requirement, with vehicle owners to be offered verbose information about the purpose of software updates, the time required to install them, and the status of upgrades.

Behind the wheel, drivers must be informed about the vehicle’s capabilities and the responsibilities that rest on their human shoulders. All autonomous vehicles will be required to detect when a driver’s hands leave the wheel, and to detect when it’s best to cede control to a human.

If an autonomous vehicle’s guidance systems fail, it must be able to hand back control.

[…]

Source: China stops networked vehicle data going offshore under new infosec rules • The Register

And again China is doing what the EU and US should be doing to a certain extent.

Have you made sure you have changed these Google Pay privacy settings?

Google Pay is an online payment system and digital wallet that makes it easy to buy anything on your mobile device or with your mobile device. But if you’re concerned about what Google is doing with all your data (which you probably should be), Google doesn’t make it easy to manage your settings: Google Pay keeps some of them on a hidden page.

A report from Bleeping Computer shows that privacy settings aren’t available through the main Google Pay settings page that is accessible through the navigation sidebar.

The URL for that settings page is:

https://pay.google.com/payments/u/0/home#settings

On that page, users can change general settings like address and payment methods.

But if users want to change privacy settings, they have to go to a separate page:

https://pay.google.com/payments/u/0/home?page=privacySettings#privacySettings

On that screen, users can adjust all the same settings available on the other settings page, but they can also address three additional privacy settings—controlling whether Google Pay is allowed to share account information, personal information, and creditworthiness.

Here’s the full language of those three options:

- Allow Google Payment Corporation to share third party creditworthiness information about you with other companies owned and controlled by Google LLC for their everyday business purposes.

- Allow your personal information to be used by other companies owned and controlled by Google LLC to market to you. Opting out here does not impact whether other companies owned and controlled by Google LLC can market to you based on information you provide to them outside of Google Payment Corporation.

- Allow Google LLC or its affiliates to inform a third party merchant, whose site or app you visit, whether you have a Google Payments account that can be used for payment to that merchant. Opting out may impact your ability to use Google Payments to transact with certain third party merchants.

According to Bleeping Computer, the default of Google Pay is to enable all the above settings. In order to opt out, users have to go to the special URL that is not accessible through the navigation bar.

As the Reddit post that inspired the Bleeping Computer report claims, this discrepancy makes it appear that Google Pay is hiding its privacy options. “Google is not walking the talk when it claims to make it easy for their users to control the privacy and use of their own data,” the Redditor surmised.

A Google spokesperson told Gizmodo they’re working to make the privacy settings more accessible. “The different settings views described here are an issue resulting from a previous software update and we are working to fix this right away so that these privacy settings are always visible on pay.google.com,” the spokesperson told Gizmodo.

“All users are currently able to access these privacy settings via the ‘Google Payments privacy settings page’ link in the Google Pay privacy notice.”

In the meantime, here’s that link again for the privacy settings. Go ahead and uncheck those three boxes, if you feel so inclined.

Source: How To Find Google Pay’s Hidden Privacy Settings

Here’s hoping that my bank can set up its own version of Google Pay instead of integrating with it. I definitely don’t want Google or Apple getting their grubby little paws on my financial data.

Create virtual cards to pay online with Privacy

- Protect your card details and your money by creating virtual cards at each place you spend online, or for each purchase
- Create single-use cards that close themselves automatically
- Browser extension to create and auto-fill card numbers at checkout
- Privacy Cards put the control in your hands when you make a purchase online. Business or personal, one-time or subscription, you decide who can charge your card, how much, and how often, and you can close a card at any time

Source: Privacy – Smarter Payments

WhatsApp head says Apple’s child safety update is a ‘surveillance system’

One day after Apple confirmed plans for new software that will allow it to detect images of child abuse on users’ iCloud photos, Facebook’s head of WhatsApp says he is “concerned” by the plans.

In a thread on Twitter, Will Cathcart called it an “Apple built and operated surveillance system that could very easily be used to scan private content for anything they or a government decides it wants to control.” He also raised questions about how such a system may be exploited in China or other countries, or abused by spyware companies.

[…]

Source: WhatsApp head says Apple’s child safety update is a ‘surveillance system’ | Engadget

Pots and kettles – but he’s right, though. This is a very serious privacy lapse on Apple’s part.

Apple confirms it will begin scanning your iCloud Photos

[…] Apple told TechCrunch that the detection of child sexual abuse material (CSAM) is one of several new features aimed at better protecting the children who use its services from online harm, including filters to block potentially sexually explicit photos sent and received through a child’s iMessage account. Another feature will intervene when a user tries to search for CSAM-related terms through Siri and Search.

Most cloud services — Dropbox, Google, and Microsoft to name a few — already scan user files for content that might violate their terms of service or be potentially illegal, like CSAM. But Apple has long resisted scanning users’ files in the cloud by giving users the option to encrypt their data before it ever reaches Apple’s iCloud servers.

Apple said its new CSAM detection technology — NeuralHash — instead works on a user’s device, and can identify if a user uploads known child abuse imagery to iCloud without decrypting the images until a threshold is met and a sequence of checks to verify the content are cleared.

News of Apple’s effort leaked Wednesday when Matthew Green, a cryptography professor at Johns Hopkins University, revealed the existence of the new technology in a series of tweets. The news was met with some resistance from some security experts and privacy advocates, but also users who are accustomed to Apple’s approach to security and privacy that most other companies don’t have.

Apple is trying to calm fears by baking in privacy through multiple layers of encryption, fashioned in a way that requires multiple steps before it ever makes it into the hands of Apple’s final manual review.
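Apple hasn’t published NeuralHash or its exact thresholding scheme, so the following is only a toy illustration of the “threshold before anything is flagged” idea described above, with made-up hash values; the real system reportedly uses on-device perceptual hashing plus cryptographic threshold techniques rather than a plain counter:

```python
# Toy illustration of threshold matching only -- not Apple's NeuralHash, and not its
# private set intersection / threshold secret-sharing machinery. Hashes and the
# threshold value are made up.
KNOWN_HASHES = {"a1b2c3", "d4e5f6", "0f9e8d"}   # stand-ins for hashes of known material
REPORT_THRESHOLD = 3                             # nothing is surfaced below this count

def match_count(upload_hashes: list[str]) -> int:
    """How many uploaded photos hash to a known entry."""
    return sum(1 for h in upload_hashes if h in KNOWN_HASHES)

def should_escalate(upload_hashes: list[str]) -> bool:
    """Only once the count crosses the threshold would anything reach
    (in Apple's description) a human review step."""
    return match_count(upload_hashes) >= REPORT_THRESHOLD

print(should_escalate(["a1b2c3", "zzzzzz"]))                 # False: a single match is ignored
print(should_escalate(["a1b2c3", "d4e5f6", "0f9e8d", "x"]))  # True: threshold met
```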

[…]

Source: Apple confirms it will begin scanning iCloud Photos for child abuse images | TechCrunch

No matter what the cause, they have no right to be scanning your stuff at all, for any reason, at any time.

Apple is about to start scanning iPhone users’ photos

Apple is about to announce a new technology for scanning individual users’ iPhones for banned content. While it will be billed as a tool for detecting child abuse imagery, its potential for misuse is vast based on details entering the public domain.

The neural network-based tool will scan individual users’ iDevices for child sexual abuse material (CSAM), respected cryptography professor Matthew Green told The Register today.

Rather than using age-old hash-matching technology, however, Apple’s new tool – due to be announced today along with a technical whitepaper, we are told – will use machine learning techniques to identify images of abused children.

[…]Indiscriminately scanning end-user devices for CSAM is a new step in the ongoing global fight against this type of criminal content. In the UK the Internet Watch Foundation’s hash list of prohibited content is shared with ISPs who then block the material at source. Using machine learning to intrusively scan end user devices is new, however – and may shake public confidence in Apple’s privacy-focused marketing.

[…]

Governments in the West and authoritarian regimes alike will be delighted by this initiative, Green feared. What’s to stop China (or some other censorious regime such as Russia or the UK) from feeding images of wanted fugitives into this technology and using that to physically locate them?

[…]

“Apple will hold the unencrypted database of photos (really the training data for the neural matching function) and your phone will hold the photos themselves. The two will communicate to scan the photos on your phone. Alerts will be sent to Apple if *multiple* photos in your library match, it can’t just be a single one.”

The privacy-busting scanning tech will be deployed against America-based iThing users first, with the idea being to gradually expand it around the world as time passes. Green said it would be initially deployed against photos backed up in iCloud before expanding to full handset scanning.

[…]

Source: Apple is about to start scanning iPhone users’ devices for banned content, warns professor • The Register

Wow. No matter what the pretext (and the pretext of catching sex offenders is very often just the first step on a much longer road, because hey, who can be against bringing sex offenders to justice, right?), Apple has just basically said that it thinks it has the right to read whatever it likes on your phone. So much for privacy! So what will be next? Your emails? Text messages? Location history (again)?

As a user, you actually bought this hardware – anyone you don’t explicitly give consent to (and that means consent not coerced by, for example, limiting functionality) should stay out of it!

Amazon hit with $887 million fine by European privacy watchdog

Amazon has been issued with a fine of 746 million euros ($887 million) by a European privacy watchdog for breaching the bloc’s data protection laws.

The fine, disclosed by Amazon on Friday in a securities filing, was issued two weeks ago by Luxembourg’s privacy regulator.

The Luxembourg National Commission for Data Protection said Amazon’s processing of personal data did not comply with the EU’s General Data Protection Regulation.

[…]

Source: Amazon hit with $887 million fine by European privacy watchdog

Pretty massively strange that they don’t tell us what exactly they are fining Amazon for…

QR Menu Codes Are Tracking You More Than You Think

If you’ve returned to the restaurants and bars that have reopened in your neighborhood lately, you might have noticed a new addition to the post-quarantine decor: QR codes. Everywhere. And as they’ve become more ubiquitous on the dining scene, so has the quiet tracking and targeting that they do.

That’s according to a new analysis by the New York Times, which found these QR codes have the ability to collect customer data—enough to create what Jay Stanley, a senior policy analyst at the American Civil Liberties Union, called an “entire apparatus of online tracking” that remembers who you are every time you sit down for a meal. While the data itself contains pretty uninteresting information, like your order history or contact information, it turns out there’s nothing stopping that data from being passed to whomever the establishment wants.

[…]

But as the Times piece points out, these little pieces of tech aren’t as innocuous as they might initially seem. Aside from storing data like menus or drink options, QR codes are often designed to transmit certain data about the person who scanned them in the first place—like their phone number or email address, along with how often the user might be scanning the code in question. This data collection comes with a few perks for the restaurants that use the codes (they know who their repeat customers are and what they might order). The only problem is that we don’t actually know where that data goes.

Source: QR Menu Codes Are Tracking You More Than You Think

Note for ant fuckers: the QR code does not in fact “transmit” anything – if you follow the URL in the code, the server behind it sees that you have visited and then collects data based on what you do on the server, but also on the initial connection (e.g. location through IP address, URL parameters which can include location information, OS, browser type, etc. etc.)
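To make that concrete, here’s a rough sketch of the server side of a QR-code menu link, using only Python’s standard library; the hostname, parameters and port are made up. The code in the QR image is just a URL, and everything interesting happens when your phone opens it:

```python
# Sketch of the server behind a QR-code menu URL -- hostname, parameters and port
# are made up. Note how much is available from the very first request, before the
# visitor taps anything.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

class MenuHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        visit = {
            "client_ip": self.client_address[0],           # rough location via geo-IP
            "user_agent": self.headers.get("User-Agent"),  # OS, browser, device model
            "venue": query.get("venue"),                   # baked into the QR code's URL
            "table": query.get("table"),                   # ditto -- ties you to a seat
        }
        print("logged visit:", visit)                      # a real operator stores/shares this
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(b"<html><body>Menu goes here</body></html>")

if __name__ == "__main__":
    # The QR code itself just encodes something like:
    #   https://menus.example.com/?venue=cafe-42&table=7
    HTTPServer(("0.0.0.0", 8080), MenuHandler).serve_forever()
```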

Inside the Industry That Unmasks People at Scale: yup your mobile advertising ID isn’t anonymous either

Tech companies have repeatedly reassured the public that trackers used to follow smartphone users through apps are anonymous or at least pseudonymous, not directly identifying the person using the phone. But what they don’t mention is that an entire overlooked industry exists to purposefully and explicitly shatter that anonymity.

They do this by linking mobile advertising IDs (MAIDs) collected by apps to a person’s full name, physical address, and other personally identifiable information (PII). Motherboard confirmed this by posing as a potential customer to a company that offers to link MAIDs to PII.

“If shady data brokers are selling this information, it makes a mockery of advertisers’ claims that the truckloads of data about Americans that they collect and sell is anonymous,” Senator Ron Wyden told Motherboard in a statement.

“We have one of the largest repositories of current, fresh MAIDS<>PII in the USA,” Brad Mack, CEO of data broker BIGDBM told us when we asked about the capabilities of the product while posing as a customer. “All BIGDBM USA data assets are connected to each other,” Mack added, explaining that MAIDs are linked to full name, physical address, and their phone, email address, and IP address if available. The dataset also includes other information, “too numerous to list here,” Mack wrote.

A MAID is a unique identifier a phone’s operating system assigns to an individual device. For Apple, that is the IDFA, which Apple has recently moved to largely phase out. For Google, that is the AAID, or Android Advertising ID. Apps often grab a user’s MAID and provide it to a host of third parties. In one leaked dataset from a location tracking firm called Predicio previously obtained by Motherboard, the data included the precise locations of users of a Muslim prayer app. That data was somewhat pseudonymized, because it didn’t contain the specific users’ names, but it did contain their MAIDs. Because of firms like BIGDBM, another company that buys the sort of data Predicio had could take that or similar data and attempt to unmask the people in the dataset simply by paying a fee.
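The “unmasking” itself is mechanically trivial once someone holds both datasets. A toy sketch with entirely fabricated records shows how a MAID-keyed location feed joins against the kind of MAID-to-PII lookup BIGDBM was pitching:

```python
# Toy re-identification join. Every record here is fabricated.
# Dataset A: "pseudonymous" location events keyed by mobile advertising ID (MAID).
location_events = [
    {"maid": "38400000-8cf0-11bd-b23e-10b96e40000d", "lat": 40.71, "lon": -74.00, "ts": "2021-05-01T06:45"},
    {"maid": "7f104c2a-1111-4b1c-9d2e-222233334444", "lat": 34.05, "lon": -118.24, "ts": "2021-05-01T07:10"},
]

# Dataset B: the broker product -- MAID linked to PII.
maid_to_pii = {
    "38400000-8cf0-11bd-b23e-10b96e40000d": {
        "name": "Jane Example", "address": "123 Main St", "email": "jane@example.com",
    },
}

# The "unmasking" is a dictionary lookup: pseudonymous events become named people.
for event in location_events:
    pii = maid_to_pii.get(event["maid"])
    if pii:
        print(f'{pii["name"]} ({pii["email"]}) was near ({event["lat"]}, {event["lon"]}) at {event["ts"]}')
    else:
        print(f'{event["maid"]}: no PII match (yet)')
```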

[…]

“This real-world research proves that the current ad tech bid stream, which reveals mobile IDs within them, is a pseudonymous data flow, and therefore not-compliant with GDPR,” Edwards told Motherboard in an online chat.

“It’s an anonymous identifier, but has been used extensively to report on user behaviour and enable marketing techniques like remarketing,” a post on the website of the Internet Advertising Bureau, a trade group for the ad tech industry, reads, referring to MAIDs.

In April Apple launched iOS 14.5, which introduced sweeping changes to how apps can track phone users by making each app explicitly ask for permission to track them. That move has resulted in a dramatic dip in the amount of data available to third parties, with just 4 percent of U.S. users opting-in. Google said it plans to implement a similar opt-in measure broadly across the Android ecosystem in early 2022.

[…]

Source: Inside the Industry That Unmasks People at Scale

Sam Altman’s New Startup Wants to Give You Crypto for Eyeball Scans – yes, this is a terrible Dr Evil plan

You should probably sit down for this one. Sam Altman, the former CEO of famed startup incubator Y Combinator, is reportedly working on a new cryptocurrency that’ll be distributed to everyone on Earth. Once you agree to scan your eyeballs.

Yes, you read correctly.

You can thank Bloomberg for inflicting this cursed news on the rest of us. In its report, Bloomberg says Altman’s forthcoming cryptocurrency and the company behind it, both dubbed Worldcoin, recently raised $25 million from investors. The company is purportedly backed by Andreessen Horowitz, LinkedIn founder Reid Hoffman, and Day One Ventures.

“I’ve been very interested in things like universal basic income and what’s going to happen to global wealth redistribution and how we can do that better,” Altman told Bloomberg, explaining what fever dream inspired this.
[…]

What supposedly makes Worldcoin different is it adds a hardware component to cryptocurrency in a bid to “ensur[e] both humanness and uniqueness of everybody signing up, while maintaining their privacy and the overall transparency of a permissionless blockchain.” Specifically, Bloomberg says the gadget is a portable “silver-colored spherical gizmo the size of a basketball” that’s used to scan people’s irises. It’s undergoing testing in some cities, and since Worldcoin is not yet ready for distribution, the company is giving volunteers other cryptocurrencies like Bitcoin in exchange for participating. There are supposedly fewer than 20 prototypes of this eyeball scanning orb, and currently, each reportedly costs $5,000 to make.

Supposedly the whole iris scanning thing is “essential” as it would generate a “unique numerical code” for each person, thereby discouraging scammers from signing up multiple times. As for the whole privacy problem, Worldcoin says the scanned image is deleted afterward and the company purportedly plans to be “as transparent as possible.”

Source: Sam Altman’s New Startup Wants to Give You Crypto for Eyeball Scans

Advertisers Are Selling Americans’ Data to Hundreds of Shady Foreign Businesses

Senator Ron Wyden has released a list of hundreds of secretive, foreign-owned companies that are buying up Americans’ data. Some of the customers include companies based in states that are ostensibly “unfriendly” to the U.S., like Russia and China.

First reported by Motherboard, the news comes after recent information requests made by a bipartisan coalition of Senators, who asked prominent advertising exchanges to provide a transparent list of any “foreign-headquartered or foreign-majority owned” firms to whom they sell consumer “bidstream data.” Such data is typically collected, bought, and sold amidst the intricate advertising ecosystem, which uses “real-time bidding” to monetize consumer preferences and interests.

Wyden, who helped lead the effort, has expressed concerns that Americans’ data could fall into the hands of foreign intelligence agencies to “supercharge hacking, blackmail, and influence campaigns,” as a previous letter from him and other Senators puts it.

“Few Americans realize that some auction participants are siphoning off and storing ‘bidstream’ data to compile exhaustive dossiers about them. In turn, these dossiers are being openly sold to anyone with a credit card, including to hedge funds, political campaigns, and even to governments,” the letter states.

In response to the information requests, most companies seem to have responded with vague, evasive answers. However, advertising firm Magnite has provided a list of over 150 different companies it sells to while declining to note which countries they are based in. Wyden’s staff spent time researching the companies and Motherboard reports that the list includes the likes of Adfalcon—a large ad firm based in Dubai that calls itself the “first mobile advertising network in the Middle East”—as well as Chinese companies like Adtiming and Mobvista International.

Magnite’s response further shows that the kinds of data it provides to these companies may include all sorts of user information—including age, name, and the site names and domains they visit, device identifiers, IP address, and other information that would help any discerning observer piece together a fairly comprehensive picture of who you are, where you’re located, and what you’re interested in.

You can peruse the full list of companies that Magnite works with and, foreign ownership aside, they just naturally sound creepy. With confidence-inspiring names like “12Mnkys,” “Freakout,” “CyberAgent Dynalst,” and “Zucks,” these firms—many of which you’d be hard-pressed to even find an accessible website for—are doing God knows what with the data they procure.

The question naturally arises: How is it that these companies that we know literally nothing about seem to have access to so much of our personal information? Also: Where are the federal regulations when you need them?

Source: Advertisers Are Selling Americans’ Data to Hundreds of Shady Foreign Businesses

And that’s why Europe has GDPR

Windows Users Surprised by Windows 11’s Short List of Supported CPUs – and front facing camera requirements

While a lot of focus has been on the TPM requirements for Windows 11, Microsoft has since updated its documentation to provide a complete list of supported processors. At present the list includes only Intel 8th Generation Core processors or newer, and AMD Ryzen Zen+ processors or newer, effectively limiting Windows 11 to PCs less than 4-5 years old.

Notably absent from the list is the Intel Core i7-7820HQ, the processor used in Microsoft’s current flagship $3500+ Surface Studio 2. This has prompted many threads on Reddit from users angry that their (in some cases very new) Surface PC is failing the Windows 11 upgrade check.
The Verge confirms: Windows 11 will only support 8th Gen and newer Intel Core processors, alongside [Intel’s 2016-era] Apollo Lake and newer Pentium and Celeron processors. That immediately rules out millions of existing Windows 10 devices from upgrading to Windows 11… Windows 11 will also only support AMD Ryzen 2000 and newer processors, and 2nd Gen or newer [AMD] EPYC chips. You can find the full list of supported processors on Microsoft’s site…

Originally, Microsoft noted that CPU generation requirements are a “soft floor” limit for the Windows 11 installer, which should have allowed some older CPUs to be able to install Windows 11 with a warning, but hours after we published this story, the company updated that page to explicitly require the list of chips above.

Many Windows 10 users have been downloading Microsoft’s PC Health App (available here) to see whether Windows 11 works on their systems, only to find it fails the check… This is the first significant shift in Windows hardware requirements since the release of Windows 8 back in 2012, and the CPU changes are understandably catching people by surprise.

Microsoft is also requiring a front-facing camera for all Windows 11 devices except desktop PCs from January 2023 onwards.
“In order to run Windows 11, devices must meet the hardware specifications,” explains Microsoft’s official compatibility page for Windows 11.

“Devices that do not meet the hardware requirements cannot be upgraded to Windows 11.”

Source: Windows Users Surprised by Windows 11’s Short List of Supported CPUs – Slashdot

Why on earth should Microsoft require that it can look at you?!

Amazon is blocking Google’s FLoC

Most of Amazon’s properties including Amazon.com, WholeFoods.com and Zappos.com are preventing Google’s tracking system FLoC — or Federated Learning of Cohorts — from gathering valuable data reflecting the products people research in Amazon’s vast e-commerce universe, according to website code analyzed by Digiday and three technology experts who helped Digiday review the code.

Amazon declined to comment on this story.

As Google’s system gathers data about people’s web travels to inform how it categorizes them, Amazon’s under-the-radar move could not only be a significant blow to Google’s mission to guide the future of digital ad tracking after cookies die — it could give Amazon a leg up in its own efforts to sell advertising across what’s left of the open web.

[…]

Digiday watched last week as Amazon added code to its digital properties to block FLoC from tracking visitors using Google’s Chrome browser. For example, while earlier in the week WholeFoods.com and Woot.com did not include code to block FLoC, by Thursday Digiday saw that those sites did feature code telling Google’s system not to include activities of their visitors to inform cohorts or assign IDs. But Amazon’s blocking appears scattered.
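Digiday doesn’t reproduce Amazon’s exact code, but the documented way for a site to tell Chrome not to use its visits for FLoC cohort calculation was the Permissions-Policy response header. A minimal sketch (the server itself is just scaffolding, not anything Amazon runs):

```python
# Illustrative only: a tiny WSGI app that opts its pages out of FLoC cohort
# calculation using the documented Permissions-Policy header.
from wsgiref.simple_server import make_server

def app(environ, start_response):
    headers = [
        ("Content-Type", "text/html; charset=utf-8"),
        ("Permissions-Policy", "interest-cohort=()"),  # "don't use visits here for FLoC"
    ]
    start_response("200 OK", headers)
    return [b"<html><body>This page is excluded from FLoC cohorts.</body></html>"]

if __name__ == "__main__":
    make_server("0.0.0.0", 8000, app).serve_forever()
```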

[..]

Source: Amazon is blocking Google’s FLoC — and that could seriously weaken the system

Apple settles with student after authorized repair workers leaked her naked pics to her Facebook page. Apple blocks Right to Repair, citing the danger posed by unauthorised parties. Hmm.

Apple has paid a multimillion-dollar settlement to an unnamed Oregon college student after one of its outsourced repair facilities posted explicit pictures and videos of her to her Facebook page.

According to legal documents obtained by The Telegraph, the incident occurred in 2016 at a Pegatron-owned repair centre in Sacramento, California. The student had mailed in her device to have an unspecified fault fixed.

While it was at the facility, two technicians published a series of photographs showing the complainant unclothed to her Facebook account, as well as a “sex video.” The complaint said the post was made in a way that impersonated the victim, and was only removed after friends informed her of its existence.

The two men responsible were fired after an investigation. It is not known if the culprits faced criminal charges.

Much of the details of the case, as well as the exact size of the settlement, were sealed. Lawyers for the plaintiff sought a $5m payout. The settlement included non-disclosure provisions that prevented the student from revealing details about the case, or the exact size of the compensation.

Counsel for the victim threatened to sue for infliction of emotional distress, as well as invasion of privacy. The filings show they warned Apple that any lawsuit would result in inevitable negative publicity for the company.

Pegatron settled with the victim separately, per the filings.

In its fight against the right to repair, Apple has argued that allowing independent third-party businesses to service its computers and smartphones would present an unacceptable risk to user privacy and security.

This incident, which occurred at the facilities of an authorised contractor, has undercut that argument somewhat.

It follows a similar incident in November 2019, where a Genius Bar employee texted himself an explicit image taken from an iPhone he was repairing. After the victim complained, the employee was fired.

[…]

Source: Apple settles with student after authorized repair workers leaked her naked pics to her Facebook page • The Register

Google, Facebook, Chaos Computer Club join forces to oppose German state spyware

Plans by the German government to allow the police to deploy malware on any target’s devices, and force the tech world to help them, have run into some opposition, funnily enough.

In an open letter this month, the Chaos Computer Club – along with Google, Facebook, and others – said they are against proposals to dramatically expand the use of so-called state trojans, aka government-made spyware, in Germany. Under planned legislation, even people not suspected of committing a crime can be infected, and service providers will be forced to help. Plus all German spy agencies will be allowed to infiltrate people’s electronics and communications.

The proposals bypass the whole issue of backdooring or weakening encryption that American politicians seem fixated on. Once you have root access on a person’s computer or handheld, the device can be an open book, encryption or not.

“The proposals are so absurd that all of the experts invited to the committee hearing in the Bundestag sharply criticized the ideas,” the CCC said.

“Even Facebook and Google – so far not positively recognized as pioneers of privacy – speak out vehemently against the project. Protect security and trust online – against an unlimited expansion of surveillance and for the protection of encryption.”

Source: Google, Facebook, Chaos Computer Club join forces to oppose German state spyware • The Register

Google reportedly made it harder to find Android privacy settings

Google’s approach to Android privacy is coming under fire following revelations from Arizona’s antitrust lawsuit over phone tracking. As Insider reports, freshly unredacted documents in the case suggest Google made Android privacy settings harder to find. When Google tested OS releases that surfaced privacy features, the company reportedly saw greater use of those features as a “problem” and aimed to put them deeper into the menu system.

The tech giant also “successfully pressured” phone brands like LG to bury location settings as they were popular, according to Arizona’s attorneys. Google personnel further acknowledged that it was difficult to stop the company from determining your home and work locations, and complained that there was “no way” to give third-party apps your location without also handing them to Google.

[…]

Source: Google reportedly made it harder to find Android privacy settings | Engadget

WhatsApp Won’t Limit Functionality if You Refuse Privacy Policy – for now. But it will pester you about it.

WhatsApp initially threatened to revoke core functions for users that refused to accept its controversial new privacy policy, only to walk back the severity of those consequences earlier this month amid international backlash, and now, it’s doing away with them altogether (for the time being, at least).

In a reversal, the company clarified on Friday that it won’t restrict any functionality even if you haven’t accepted the app’s updated privacy policy yet, TNW reports.

“Given recent discussions with various authorities and privacy experts, we want to make clear that we will not limit the functionality of how WhatsApp works for those who have not yet accepted the update,” a WhatsApp spokesperson said in a statement to the Verge. They added that this is the plan moving forward indefinitely.

In an update to the company’s FAQ page, WhatsApp clarifies that no users will have their accounts deleted or lose functionality if they don’t accept the new policies. That being said, WhatsApp will still send these users reminders to update “from time to time,” WhatsApp told the Verge. On its support page, WhatsApp claims that the majority of users who have seen the update have accepted.

Source: WhatsApp Won’t Limit Functionality if You Refuse Privacy Policy

Creepy Social Media Face Stealing firm Clearview hit with complaints in France, Austria, Italy, Greece and the UK

Data rights groups have filed complaints in the UK, France, Austria, Greece and Italy against Clearview AI, claiming its scraped and searchable database of biometric profiles breaches both the EU and UK General Data Protection Regulation (GDPR).

The facial recognition company, which is based in the US, claims to have “the largest known database of 3+ billion facial images”. Clearview AI’s facial recognition tool is trained on images harvested from YouTube, Facebook and Twitter, and attempts to match faces fed into its machine learning software with results from its multi-billion picture database. The business then provides a link to the place it found the “match”.

Google, Twitter, Facebook and even Venmo all sent cease and desist letters to Clearview AI last year asking that it stop scraping people’s photos from their websites. The firm’s CEO defended its business model at the time by saying: “Google can pull in information from all different websites. So if it’s public and it’s out there and could be inside Google’s search engine, it can be inside ours as well.”

The US firm was sued last year by the American Civil Liberties Union. The ACLU also sued the US Department of Homeland Security and its law enforcement agencies last month for failing to respond to Freedom of Information Act requests about their use of Clearview’s tech.

[…]

Back in January this year [PDF], Chaos Computer Club member Matthias Marx managed to get Clearview to delete the hash value representing his biometric profile – although not the actual images or metadata – after filing a complaint with the Hamburg data protection authorities.

The decision by the Hamburg DPA was that Clearview AI had added his biometric profile to its searchable database without his knowledge or consent. It did not order the deletion of the photographs, however.

“It is long known that Clearview AI has not only me, but many, probably thousands of Europeans in its illegal face database. An order by the European data protection authorities to remove the faces of all Europeans is long overdue,” Marx told The Reg via email. “It is not a solution that every person has to file [their] own complaint.”

[…]

Source: Facial recog firm Clearview hit with complaints in France, Austria, Italy, Greece and the UK • The Register

The New Sonos One SL Reminds Us That Smart Devices Have a Shelf Life – and forces you onto the spying S2 app

[…]

if you’re thinking of buying a new One SL, you ought to keep in mind that it’ll only work with the newer Sonos S2 app.

This won’t be a problem for every Sonos owner, especially if you bought all your Sonos devices in the past year or two. It might be an issue, however, if you’re still operating a mix of newer and older Sonos hardware. Namely, the “legacy” Sonos products that were “killed off” last year. Those legacy gadgets will only work with the S1 app, and although Sonos committed to providing updates for these devices, controlling a mix of legacy and current Sonos gadgets isn’t possible on the S2 app.

[…]

Source: The New Sonos One SL Reminds Us That Smart Devices Have a Shelf Life

You can’t roll back the update, which basically only seems to add rounded corners to backgrounds and break in dark mode – except that with S2 you also allow Sonos to spy on you through the built-in microphone.

Pentagon Surveilling Americans Without a Warrant, Senator Reveals

Senator Wyden’s office asked the Department of Defense (DoD), which includes various military and intelligence agencies such as the National Security Agency (NSA) and the Defense Intelligence Agency (DIA), for detailed information about its data purchasing practices after Motherboard revealed special forces were buying location data. The responses also touched on military or intelligence use of internet browsing and other types of data, and prompted Wyden to demand more answers specifically about warrantless spying on American citizens.

Some of the answers the DoD provided were given in a form that means Wyden’s office cannot legally publish specifics on the surveillance; one answer in particular was classified. In the letter Wyden is pushing the DoD to release the information to the public.

[…]

“Are any DoD components buying and using without a court order internet metadata, including ‘netflow’ and Domain Name System (DNS) records,” the question read, and asked whether those records were about “domestic internet communications (where the sender and recipient are both U.S. IP addresses)” and “internet communications where one side of the communication is a U.S. IP address and the other side is located abroad.”

Netflow data creates a picture of traffic flow and volume across a network. DNS records relate to when a user looks up a particular domain, and a system then converts that text into the specific IP address for a computer to understand; essentially a form of internet browsing history.
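A tiny, entirely hypothetical example shows why DNS query records amount to browsing history: strip away packet contents and you still get a per-person timeline of which sites were looked up and when.

```python
# Entirely hypothetical resolver log: (timestamp, client IP, queried domain).
# No payloads, no URLs -- and it is still, in effect, a browsing history.
from collections import defaultdict

dns_records = [
    ("2021-05-01T09:02:11", "203.0.113.7", "mail.example.com"),
    ("2021-05-01T09:02:40", "203.0.113.7", "news.example.org"),
    ("2021-05-01T09:05:03", "203.0.113.7", "clinic.example.net"),
    ("2021-05-01T09:06:17", "203.0.113.9", "jobsearch.example.com"),
]

history = defaultdict(list)
for ts, client, domain in dns_records:
    history[client].append((ts, domain))

for client, visits in sorted(history.items()):
    print(client)
    for ts, domain in visits:
        print(f"  {ts}  {domain}")
```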

Wyden’s new letter to Austin urging the DoD to release that answer and others says “Information should only be classified if its unauthorized disclosure would cause damage to national security. The information provided by DoD in response to my questions does not meet that bar.”

[…]

“Other than DIA, are any DoD components buying and using without a court order location data collected from phones located in the United States?” one of Wyden’s questions reads. The answer to that is one that Wyden is urging the DoD to release.

The DIA memo said the agency believes it does not require a warrant to obtain such information. Following this, Wyden also asked the DoD which other DoD components have adopted a similar interpretation of the law. One response said that each component is itself responsible to make sure they follow the law.

Wyden is currently proposing a new piece of legislation called The Fourth Amendment Is Not For Sale Act which would force some agencies to obtain a warrant for location and other data.

[…]

Source: Pentagon Surveilling Americans Without a Warrant, Senator Reveals

Facebook Ordered to Stop German WhatsApp Users’ Data Collection

Facebook Inc. was ordered to stop collecting German users’ data from its WhatsApp unit, after a regulator in the nation said the company’s attempt to make users agree to the practice in its updated terms isn’t legal.

Johannes Caspar, who heads Hamburg’s privacy authority, issued a three-month emergency ban, prohibiting Facebook from continuing with the data collection. He also asked a panel of European Union data regulators to take action and issue a ruling across the 27-nation bloc. The new WhatsApp terms enabling the data scoop are invalid because they are intransparent, inconsistent and overly broad, he said.

“The order aims to secure the rights and freedoms of millions of users which are agreeing to the terms Germany-wide,” Caspar said in a statement on Tuesday. “We need to prevent damage and disadvantages linked to such a black-box-procedure.”

The order strikes at the heart of Facebook’s business model and advertising strategy. It echoes a similar and contested step by Germany’s antitrust office attacking the network’s habit of collecting data about what users do online and merging the information with their Facebook profiles. That trove of information allows ads to be tailored to individual users — creating a cash cow for Facebook.

Facebook’s WhatsApp unit called Caspar’s claims “wrong” and said the order won’t stop the roll-out of the new terms. The regulator’s action is “based on a fundamental misunderstanding” of the update’s purpose and effect, the company said in an emailed statement.

The U.S. tech giant has faced global criticism over the new terms that WhatsApp users are required to accept by May 15. Caspar said Facebook may already be wrongfully handling data and said it’s important to prevent misuse of the information to influence the German national election in September.

Source: Facebook Ordered to Stop German WhatsApp Users’ Data Collection – Bloomberg

Justice Department Quietly Seized Washington Post Reporters’ Phone Records During Trump Era

The Department of Justice quietly seized phone records and tried to obtain email records for three Washington Post reporters, ostensibly over their coverage of then-U.S. Attorney General Jeff Sessions and Russia’s role in the 2016 presidential election, according to officials and government letters reviewed by the Post.

Justice Department regulations typically mandate that news organizations be notified when it subpoenas such records. However, though the Trump administration OK’d the decision, officials apparently left the notification part for the Biden administration to deal with. I guess they just never got around to it. Probably too busy inspiring an insurrection and trying to overthrow the presidential election.

In three separate letters dated May 3 addressed to reporters Ellen Nakashima, Greg Miller, and former reporter Adam Entous, the Justice Department wrote they were “hereby notified that pursuant to legal process the United States Department of Justice received toll records associated with the following telephone numbers for the period from April 15, 2017 to July 31, 2017,” according to the Post. Listed were Miller’s work and cellphone numbers, Entous’ cellphone number, and Nakashima’s work, cellphone, and home phone numbers. These records included all calls to and from the phones as well as how long each call lasted but did not reveal what was said.

According to the letters, the Post reports that prosecutors also secured a court order to seize “non content communications records” for the reporters’ email accounts, which would disclose who emailed whom and when the emails were sent but not their contents. However, officials ultimately did not obtain these records, the outlet said.

[…]

“We are deeply troubled by this use of government power to seek access to the communications of journalists,” said the Post’s acting executive editor Cameron Barr. “The Department of Justice should immediately make clear its reasons for this intrusion into the activities of reporters doing their jobs, an activity protected under the First Amendment.”

Frustratingly, the letters apparently don’t go into why the Department of Justice seized this data. A department spokesperson told the outlet that the decision to do so was made in 2020 during the Trump administration. (It’s worth noting that former President Donald Trump has made it crystal clear that he despises news media and the government leakers that provide them their scoops.)

Based on the time period cited in the letters and what the reporters covered during those months, the Post speculates that their investigations into Sessions and Russian interference could be why the department wanted to get its hands on their phone data.

[…]

Source: Justice Department Quietly Seized Washington Post Reporters’ Phone Records During Trump Era

WhatsApp’s privacy policy – not accepting will slowly kill your functionality and then delete your history

After facing international backlash over impending updates to its privacy policy, WhatsApp has ever-so-slightly backtracked on the harsh consequences it initially planned for users who don’t accept them—but not entirely.

In an update to the company’s FAQ page, WhatsApp clarifies that no users will have their accounts deleted or instantly lose app functionality if they don’t accept the new policies. It’s a step back from what WhatsApp had been telling users up until this point. When this page was first posted back in February, it specifically told users that those who don’t accept the platform’s new policies “won’t have full functionality” until they do. The threat of losing functionality is still there, but it won’t be automatic.

“For a short time, you’ll be able to receive calls and notifications, but won’t be able to read or send messages from the app,” WhatsApp wrote at the time. While the deadline to accept was initially early February, the blowback the company got from, well, just about everyone, caused the deadline to be postponed until May 15—this coming Saturday.

After that, folks that gave the okay to the new policy won’t notice any difference to their daily WhatsApp experience, and neither will the people that didn’t—at least at first. “After a period of several weeks, the reminder [to accept] people receive will eventually become persistent,” WhatsApp wrote, adding that users getting these “persistent” reminders will see their app stymied pretty significantly: For a “few weeks,” users won’t be able to access their chat lists, but will be able to answer incoming phone and video calls made over WhatsApp. After that grace period, WhatsApp will stop sending messages and calls to your phone entirely (until you accept).

[…]

It’s worth mentioning here that if you keep the app installed but still refuse to accept the policy for whatever reason, WhatsApp won’t outright delete your account because of that. That said, WhatsApp will probably delete your account due to “inactivity” if you don’t connect for 120 days, as is WhatsApp policy.

[…]

While the company has done the bare minimum in explaining what this privacy policy update actually means, the company hasn’t done much to assuage the concerns of lawyers, lawmakers, or really anyone else. And it doesn’t look like these new “reminders” will put them at ease, either.

Source: WhatsApp’s New Update: What It Means for Your Account

TV maker Skyworth under fire for excessive data collection that users call spying whilst China clamps down on user tracking

Chinese television maker Skyworth has issued an apology after a consumer found that his set was quietly collecting a wide range of private data and sending it to a Beijing-based analytics company without his consent.

A network traffic analysis revealed that a Skyworth smart TV scanned for other devices connected to the same local network every 10 minutes and gathered data that included device names, IP addresses, network latency and even the names of other Wi-Fi networks within range, according to a post last week on the Chinese developer forum V2EX.

The data was sent to the Beijing-based firm Gozen Data, the forum user said. Gozen is a data analytics company that specialises in targeted advertising on smart TVs, and it calls itself China’s first “home marketing company empowered by big data centred on family data”.
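For a sense of what that kind of probe looks like, here’s a rough sketch – not Gozen’s or Skyworth’s actual code, and the subnet and timing are assumptions based on the description above. Any device on your LAN can enumerate its neighbours, grab their hostnames, measure latency, and then phone the results home:

```python
# Illustration of the kind of local-network scan described above -- not the actual
# Skyworth/Gozen code. Pings a hypothetical home /24 subnet (Linux ping flags),
# records which neighbours answer, their hostnames and latency, and repeats.
import socket
import subprocess
import time

SUBNET = "192.168.1"        # assumption: a typical home LAN

def sweep() -> list[dict]:
    found = []
    for host in range(1, 255):
        ip = f"{SUBNET}.{host}"
        start = time.monotonic()
        alive = subprocess.run(["ping", "-c", "1", "-W", "1", ip],
                               capture_output=True).returncode == 0
        if not alive:
            continue
        latency_ms = round((time.monotonic() - start) * 1000, 1)
        try:
            name = socket.gethostbyaddr(ip)[0]   # device name, if reverse DNS resolves
        except socket.herror:
            name = None
        found.append({"ip": ip, "name": name, "latency_ms": latency_ms})
    return found

while True:                  # the TV reportedly repeated its scan every 10 minutes
    print(sweep())
    time.sleep(600)
```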

[…]

“Isn’t this already the criminal offence of spying on people?” asked one user on Sina.com, a Chinese financial news portal. “Whom will the collected data be sold to, and who is the end user of this data?”

The reaction online eventually prompted Skyworth to respond.

The Shenzhen-based TV and set-top box maker issued a statement on April 27, saying it had ended its “cooperation” with Gozen and demanded the firm delete all its “illegally” collected data. Skyworth also said it had stopped using the Gozen app on its televisions and was looking into the issue.

Gozen issued a statement on its website on the same day, saying its Gozen Data Android app could be disabled on Skyworth TVs, but it did not address the likelihood that users would be aware of this functionality. The company also apologised for “causing user concerns about privacy and security”.

On its official WeChat account, Gozen said in a post from 2019 that it has been working with Skyworth since 2014. Its latest post, which included its apology, said the company collected data for viewership research that includes “television ratings for households and individuals, viewership analysis, advertising analysis and optimisation”. Neither company provided information on the scope and depth of the data collection.

[…]

The revelations about Skyworth and Gozen come amid a national crackdown on the rampant collection and use of user data. Beijing recently introduced new regulations for protecting personal data and curbing its collection through mobile apps.

New rules introduced in March define for the first time personal information considered “necessary” for apps in 39 different categories, including messaging and e-commerce. Users should be able to decline to provide data that is not necessary for an app to function, according to the new rules. Users of live-streaming and short-video apps, for example, should be able to use such apps without providing any personal information.

[…]

There have been no reports that Skyworth or Gozen are being investigated. Still, the disclosure and corporate statements have fanned fears among users in China, where Skyworth was the third biggest TV brand by sales volume in 2020, behind Xiaomi and Hisense, making up more than 13 per cent of the market. Globally, the company was the fifth-largest TV maker, according to data from Trendforce, behind Samsung Electronics, LG Electronics, TCL and Hisense.

Source: Chinese TV maker Skyworth under fire for excessive data collection that users call spying | South China Morning Post