Australia’s spy agencies caught collecting COVID-19 app data

Australia’s intelligence agencies have been caught “incidentally” collecting data from the country’s COVIDSafe contact-tracing app in the first six months after its launch, a government watchdog has found.

The report, published Monday by the Australian government’s inspector general for the intelligence community, which oversees the government’s spy and eavesdropping agencies, said the app data was scooped up “in the course of the lawful collection of other data.”

But the watchdog said that there was “no evidence” that any agency “decrypted, accessed or used any COVID app data.”

Incidental collection is a common term used by spies to describe data that was not deliberately targeted but was swept up as part of a wider collection effort. This kind of collection isn’t accidental so much as a byproduct: when spy agencies tap into fiber-optic cables, for example, they ingest an enormous firehose of data. An Australian government spokesperson told one outlet, which first reported the news, that incidental collection can also happen as a result of the “execution of warrants.”

The report did not say when the incidental collection stopped, but noted that the agencies were “taking active steps to ensure compliance” with the law, and that the data would be “deleted as soon as practicable,” without setting a firm date.

For some, the fear that a government spy agency could access COVID-19 contact-tracing data was the worst possible outcome.

[…]

Source: Australia’s spy agencies caught collecting COVID-19 app data | TechCrunch

Amazon’s ad-hoc Ring, Echo mesh network can mooch off your neighbors’ Wi-Fi if needed – and it’s opt-out

Amazon is close to launching Sidewalk – its ad-hoc wireless network for smart-home devices that taps into people’s Wi-Fi – and it is pretty much an opt-out affair.

The gist of Sidewalk is this: nearby Amazon gadgets, regardless of who owns them, can automatically organize themselves into their own private wireless network mesh, communicating primarily using Bluetooth Low Energy over short distances, and 900MHz LoRa over longer ranges.

At least one device in a mesh will likely be connected to the internet via someone’s Wi-Fi, and so, every gadget in the mesh can reach the ‘net via that bridging device. This means all the gadgets within a mesh can be remotely controlled via an app or digital assistant, either through their owners’ internet-connected Wi-Fi or by going through a suitable bridge in the mesh. If your internet goes down, your Amazon home security gizmo should still be reachable, and send out alerts, via the mesh.

It also means if your neighbor loses broadband connectivity, their devices in the Sidewalk mesh can still work over the ‘net by routing through your Sidewalk bridging device and using your home ISP connection.

[…]

Amazon Echoes, Ring Floodlight Cams, and Ring Spotlight Cams will be the first Sidewalk bridging devices as well as Sidewalk endpoints. The internet giant hopes to encourage third-party manufacturers to produce equipment that is also Sidewalk compatible, extending meshes everywhere.

Crucially, it appears Sidewalk is opt-out for those who already have the hardware, and will be opt-in for those buying new gear.

[…]

if you already have, say, an Amazon Ring, it will soon get a software update that will automatically enable Sidewalk connectivity, and you’ll get an email explaining how to switch that off. When powering up a new gizmo, you’ll at least get the chance to opt in or out.

[…]

We’re told Sidewalk will only sip your internet connection rather than hog it, limiting itself to half a gigabyte a month. This policy appears to live in hope that people aren’t on stingy monthly data caps.

[…]

Just don’t forget that Ring and the police, in the US at least, have a rather cosy relationship. While Amazon stresses that Ring owners are in control of the footage recorded by their camera-fitted doorbells, homeowners are often pressured into turning their equipment into surveillance systems for the cops.

Source: Amazon’s ad-hoc Ring, Echo mesh network can mooch off your neighbors’ Wi-Fi if needed – and it’s opt-out • The Register

The ones who brought you Let’s Encrypt, bring you: Tools for gathering anonymized app usage metrics from netizens

The Internet Security Research Group (ISRG) has a plan to allow companies to collect information about how people are using their products while protecting the privacy of those generating the data.

Today, the California-based non-profit, which operates Let’s Encrypt, introduced Prio Services, a way to gather online product metrics without compromising the personal information of product users.

“Applications such as web browsers, mobile applications, and websites generate metrics,” said Josh Aas, founder and executive director of ISRG, and Tim Geoghegan, site reliability engineer, in an announcement. “Normally they would just send all of the metrics back to the application developer, but with Prio, applications split the metrics into two anonymized and encrypted shares and upload each share to different processors that do not share data with each other.”
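The share-splitting idea is easier to see in code. Below is a minimal sketch of the additive secret sharing at the core of that description, assuming a toy integer metric and two processors; real Prio additionally attaches a proof (a SNIP) so each processor can verify a submission is well-formed without ever seeing it.

```python
import secrets

MODULUS = 2**61 - 1  # a public prime; Prio works over a finite field

def split_metric(value: int) -> tuple[int, int]:
    """Split one metric into two additive shares.

    Each share alone is a uniformly random field element and reveals
    nothing; only their sum modulo MODULUS reconstructs the value.
    """
    share_a = secrets.randbelow(MODULUS)
    share_b = (value - share_a) % MODULUS
    return share_a, share_b

# Each client splits its metric and sends one share to each processor.
clients = [1, 0, 1, 1, 0]  # e.g. "did the user enable feature X?"
shares = [split_metric(v) for v in clients]

# Each processor sums the shares it received -- never seeing raw values.
sum_a = sum(s[0] for s in shares) % MODULUS
sum_b = sum(s[1] for s in shares) % MODULUS

# Combining only the two *aggregates* yields the total count.
print((sum_a + sum_b) % MODULUS)  # -> 3
```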

Prio is described in a 2017 research paper [PDF] as “a privacy-preserving system for the collection of aggregate statistics.” The system was developed by Henry Corrigan-Gibbs, then a Stanford doctoral student and currently an MIT assistant professor, and Dan Boneh, a professor of computer science and electrical engineering at Stanford.

Prio implements a cryptographic approach called secret-shared non-interactive proofs (SNIPs). According to its creators, it handles data only 5.7x slower than systems with no privacy protection. That’s considerably better than the competition: client-generated non-interactive zero-knowledge proofs of correctness (NIZKs) are 267x slower than unprotected data processing and privacy methods based on succinct non-interactive arguments of knowledge (SNARKs) clock in at three orders of magnitude slower.

“With Prio, you can get both: the aggregate statistics needed to improve an application or service and maintain the privacy of the people who are providing that data,” said Boneh in a statement. “This system offers a robust solution to two growing demands in our tech-driven economy.”

In 2018 Mozilla began testing Prio to gather Firefox telemetry data and found the cryptographic scheme compelling enough to make it the basis of its Firefox Origin Telemetry service.

[…]

Source: The ones who brought you Let’s Encrypt, bring you: Tools for gathering anonymized app usage metrics from netizens • The Register

Google Will Make It a bit Easier to Turn Off Smart Features which track you, Slightly Harder for Regulators to Break Up Google

Soon, Google will present you with a clear choice to disable smart features, like Google Assistant reminders to pay your bills and predictive text in Gmail. Whether you like the Gmail mindreader function that autofills “all the best” and “reaching out,” or have long dreaded the arrival of the machine staring back from the void: it’s your world, Google’s just living in it. According to Google.

We’ve always been able to disable these functions if we bothered hunting through account settings. But “in the coming weeks” Google will show a new blanket setting to “turn off smart features,” which will disable features like Smart Compose and Smart Reply in apps like Gmail; the second half of the same prompt will control whether additional Google products—like Maps or Assistant, for example—are allowed to be personalized based on data from Gmail, Meet, and Chat.

Google writes in its blog post about the new-ish settings that humans are not looking at your emails to enable smart features, and Google ads are “not based on your personal data in Gmail,” something CEO Sundar Pichai has likewise said time and again. Google claims to have stopped that practice in 2017, although the following year the Wall Street Journal reported that third-party app developers had freely perused inboxes with little oversight. (When asked whether this is still a problem, the spokesperson pointed us to Google’s 2018 effort to tighten security.)

A Google spokesperson emphasized that the company only uses email contents for security purposes like filtering spam and phishing attempts.

These personalization changes aren’t so much about tightening security as they are another informed-consent defense Google can use to repel the current regulatory siege being waged against it by lawmakers. It has expanded incognito mode to Maps and added auto-deletion of data in Location History, Web & App Activity, and YouTube history (though only after a period of a few months).

Inquiries in the U.S. and EU have found that Google’s privacy settings have historically presented the appearance of privacy, rather than privacy itself. After a 2018 AP article exposed the extent of Google’s location data harvesting, an investigation found that turning location off in Android was no guarantee that Google wouldn’t collect location data (though Google has denied this). Plaintiffs in a $5 billion class-action lawsuit filed this summer alleged that “incognito mode” in Chrome didn’t prevent Google from capturing and sharing their browsing history. And last year, French regulators fined Google nearly $57 million for violating the General Data Protection Regulation (GDPR) by allegedly burying privacy controls beneath five or six layers of settings. (When asked, the spokesperson said Google has no additional comment on these cases.)

So this is nice, and also Google’s announcement reads as a letter to regulators. “This new setting is designed to reduce the work of understanding and managing [a choice over how data is processed], in view of what we’ve learned from user experience research and regulators’ emphasis on comprehensible, actionable user choices over data.”

Source: Google Will Make It Easier to Turn Off Smart Features

Apple hits back at European activist lawsuit against unauthorised tracking installs – says it doesn’t use it… but 3rd parties do

The group, led by campaigner Max Schrems, filed complaints with data protection watchdogs in Germany and Spain alleging that the tracking tool illegally enabled the $2 trillion U.S. tech giant to store users’ data without their consent.

Apple directly rebutted the claims filed by Noyb, the digital rights group founded by Schrems, saying they were “factually inaccurate and we look forward to making that clear to privacy regulators should they examine the complaint”.

Schrems is a prominent figure in Europe’s digital rights movement that has resisted intrusive data-gathering by Silicon Valley’s tech platforms. He has fought two cases against Facebook, winning landmark judgments that forced the social network to change how it handles user data.

Noyb’s complaints were brought against Apple’s use of a tracking code, known as the Identifier for Advertisers (IDFA), that is automatically generated on every iPhone when it is set up.

The code, stored on the device, makes it possible to track a user’s online behaviour and consumption preferences – vital in allowing companies to send targeted adverts.

“Apple places codes that are comparable to a cookie in its phones without any consent by the user. This is a clear breach of European Union privacy laws,” Noyb lawyer Stefano Rossetti said.

Rossetti referred to the EU’s e-Privacy Directive, which requires a user’s consent before such information is stored on or accessed from a device.

Apple said in response that it “does not access or use the IDFA on a user’s device for any purpose”.

It said its aim was to protect the privacy of its users and that the latest release of its iOS 14 operating system gave users greater control over whether apps could link with third parties for the purposes of targeted advertising.

Source: Apple hits back at European activist complaints against tracking tool | Reuters

The complaint against Apple is that the IDFA is set at all without consent from the user. Whether Apple itself accesses it is not the point; the point is that unspecified third parties (advertisers, hackers, governments, etc.) can.

How the U.S. Military Buys Location Data from Ordinary Apps

The U.S. military is buying the granular movement data of people around the world, harvested from innocuous-seeming apps, Motherboard has learned. The most popular app among a group Motherboard analyzed connected to this sort of data sale is a Muslim prayer and Quran app that has more than 98 million downloads worldwide. Others include a Muslim dating app, a popular Craigslist app, an app for following storms, and a “level” app that can be used to help, for example, install shelves in a bedroom.

Through public records, interviews with developers, and technical analysis, Motherboard uncovered two separate, parallel data streams that the U.S. military uses, or has used, to obtain location data. One relies on a company called Babel Street, which creates a product called Locate X. U.S. Special Operations Command (USSOCOM), a branch of the military tasked with counterterrorism, counterinsurgency, and special reconnaissance, bought access to Locate X to assist on overseas special forces operations. The other stream is through a company called X-Mode, which obtains location data directly from apps, then sells that data to contractors, and by extension, the military.

The news highlights the opaque location data industry and the fact that the U.S. military, which has infamously used other location data to target drone strikes, is purchasing access to sensitive data. Many of the users of apps involved in the data supply chain are Muslim, which is notable considering that the United States has waged a decades-long war on predominantly Muslim terror groups in the Middle East, and has killed hundreds of thousands of civilians during its military operations in Pakistan, Afghanistan, and Iraq. Motherboard does not know of any specific operations in which this type of app-based location data has been used by the U.S. military.

[…]

In March, tech publication Protocol first reported that U.S. law enforcement agencies such as Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE) were using Locate X. Motherboard then obtained an internal Secret Service document confirming the agency’s use of the technology. Some government agencies, including CBP and the Internal Revenue Service (IRS), have also purchased access to location data from another vendor called Venntel.

“In my opinion, it is practically certain that foreign entities will try to leverage (and are almost certainly actively exploiting) similar sources of private platform user data. I think it would be naïve to assume otherwise,” Mark Tallman, assistant professor at the Department of Emergency Management and Homeland Security at the Massachusetts Maritime Academy, told Motherboard in an email.

THE SUPPLY CHAIN

Some companies obtain app location data through bidstream data, which is information gathered from the real-time bidding that occurs when advertisers pay to insert their adverts into peoples’ browsing sessions. Firms also often acquire the data from software development kits (SDKs).

[…]

In a recent interview with CNN, X-Mode CEO Joshua Anton said the company tracks 25 million devices inside the United States every month, and 40 million elsewhere, including in the European Union, Latin America, and the Asia-Pacific region. X-Mode previously told Motherboard that its SDK is embedded in around 400 apps.

In October the Australian Competition & Consumer Commission published a report about data transfers by smartphone apps. A section of that report included the endpoint—the URL some apps use—to send location data back to X-Mode. Developers of the Guardian app, which is designed to protect users from the transfer of location data, also published the endpoint. Motherboard then used that endpoint to discover which specific apps were sending location data to the broker.

Motherboard used network analysis software to observe both the Android and iOS versions of the Muslim Pro app sending granular location data to the X-Mode endpoint multiple times. Will Strafach, an iOS researcher and founder of Guardian, said he also saw the iOS version of Muslim Pro sending location data to X-Mode.

The data transfer also included the name of the wifi network the phone was currently connected to, a timestamp, and information about the phone such as its model, according to Motherboard’s tests.
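Motherboard doesn’t name its analysis tooling, but a mitmproxy addon is one plausible way to reproduce this kind of test: route a handset’s traffic through an intercepting proxy and flag anything sent to a known tracker host. The hostname below is a placeholder, not the real X-Mode endpoint, which the excerpt doesn’t reproduce.

```python
# flag_tracker.py -- run with: mitmproxy -s flag_tracker.py
# Sketch of the network analysis described above: flag any request a
# test device makes to a known tracker endpoint.
from mitmproxy import http

TRACKER_HOSTS = {"tracker.example.com"}  # placeholder, not the real endpoint

def request(flow: http.HTTPFlow) -> None:
    if flow.request.pretty_host in TRACKER_HOSTS:
        print(f"[!] {flow.request.method} {flow.request.pretty_url}")
        # The request body is where location, wifi network name,
        # timestamp and device model would appear, per the tests above.
        print(flow.request.get_text())
```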

[…]


Source: How the U.S. Military Buys Location Data from Ordinary Apps

Your Computer isn’t Yours – Apple edition – how is it snooping on you, why can’t you start apps when their server is down

It’s here. It happened. Did you notice?

I’m speaking, of course, of the world that Richard Stallman predicted in 1997. The one Cory Doctorow also warned us about.

On modern versions of macOS, you simply can’t power on your computer, launch a text editor or eBook reader, and write or read, without a log of your activity being transmitted and stored.

It turns out that in the current version of macOS, the OS sends Apple a hash (unique identifier) of each and every program you run, when you run it. Lots of people didn’t realize this, because it’s silent and invisible and it fails instantly and gracefully when you’re offline, but today the server got really slow and it didn’t hit the fail-fast code path, and everyone’s apps failed to open if they were connected to the internet.

Because it does this using the internet, the server sees your IP, of course, and knows what time the request came in. An IP address allows for coarse, city-level and ISP-level geolocation, and allows for a table that has the following headings:

Date, Time, Computer, ISP, City, State, Application Hash

Apple (or anyone else) can, of course, calculate these hashes for common programs: everything in the App Store, the Creative Cloud, Tor Browser, cracking or reverse engineering tools, whatever.
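As a rough illustration of what such a lookup table could look like, here is a sketch that fingerprints local binaries, assuming a plain SHA-256 file digest stands in for whatever identifier is actually transmitted:

```python
import hashlib
from pathlib import Path

def app_fingerprint(path: str) -> str:
    """Hash an executable the way a lookup-table builder might."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Precompute fingerprints for well-known apps; any observed hash can
# then be reversed to an application name by simple table lookup.
known = {}
for binary in Path("/Applications").glob("*.app/Contents/MacOS/*"):
    try:
        known[app_fingerprint(str(binary))] = binary.name
    except (PermissionError, OSError):
        pass
```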

This means that Apple knows when you’re at home. When you’re at work. What apps you open there, and how often. They know when you open Premiere over at a friend’s house on their Wi-Fi, and they know when you open Tor Browser in a hotel on a trip to another city.

“Who cares?” I hear you asking.

Well, it’s not just Apple. This information doesn’t stay with them:

  1. These OCSP requests are transmitted unencrypted. Everyone who can see the network can see these, including your ISP and anyone who has tapped their cables.
  2. These requests go to a third-party CDN run by another company, Akamai.
  3. Since October 2012, Apple has been a partner in the US military intelligence community’s PRISM spying program, which grants the US federal police and military unfettered access to this data without a warrant, any time they ask for it. In the first half of 2019 they did this over 18,000 times, and another 17,500+ times in the second half of 2019.

This amounts to a tremendous trove of data about your life and habits, and allows anyone possessing all of it to identify your movement and activity patterns. For some people, it can even pose a physical danger.

Now, it’s been possible up until today to block this sort of stuff on your Mac using a program called Little Snitch (really, the only thing keeping me using macOS at this point). In the default configuration, it blanket allows all of this computer-to-Apple communication, but you can disable those default rules and go on to approve or deny each of these connections, and your computer will continue to work fine without snitching on you to Apple.

The version of macOS that was released today, 11.0, also known as Big Sur, has new APIs that prevent Little Snitch from working the same way. The new APIs don’t permit Little Snitch to inspect or block any OS level processes. Additionally, the new rules in macOS 11 even hobble VPNs so that Apple apps will simply bypass them.

Mozilla *privacy not included tech buyers’ guide rates gadgets on a creepy scale

This is a list of 130 Smart home gadgets, fitness trackers, toys and more, rated for their privacy & security. It’s a large list and shows you how basically anything by big tech is pretty creepy – anything by Amazon and Facebook is super creepy, Google pretty creepy, Apple only creepy. There are a few surprises, like Moleskine being super creepy. Fitness machinery is pretty bad as are some coffee makers… Nintendo Switches and PS5s (surprisingly) aren’t creepy at all…

Source: Mozilla – *privacy not included

Six Reasons Why Google Maps Is the Creepiest App On Your Phone

VICE has highlighted six reasons why Google Maps is the creepiest app on your phone. An anonymous reader shares an excerpt from the report: 1. Google Maps Wants Your Search History: Google’s “Web & App Activity” settings describe how the company collects data, such as user location, to create a faster and “more personalized” experience. In plain English, this means that every single place you’ve looked up in the app — whether it’s a strip club, a kebab shop or your moped-riding drug dealer’s location — is saved and integrated into Google’s search engine algorithm for a period of 18 months. Google knows you probably find this creepy. That’s why the company uses so-called “dark patterns” — user interfaces crafted to coax us into choosing options we might not otherwise, for example by highlighting an option with certain fonts or brighter colors.

2. Google Maps Limits Its Features If You Don’t Share Your Search History: If you open your Google Maps app, you’ll see a circle in the top right corner that signifies you’re logged in with your Google account. That’s not necessary, and you can simply log out. Of course, the log out button is slightly hidden, but can be found like this: click on the circle > Settings > scroll down > Log out of Google Maps. Unfortunately, Google Maps won’t let you save frequently visited places if you’re not logged into your Google account. If you choose not to log in, when you click on the search bar you get a “Tired of typing?” button, suggesting you sign in, and coaxing you towards more data collection.

3. Google Maps Can Snitch On You: Another problematic feature is the “Google Maps Timeline,” which “shows an estimate of places you may have been and routes you may have taken based on your Location History.” With this feature, you can look at your personal travel routes on Google Maps, including the means of transport you probably used, such as a car or a bike. The obvious downside is that your every move is known to Google, and to anyone with access to your account. And that’s not just hackers — Google may also share data with government agencies such as the police. […] If your “Location History” is on, your phone “saves where you go with your devices, even when you aren’t using a specific Google service,” as is explained in more detail on this page. This feature is useful if you lose your phone, but also turns it into a bona fide tracking device.

4. Google Maps Wants to Know Your Habits: Google Maps often asks users to share a quick public rating. “How was Berlin Burger? Help others know what to expect,” suggests the app after you’ve picked up your dinner. This feels like a casual, lighthearted question and relies on the positive feeling we get when we help others. But all this info is collected in your Google profile, making it easier for someone to figure out if you’re visiting a place briefly and occasionally (like on holiday) or if you live nearby.

5. Google Maps Doesn’t Like It When You’re Offline: Remember GPS navigation? It might have been clunky and slow, but it’s a good reminder that you don’t need to be connected to the internet to be directed. In fact, other apps offer offline navigation. In Google Maps, you can download maps, but offline navigation is only available for cars. It seems fairly unlikely the tech giant can’t figure out how to direct pedestrians and cyclists without internet.

6. Google Makes It Seem Like This Is All for Your Own Good: “Providing useful, meaningful experiences is at the core of what Google does,” the company says on its website, adding that knowing your location is important for this reason. They say they use this data for all kinds of useful things, like “security” and “language settings” — and, of course, selling ads. Google also sells advertisers the possibility to evaluate how well their campaigns reached their target (that’s you!) and how often people visited their physical shops “in an anonymized and aggregated manner”. But only if you opt in (or you forget to opt out).

Source: Six Reasons Why Google Maps Is the Creepiest App On Your Phone – Slashdot

It Took Just 5 Minutes Of Movement Data To Identify ‘Anonymous’ VR Users

As companies and governments increasingly hoover up our personal data, a common refrain to keep people from worrying is the claim that nothing can go wrong because the data itself is “anonymized” — or stripped of personal identifiers like social security numbers. But time and time again, studies have shown how this really is cold comfort, given it takes only a little effort to pretty quickly identify a person based on access to other data sets. Yet most companies, many privacy policy folk, and even government officials still like to act as if “anonymizing” your data means something.

The latest case in point: new research out of Stanford (first spotted by the German website Mixed) found that it took researchers just five minutes of examining the movement data of VR users to identify them in the real world. The paper says participants using an HTC Vive headset and controllers watched five 20-second clips from a randomized set of 360-degree videos, then answered a set of questions in VR, with their movements tracked as part of a separate research paper.

The movement data (including height, posture, head movement speed, and what participants looked at and for how long) was then plugged into three machine learning algorithms, which, from a pool of 511 participants, were able to correctly identify 95% of users “when trained on less than 5 min of tracking data per person.” The researchers went on to note that while VR headset makers (like every other company) assure users that “de-identified” or “anonymized” data would protect their identities, that’s really not the case:

“In both the privacy policy of Oculus and HTC, makers of two of the most popular VR headsets in 2020, the companies are permitted to share any de-identified data,” the paper notes. “If the tracking data is shared according to rules for de-identified data, then regardless of what is promised in principle, in practice taking one’s name off a dataset accomplishes very little.”
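To see why this works, here is an illustrative sketch — emphatically not the Stanford pipeline — of the general approach: give each simulated user a characteristic movement-feature vector (stand-ins for height, posture, gaze dwell times) and even an off-the-shelf classifier re-identifies them from a handful of samples. Everything below is synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, samples_per_user, n_features = 50, 40, 12

# Each user gets a characteristic mean vector; their individual
# samples scatter around it with modest noise.
user_profiles = rng.normal(size=(n_users, n_features))
X = np.repeat(user_profiles, samples_per_user, axis=0) + \
    rng.normal(scale=0.5, size=(n_users * samples_per_user, n_features))
y = np.repeat(np.arange(n_users), samples_per_user)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"re-identification accuracy: {clf.score(X_te, y_te):.0%}")
```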

If you don’t like this study, there’s just an absolute ocean of research over the last decade making the same point: “anonymized” or “de-identified” doesn’t actually mean “anonymous.” Researchers from the University of Washington and the University of California, San Diego, for example, found that they could identify drivers based on just 15 minutes’ worth of data collected from brake pedal usage alone. Researchers from Stanford and Princeton universities found that they could correctly identify an “anonymized” user 70% of the time just by comparing their browsing data to their social media activity.

[…]

Source: It Took Just 5 Minutes Of Movement Data To Identify ‘Anonymous’ VR Users | Techdirt

Police Will Pilot a Program to Live-Stream Amazon Ring Cameras

This is not a drill. Red alert: The police surveillance center in Jackson, Mississippi, will be conducting a 45-day pilot program to live stream the Amazon Ring cameras of participating residents.

Since Ring first made a splash in the private security camera market, we’ve been warning of its potential to undermine the civil liberties of its users and their communities. We’ve been especially concerned with Ring’s 1,000+ partnerships with local police departments, which facilitate bulk footage requests directly from users without oversight or having to acquire a warrant.

While people buy Ring cameras and put them on their front door to keep their packages safe, police use them to build comprehensive CCTV camera networks blanketing whole neighborhoods. This serves two purposes for police. First, it allows police departments to avoid the cost of buying surveillance equipment and to put that burden onto consumers by convincing them they need cameras to keep their property safe. Second, it evades the natural reaction of fear and distrust that many people would have if they learned police were putting up dozens of cameras on their block, one for every house.

Now, our worst fears have been confirmed. Police in Jackson, Mississippi, have started a pilot program that would allow Ring owners to patch the camera streams from their front doors directly to a police Real Time Crime Center. The footage from your front door includes you coming and going from your house, your neighbors taking out the trash, and the dog walkers and delivery people who do their jobs in your street. In Jackson, this footage can now be live streamed directly onto a dozen monitors scrutinized by police around the clock. Even if you refuse to allow your footage to be used that way, your neighbor’s camera pointed at your house may still be transmitting directly to the police.

[…]

Source: Police Will Pilot a Program to Live-Stream Amazon Ring Cameras | Electronic Frontier Foundation

Brave browser first to nix CNAME deception, the sneaky DNS trick used by marketers to duck privacy controls

The Brave web browser will soon block CNAME cloaking, a technique used by online marketers to defy privacy controls designed to prevent the use of third-party cookies.

The browser security model makes a distinction between first-party domains – those being visited – and third-party domains – those of suppliers of image assets, tracking code, and the like to the visited site. Many of the online privacy abuses over the years have come from third-party resources like scripts and cookies, which is why third-party cookies are now blocked by default in Brave, Firefox, Safari, and Tor Browser.
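As a rough sketch of the distinction browsers draw, a request counts as “third-party” when its registrable domain differs from the page’s. The naive comparison below ignores the Public Suffix List that real browsers consult, so it misclassifies domains like co.uk; it is only meant to show the idea.

```python
def registrable_domain(host: str) -> str:
    """Naive eTLD+1: real browsers use the Public Suffix List instead."""
    parts = host.lower().rstrip(".").split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def is_third_party(request_host: str, page_host: str) -> bool:
    return registrable_domain(request_host) != registrable_domain(page_host)

print(is_third_party("cdn.tracker.net", "news.example.com"))     # True
print(is_third_party("assets.example.com", "news.example.com"))  # False
```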

Microsoft Edge, meanwhile, has a tiered scheme that defaults to a “Balanced” setting, which blocks some third-party cookies. Google Chrome has implemented its SameSite cookie scheme as a prelude to its planned 2022 phase-out of third-party cookies, maybe.

While Google tries to win support for its various Privacy Sandbox proposals, which aim to provide marketers with ostensibly privacy-preserving alternatives to increasingly shunned third-party cookies, marketers have been relying on CNAME shenanigans to pass their third-party trackers off as first-party resources.

The developers behind open-source content blocking extension uBlock Origin implemented a defense against CNAME-based tracking in November and now Brave has done so as well.

CNAME by name, cookie by nature

In a blog post on Tuesday, Anton Lazarev, research engineer at Brave Software, and senior privacy researcher Peter Snyder explain that online tracking scripts may use canonical name DNS records, known as CNAMEs, to make associated third-party tracking domains look like they’re part of the first-party websites actually being visited.

They point to the site https://mathon.fr as an example, noting that without CNAME uncloaking, Brave blocks six requests for tracking scripts served by ad companies like Google, Facebook, Criteo, Sirdan, and Trustpilot.

But the page also makes four requests via a script hosted at a randomized path under the first-party subdomain 16ao.mathon.fr.

“Inspection outside of the browser reveals that 16ao.mathon.fr actually has a canonical name of et5.eulerian.net, meaning it’s a third-party script served by Eulerian,” observe Lazarev and Snyder.

When Brave 1.17 ships next month (currently available as a developer build), it will be able to uncloak the CNAME deception and block the Eulerian script.
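The uncloaking check itself is conceptually simple. The sketch below uses dnspython to follow a single CNAME hop and compare domains, whereas Brave performs the equivalent recursively inside its own DNS resolution; the mathon.fr names come from Brave’s example above, and the live records may have changed since the article was written.

```python
import dns.resolver

def uncloak(host: str) -> str:
    """Follow one CNAME hop; return the canonical name (or host itself)."""
    try:
        answer = dns.resolver.resolve(host, "CNAME")
        return str(answer[0].target).rstrip(".")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return host

canonical = uncloak("16ao.mathon.fr")
if not canonical.endswith("mathon.fr"):
    print(f"cloaked third party: 16ao.mathon.fr -> {canonical}")
    # a blocker would now apply its third-party rules to that domain
```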

Other browser vendors are planning related defenses. Mozilla has been working on a fix in Firefox since last November. And in August, Apple’s Safari WebKit team proposed a way to prevent CNAME cloaking from being used to bypass the seven-day cookie lifetime imposed by WebKit’s Intelligent Tracking Protection system.

Source: Brave browser first to nix CNAME deception, the sneaky DNS trick used by marketers to duck privacy controls • The Register

When you tell Chrome to wipe private data about you, it spares two websites from the purge: Google.com, YouTube

Google exempts its own websites from Chrome’s automatic data-scrubbing feature, allowing the ads giant to potentially track you even when you’ve told it not to.

Programmer Jeff Johnson noticed the unusual behavior, and this month documented the issue with screenshots. In his assessment of the situation, he noted that if you set up Chrome, on desktop at least, to automatically delete all cookies and so-called site data when you quit the browser, it deletes it all as expected – except your site data for Google.com and YouTube.com.

While cookies are typically used to identify you and store some of your online preferences when visiting websites, site data is on another level: it includes, among other things, a storage database in which a site can store personal information about you, on your computer, that can be accessed again by the site the next time you visit. Thus, while your Google and YouTube cookies may be wiped by Chrome, their site data remains on your computer, and it could, in future, be used to identify you.

Johnson noted that after he configured Chrome to wipe all cookies and site data when the application closed, everything was cleared as expected for sites like apple.com. Yet, the main Google search site and video service YouTube were allowed to keep their site data, though the cookies were gone. If Google chooses at some point to stash the equivalent of your Google cookies in the Google.com site data storage, they could be retrieved next time you visit Google, and identify you, even though you thought you’d told Chrome not to let that happen.

Ultimately, it potentially allows Google, and only Google, to continue tracking Chrome users who opted for some more privacy; something that is enormously valuable to the internet goliath in delivering ads. Many users set Chrome to automatically delete cookies-and-site-data on exit for that reason – to prevent being stalked around the web – even though it often requires them to log back into websites the next time they visit due to their per-session cookies being wiped.

Yet Google appears to have granted itself an exception. The situation recalls a similar issue over location tracking, where Google continued to track people’s location through their apps even when users actively selected the option to prevent that. Google had put the real option to start location tracking under a different setting that didn’t even include the word “location.”

In this case, “Clear cookies and site data when you quit Chrome” doesn’t actually mean what it says, at least not for Google.

There is a workaround: you can manually add “Google.com” and “YouTube.com” within the browser to a list of “Sites that can never use cookies.” In that case, no information, not even site data, is saved from those sites, which is all in all a little confusing.

[…]


Source: When you tell Chrome to wipe private data about you, it spares two websites from the purge: Google.com, YouTube • The Register

Thought the FBI were the only ones able to unlock encrypted phones? Pretty much every US cop can get the job done – and does

Never mind the Feds. American police forces routinely “circumvent most security features” in smartphones to extract mountains of personal information, according to a report that details the massive, ubiquitous cracking of devices by cops.

Two years of public records requests by Upturn, a Washington DC non-profit, have revealed that every one of the United States’ 50 largest police departments, as well as half of the largest sheriff’s offices and two-thirds of the largest prosecuting attorney’s offices, regularly use specialist hardware and software to access the contents of suspects’ handhelds. There isn’t a state in the Union that hasn’t got some advanced phone-cracking capabilities.

The report concludes that, far from modern phones being a bastion of privacy and security, they are in fact routinely rifled through for trivial crimes without a warrant in sight. In one case, the cops confiscated and searched the phones of two men who were caught arguing over a $70 debt in a McDonald’s.

In another, officers witnessed “suspicious behavior” in a Whole Foods grocery store parking lot and claimed to have smelt “the odor of marijuana” coming from a car. The car was stopped and searched, and the driver’s phone was seized and searched for “further evidence of the nature of the suspected controlled substance exchange.”

A third example saw police officers shoot and kill a man after he “ran from the driver’s side of the vehicle” during a traffic stop. They apparently discovered a small orange prescription pill container next to the victim, and tested the pills, which contained acetaminophen and fentanyl. They also discovered a phone in the empty car, and searched it for evidence related to “counterfeit Oxycodone” and “evidence relating to… motives for fleeing from the police.”

The report gives numerous other examples of phones taken from their owners and searched for evidence, without a warrant – many in cases where the value of the information was negligible such as cases involving graffiti, shoplifting, marijuana possession, prostitution, vandalism, car crashes, parole violations, petty theft, and public intoxication.

Not what you imagined

That is a completely different picture to the one, we imagine, most Americans assumed, particularly given the high legal protections afforded smartphones in recent high-profile court cases.

In 2018, the Supreme Court ruled that the government needs a warrant to access its citizens’ cellphone location data and talked extensively about a citizen’s expectation of privacy limiting “official intrusion” when it comes to smartphones.

In 2014, the court decided a warrant was required to search a mobile phone, and that the “reasonable expectation of privacy” that people have in their “physical movements” should extend to records stored by third parties. But the reality on the ground is that those grand words mean nothing if the cops decide they want to look through your phone.

The report drew on records from 44 law enforcement agencies across the US and covered 50,000 extractions of data from cellphones between 2015 and 2019, a figure that Upturn notes “represents a severe undercount” of the actual number of cellphone extractions.

[…]

They include banning the use of “consent searches,” where the police ask the owner if they can search their phone and then require no further approval to go through the device. “Courts pretend that ‘consent searches’ are voluntary, when they are effectively coerced,” the report argues, noting that most people are probably unaware that, by agreeing, they can have their phone’s entire contents downloaded and perused at will later on.

It also reckons that the argument that the contents of a phone are in “plain view” because a police officer can see a phone when at the scene of a crime, an important legal distinction that allows the police to search phones, is legally untenable because people carry their phones with them as a rule, and the contents are not themselves also visible – only the device itself.

The report also argues for more extensive audit logs of phone searches so there is a degree of accountability, particularly if evidence turned up is later used in court. And it argues for better and clearer data deletion rules, as well as more reporting requirements around phone searches by law enforcement.

It concludes: “For too long, public debate and discussion regarding these tools has been abstracted to the rarest and most sensational cases in which law enforcement cannot gain access to cellphone data. We hope that this report will help recenter the conversation regarding law enforcement’s use of mobile device forensic tools to the on-the-ground reality of cellphone searches today in the United States.”

Source: Thought the FBI were the only ones able to unlock encrypted phones? Pretty much every US cop can get the job done • The Register

UK test and trace data can be handed to police, reveals memorandum – that mission crept quickly

As if things were not going badly enough for the UK’s COVID-19 test and trace service, it now seems police will be able to access some test data, prompting fear that the disclosure could deter people who should have tests from coming forward.

As revealed in the Health Service Journal (paywalled), Department for Health and Social Care (DHSC) guidance describing how testing data will be handled was updated on Friday.

The memorandum of understanding between DHSC and National Police Chiefs’ Council said forces could be allowed to access test information that tells them if a “specific individual” has been told to self-isolate.

A failure to self-isolate after getting a positive COVID-19 test, or after being in contact with someone who has tested positive, could result in a police fine of £1,000, or even a £10,000 penalty for serial offenders or those seriously breaking the rules.

A Department of Health and Social Care spokesperson said: “It is a legal requirement for people who have tested positive for COVID-19 and their close contacts to self-isolate when formally notified to do so.

“The DHSC has agreed a memorandum of understanding with the National Police Chiefs Council to enable police forces to have access on a case-by-case basis to information that enables them to know if a specific individual has been notified to self-isolate.

[…]

The UK government’s emphasis should be on providing support to people – financial and otherwise – if they need to self-isolate, so that no one is deterred from coming forward for a test, the BMA spokesperson added.

The UK’s test and trace system, backed by £12bn in public money, was outsourced to Serco for £45m in June. Sitel is also a provider.

The service has had a bumpy ride to say the least. Earlier this month, it came to light that as many as 48,000 people were not informed they had come in close contact with people who had tested positive, as the service under-reported 15,841 novel coronavirus cases between 25 September and 2 October.

The use of Microsoft’s Excel spreadsheet program to transfer test results from labs to the health service for totalling was at the heart of the problem. A plausible explanation emerged that test results were automatically fetched in CSV format by Public Health England (PHE) from various commercial testing labs, and stored in rows in an older .XLS Excel format that limited the number of rows to 65,536 per spreadsheet, rather than the one-million-row limit offered by the modern .XLSX file format.
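A toy sketch of that failure mode: ingest lab rows into a sheet capped at the legacy limit and count what silently falls off the end. The row counts and CSV shape here are illustrative, not PHE’s actual pipeline.

```python
import csv
import io

XLS_MAX_ROWS = 65_536      # legacy .xls limit
XLSX_MAX_ROWS = 1_048_576  # modern .xlsx limit

def ingest(csv_text: str, max_rows: int) -> tuple[int, int]:
    """Return (rows kept, rows silently dropped) for a given cap."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    kept = rows[:max_rows]
    return len(kept), len(rows) - len(kept)

# Pretend one lab file holds 70,000 results.
fake_csv = "\n".join(f"case{i},positive" for i in range(70_000))
kept, lost = ingest(fake_csv, XLS_MAX_ROWS)
print(f"kept {kept}, silently lost {lost}")  # kept 65536, silently lost 4464
```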

But that was not the only misstep. It has emerged that people in line for a coronavirus test were sent to a site in Sevenoaks, Kent, where, in fact, no test centre existed, according to reports.

Source: UK test and trace data can be handed to police, reveals memorandum • The Register

Remember when Zoom was rumbled for lousy crypto? Six months later it says end-to-end is ready – but it’s not

The world’s plague-time video meeting tool of choice, Zoom, says it’s figured out how to do end-to-end encryption sufficiently well to offer users a tech preview.

News of the trial comes after April 2020 awkwardness that followed the revelation that Zoom was fibbing about its service using end-to-end encryption.

As we reported at the time, Zoom ‘fessed up but brushed aside criticism with a semantic argument about what “end-to-end” means.

“When we use the phrase ‘End-to-end’ in our other literature, it is in reference to the connection being encrypted from Zoom end point to Zoom end point,” the company said. The commonly accepted definition of end-to-end encryption requires even the host of a service to be unable to access the content of a communication. As we explained at the time, Zoom’s use of TLS and HTTPS meant it could intercept and decrypt video chats.

Come May, Zoom quickly acquired secure messaging outfit Keybase to give it the chops to build proper crypto.

Now Zoom reckons it has cracked the problem.

A Wednesday post revealed: “starting next week, Zoom’s end-to-end encryption (E2EE) offering will be available as a technical preview, which means we’re proactively soliciting feedback from users for the first 30 days.”

Sharp-eyed Reg readers have doubtless noticed that Zoom has referred to “E2EE”, not just the “E2E” contraction of “end-to-end”.

What’s up with that? The company has offered the following explanation:

“Zoom’s E2EE uses the same powerful GCM encryption you get now in a Zoom meeting. The only difference is where those encryption keys live. In typical meetings, Zoom’s cloud generates encryption keys and distributes them to meeting participants using Zoom apps as they join. With Zoom’s E2EE, the meeting’s host generates encryption keys and uses public key cryptography to distribute these keys to the other meeting participants. Zoom’s servers become oblivious relays and never see the encryption keys required to decrypt the meeting contents.”
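A minimal sketch of the flow Zoom describes, assuming libsodium sealed boxes stand in for whatever public-key scheme Zoom actually uses: the host mints a per-meeting GCM key and wraps it for each participant, so the relay only ever carries ciphertext.

```python
import os
from nacl.public import PrivateKey, SealedBox
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Participants publish public keys when they join the meeting.
participants = {name: PrivateKey.generate() for name in ("alice", "bob")}

# The host generates the per-meeting AES-GCM key...
meeting_key = AESGCM.generate_key(bit_length=256)

# ...and distributes it wrapped under each participant's public key.
wrapped = {name: SealedBox(sk.public_key).encrypt(meeting_key)
           for name, sk in participants.items()}

# A participant unwraps the key and can then decrypt meeting media;
# the server only ever relays the sealed boxes and GCM ciphertext.
alice_key = SealedBox(participants["alice"]).decrypt(wrapped["alice"])
nonce = os.urandom(12)
frame = AESGCM(meeting_key).encrypt(nonce, b"video frame", None)
assert AESGCM(alice_key).decrypt(nonce, frame, None) == b"video frame"
```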

Don’t go thinking the preview means Zoom has squared away security, because the company says: “To use it, customers must enable E2EE meetings at the account level and opt-in to E2EE on a per-meeting basis.”

With users having to be constantly reminded to use non-rubbish passwords, not to click on phish or leak business data on personal devices, they’ll almost certainly choose E2EE every time without ever having to be prompted, right?

Source: Remember when Zoom was rumbled for lousy crypto? Six months later it says end-to-end is ready • The Register

Five Eyes governments, India, and Japan make new call for encryption backdoors – insist that democracy is an insecure police state

Members of the intelligence-sharing alliance Five Eyes, along with government representatives for Japan and India, have published a statement over the weekend calling on tech companies to come up with a solution for law enforcement to access end-to-end encrypted communications.

The statement is the alliance’s latest effort to get tech companies to agree to encryption backdoors.

The Five Eyes alliance, comprised of the US, the UK, Canada, Australia, and New Zealand, made similar calls to tech giants in 2018 and 2019.

Just like before, government officials claim tech companies have put themselves in a corner by incorporating end-to-end encryption (E2EE) into their products.

If properly implemented, E2EE lets users have secure conversations — be they chat, audio, or video — without sharing the encryption key with the tech companies.

Representatives from the seven governments argue that the way E2EE is currently supported on today’s major tech platforms prevents not only law enforcement from investigating crime rings, but also the tech platforms themselves from enforcing their own terms of service.

Signatories argue that “particular implementations of encryption technology” are currently posing challenges to law enforcement investigations, as the tech platforms themselves can’t access some communications and provide needed data to investigators.

This, in turn, allows a safe haven for criminal activity and puts the safety of “highly vulnerable members of our societies like sexually exploited children” in danger, officials argued.

Source: Five Eyes governments, India, and Japan make new call for encryption backdoors | ZDNet

Let’s be clear here:

  1. There is no way for a backdoored system to be secure. This means that you not only give access to government police services, secret services, Stasi and thought police, who can persecute you for being Jewish or thinking the “wrong way” (e.g. being homosexual or communist); you also give criminal networks, scam artists, discontented exes and foreign governments free rein to run around your private content.
  2. You have a right to privacy and you need it. It’s fundamental to being able to think creatively, and that is the only way in which societies advance. If thought is policed by some arbitrary standard, then the deviations that lead to change will be suppressed. Stasis leads to economic collapse among other things, even if those at the top keep collecting more and more wealth for themselves.
  3. We as a society cannot “win” or become “better” by emulating the societies that we are competing against, that represent values and behaviours that we disagree with. Becoming a police state doesn’t protect us from other police states.

Google is giving data to police based on search keywords: IPs of everyone who searched a certain thing. No warrant required.

There are few things as revealing as a person’s search history, and police typically need a warrant on a known suspect to demand that sensitive information. But a recently unsealed court document found that investigators can request such data in reverse order by asking Google to disclose everyone who searched a keyword rather than for information on a known suspect.

In August, police arrested Michael Williams, an associate of singer and accused sex offender R. Kelly, for allegedly setting fire to a witness’ car in Florida. Investigators linked Williams to the arson, as well as witness tampering, after sending a search warrant to Google that requested information on “users who had searched the address of the residence close in time to the arson.”

The July court filing was unsealed on Tuesday. Detroit News reporter Robert Snell tweeted about the filing after it was unsealed.

Court documents showed that Google provided the IP addresses of people who searched for the arson victim’s address, which investigators tied to a phone number belonging to Williams. Police then used the phone number records to pinpoint the location of Williams’ device near the arson, according to court documents.

The original warrant sent to Google is still sealed, but the report provides another example of a growing trend of data requests to the search engine giant in which investigators demand data on a large group of users rather than a specific request on a single suspect.

“This ‘keyword warrant’ evades the Fourth Amendment checks on police surveillance,” said Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project. “When a court authorizes a data dump of every person who searched for a specific term or address, it’s likely unconstitutional.”

The keyword warrants are similar to geofence warrants, in which police make requests to Google for data on all devices logged in at a specific area and time. Google received 15 times more geofence warrant requests in 2018 compared with 2017, and five times more in 2019 than 2018. The rise in reverse requests from police has troubled Google staffers, according to internal emails.

[…]

Source: Google is giving data to police based on search keywords, court docs show – CNET

Europe’s top court confirms no mass surveillance without limits

Europe’s top court has delivered another slap-down to indiscriminate government mass surveillance regimes.

In a ruling today the CJEU has made it clear that national security concerns do not exclude EU Member States from the need to comply with general principles of EU law such as proportionality and respect for fundamental rights to privacy, data protection and freedom of expression.

However the court has also allowed for derogations, saying that a pressing national security threat can justify limited and temporary bulk data collection and retention — capped to ‘what is strictly necessary’.

Threats to public security or the need to combat serious crime may also allow for targeted retention of data, provided it’s accompanied by ‘effective safeguards’ and reviewed by a court or independent authority.


The reference to the CJEU joined a number of cases, including legal challenges brought by rights advocacy group Privacy International to bulk collection powers baked into the UK’s Investigatory Powers Act; a La Quadrature du Net (and others’) challenge to a 2015 French decree related to specialized intelligence services; and a challenge to Belgium’s 2016 law on collection and retention of comms data.

Civil rights campaigners had been eagerly awaiting today’s judgements from the Grand Chamber, following an opinion by an advisor to the court in January which implied certain EU Member States’ surveillance regimes were breaching the law.

At the time of writing key complainants had yet to issue a response.

Of course a government agency’s definition of how much data collection is ‘strictly necessary’ in a national security context (or, indeed, what constitutes an ‘effective safeguard’) may be rather different to the benchmark of civil rights advocacy groups — so it seems unlikely this ruling will be the last time the CJEU is asked to clarify where the legal limits of mass surveillance lie.


Additionally, the judgement raises interesting questions over the UK’s chances of gaining a data protection adequacy agreement from the European Commission — as it completes its exit from the EU at the end of the Brexit transition period this year — something it needs for digital data flows from the EU to continue uninterrupted as now.

The problem is the UK’s Investigatory Powers Act (IPA) gives government agencies broad powers to intercept and retain digital communications — but here the CJEU is making it clear that such bulk powers must be the exception, not the statutory rule.

So, again, a battle over definitions could be looming…

[…]

Another interesting component of today’s CJEU judgement suggests that in EU states with indiscriminate mass surveillance regimes there could be grounds for overturning individual criminal convictions which are based on evidence obtained via such illegal surveillance.

On this, the court writes in a press release: “As EU law currently stands, it is for national law alone to determine the rules relating to the admissibility and assessment, in criminal proceedings against persons suspected of having committed serious criminal offences, of information and evidence obtained by the retention of data in breach of EU law. However, the Court specifies that the directive on privacy and electronic communications, interpreted in the light of the principle of effectiveness, requires national criminal courts to disregard information and evidence obtained by means of the general and indiscriminate retention of traffic and location data in breach of EU law, in the context of such criminal proceedings, where those persons suspected of having committed criminal offences are not in a position to comment effectively on that information and evidence.”

Update: Privacy International has now responded to the CJEU judgements, saying the UK, French and Belgian surveillance regimes must be amended to be brought within EU law.

In a statement, legal director Caroline Wilson Palow said: “Today’s judgment reinforces the rule of law in the EU. In these turbulent times, it serves as a reminder that no government should be above the law. Democratic societies must place limits and controls on the surveillance powers of our police and intelligence agencies.

“While the police and intelligence agencies play a very important role in keeping us safe, they must do so in line with certain safeguards to prevent abuses of their very considerable power. They should focus on providing us with effective, targeted surveillance systems that protect both our security and our fundamental rights.”

Source: Europe’s top court confirms no mass surveillance without limits | TechCrunch

The IRS Is Being Investigated for Using Bought Location Data Without a Warrant – Wait there’s a company called Venntel that sells this and that’s OK?

The body tasked with oversight of the IRS announced in a letter that it will investigate the agency’s use of location data harvested from ordinary apps installed on peoples’ phones, according to a copy of the letter obtained by Motherboard.

The move comes after Senators Ron Wyden and Elizabeth Warren demanded a formal investigation into how the IRS used the location data to track Americans without a warrant.

“We are going to conduct a review of this matter, and we are in the process of contacting the CI [Criminal Investigation] division about this review,” the letter, signed by J. Russell George, the Inspector General, and addressed to the Senators, reads. CI has a broad mandate to investigate abusive tax schemes, bankruptcy fraud, identity theft, and many more similar crimes. Wyden’s office provided Motherboard with a copy of the letter on Tuesday.

In June, officials from the IRS Criminal Investigation unit told Wyden’s office that it had purchased location data from a contractor called Venntel, and that the IRS had tried to use it to identify individual criminal suspects. Venntel obtains location data from innocuous looking apps such as games, weather, or e-commerce apps, and then sells access to the data to government clients.

A Wyden aide previously told Motherboard that the IRS wanted to find phones, track where they were at night, use that as a proxy for where each individual lived, and then use other data sources to try to identify the person. A person who used to work for Venntel previously told Motherboard that Venntel customers can use the tool to see which devices are in a particular house, for instance.
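
For the technically curious, the heuristic the aide describes boils down to something like the sketch below, assuming a simple table of app-harvested pings. The column names and cell size are my inventions, not Venntel’s actual schema.

```python
import pandas as pd

def infer_home_locations(pings: pd.DataFrame) -> pd.DataFrame:
    """pings has columns: device_id, timestamp (datetime64), lat, lon."""
    # Keep only night-time pings, when most people are at home.
    night = pings[pings["timestamp"].dt.hour.isin([23, 0, 1, 2, 3, 4])].copy()
    # Snap coordinates to ~100 m cells so repeated stays cluster together.
    night["cell"] = list(zip(night["lat"].round(3), night["lon"].round(3)))
    # The modal night-time cell per device becomes the "home" guess, which
    # other data sources would then be used to tie to a person.
    home = (night.groupby("device_id")["cell"]
                 .agg(lambda c: c.mode().iloc[0])
                 .rename("inferred_home"))
    return home.reset_index()
```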

The IRS’ attempts were not successful though, as the people the IRS was looking for weren’t included in the particular Venntel data set, the aide added.

But the IRS still obtained this data without a warrant, and the legal justification for doing so remains unclear. The aide said that the IRS received verbal approval to use the data, but that the agency then stopped responding to Wyden’s office’s inquiries.

[…]

Source: The IRS Is Being Investigated for Using Location Data Without a Warrant

Facebook revenue chief says ad-supported model is ‘under assault’ – boo hoo, turns out people like their privacy

Facebook Chief Revenue Officer David Fischer said Tuesday that the economic models that rely on personalized advertising are “under assault” as Apple readies a change that would limit the ability of Facebook and other companies to target ads and estimate how well they work.

The change to Apple’s identifier for advertisers, or IDFA, will give iPhone users the option to block tracking when opening an app. It was originally planned for iOS 14, the version of the iPhone operating system that was released last month. But Apple said last month it was delaying the rollout until 2021 “to give developers time to make necessary changes.”

Speaking at a virtual Advertising Week session on Tuesday, Fischer addressed the changes after being asked about Facebook’s vulnerability to the companies that control mobile platforms, such as Apple and Google, which runs Android.

Fischer argued that though there’s “angst and concern” about the risks of technology, personalized and targeted advertising has been essential to help the internet grow.

“The economic model that not just we at Facebook but so many businesses rely on, this model is worth preserving, one that makes content freely available, and the business that makes it run and hum, is via advertising,” he said.

“And right now, frankly, some of that is under assault, that the very tools that entrepreneurs, that businesses are relying on right now are being threatened. To me, the changes that Apple has proposed, pretty sweeping changes, are going to hurt developers and businesses the most.”

Apple frames the change as preserving users’ privacy, rather than as an attack on the advertising industry, and has been promoting its privacy features as a core reason to get an iPhone. It comes as consumers are increasingly wary about their online privacy following scandals with various companies, including Facebook.

[…]

Source: Facebook revenue chief says ad-supported model is ‘under assault’

Who watches the watchers? Samsung does so it can fling ads at owners of its smart TVs

Samsung brags to advertisers that “first screen ads”, seen by all users of its Smart TVs when they turn on, are 100 per cent viewable, audience targeted, and seen 400 times per TV per month. Some users are not happy.

“Dear Samsung, why are you showing Ads on my Smart TV without my consent? I didn’t agree to this in the privacy settings but I keep on getting this, why?” said a user on Samsung’s TV forum, adding last week that “there is no mention of advertising on any of their brand new boxes”.

As noted by TV site flatpanelshd, a visit to Samsung’s site pitching to advertisers is eye-opening. It is not just that the ads appear, but also that the company continually profiles its customers, using a technology called Automatic Content Recognition (ACR), which works by detecting what kind of content a viewer is watching.
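
ACR is conceptually simple, and a toy sketch makes the privacy implication obvious: the TV samples what is on screen, fingerprints it, and looks the fingerprint up in a database of known content. This is an illustration only; Samsung’s actual pipeline is not public, and the hash below is a crude stand-in for a robust perceptual one.

```python
import hashlib
from collections import Counter
from typing import Optional

REFERENCE_DB: dict = {}  # fingerprint -> programme title, built from known content

def frame_fingerprint(pixels: bytes) -> str:
    # Stand-in for a perceptual hash of a downsampled frame; a real ACR
    # system must tolerate scaling, overlays and compression artefacts.
    return hashlib.sha256(pixels).hexdigest()[:16]

def identify(frames: list) -> Optional[str]:
    votes = Counter()
    for pixels in frames:
        title = REFERENCE_DB.get(frame_fingerprint(pixels))
        if title:
            votes[title] += 1
    # A majority vote over a few seconds of frames names the programme.
    return votes.most_common(1)[0][0] if votes else None
```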

Samsung’s Tom Focetta, VP Ad Sales and Operations in the US, said in an interview: “Our platform is built on the largest source of TV data from more than 50 million smart TVs. And we have amassed over 60 per cent of the US ACR footprint.” Focetta added that ACR data is “not sold, rented or distributed” but used exclusively by Samsung to target advertising.

The first screen ad unit was introduced five years ago, Focetta explained, and the company has since “added video, different types of target audience engagement, different ways to execute in terms of tactics like audience takeovers, roadblocks”. A “roadblock” is defined as “100 per cent ownership of first screen ad impressions across all Samsung TVs”. According to a Samsung support representative quoted by flatpanelshd: “In general, the banner cannot be deactivated in the Smart Hub.”

Advertising does not stop there, since Samsung also offers TV Plus, “a free ad-supported TV service”. Viewers are familiar with this kind of deal, though, since ad-supported broadcasting is long established. What perturbs them is that, in spending a large sum of money on TV hardware, they unknowingly agreed to advertising baked into its operating menus, shown every time they switch on.

The advent of internet-connected TVs means that viewers now divide their time between traditional TV delivered by cable or over the air, and streaming content, with an increasing share going to streaming. Viewers who have cancelled subscription TV services in favour of streaming are known as cord-cutters.

Even viewers who have chosen to watch only ad-free content do not escape. “30 per cent of streamers spend all of their streaming time in non-ad supported apps. This, however, does not mean ‘The Lost 30’ are unreachable,” said Samsung in a paper.

[…]

Source: Who watches the watchers? Samsung does so it can fling ads at owners of its smart TVs • The Register

Blowback Time: China Says TikTok Deal Is A Model For How It Should Deal With US Companies In China

We’ve already covered what a ridiculous, pathetic grift the Oracle/TikTok deal was. Despite it being premised on a “national security threat” from China, because the app might share some data (all of which is easily buyable from data brokers) with Chinese officials, the final deal cured none of that, left the Chinese firm ByteDance with 80% ownership of TikTok, and gave Trump supporters at Oracle a fat contract — and allowed Trump to pretend he did something.

Of course, what he really did was hand China a huge gift. In response to the deal, state media in China is now highlighting how the Chinese government can use it as a model for forcing the restructuring of US tech companies and putting their data under the control of local companies in China. This is from the editor-in-chief of The Global Times, a Chinese state-sponsored newspaper:

The tweet says:

The US restructuring of TikTok’s stake and actual control should be used as a model and promoted globally. Overseas operation of companies such as Google, Facebook shall all undergo such restructure and be under actual control of local companies for security concerns.

So, beyond doing absolutely nothing to solve the “problem” that politicians in the US laid out, the deal works in reverse. It’s given justification for China to mess with American companies in the same way, and push to expose more data to the Chinese government.

Great work, Trump. Hell of a deal.

Meanwhile, the same Twitter feed says that it’s expected that officials in Beijing are going to reject the deal from their end, and seek to negotiate one even more favorable to China’s “national security interests and dignity.”

So, beyond everything else, Trump’s “deal” has probably done more to help China than the US, harming data privacy and protection while handing China a justification playbook: “See, we’re just following your lead!”

Source: Blowback Time: China Says TikTok Deal Is A Model For How It Should Deal With US Companies In China | Techdirt

Spain’s highway agency is monitoring speeding hotspots using bulk phone location data – is that even allowed here?

Spain’s highways agency is using bulk mobile phone data for monitoring speeding hotspots, according to local reports.

Equipped with data on customers handed over by local mobile phone operators, Spain’s Directorate-General for Traffic (DGT) may be gathering data on “which roads and at what specific kilometer points the speed limits are usually exceeded,” according to Granadan newspaper Ideal (en español).

“In fact, Traffic has data on this since the end of last year when the National Statistics Institute (INE) reached an agreement with mobile operators to obtain information about the movements of citizens,” reported the paper.

The data-harvesting agreement was first signed late last year to coincide with a national census (as El Reg reported at the time) and is now being used to monitor drivers’ speeds.

National newspaper El Pais reported in October 2019 that the trial would involve dividing Spain “into 3,500 cells with a minimum of 5,000 people in each of them” with the locations of phones being sampled continuously between 9am and 6pm, with further location snapshots being taken at 12am and 6am.

The newspaper explained: “With this information it will be possible to know how many citizens move from a dormitory municipality to a city; how many people work in the same neighbourhood where you live or in a different one; where do the people who work in an area come from, or how the population fluctuates in a box throughout the day.”
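
Mechanically, the aggregation described amounts to something like this sketch: map each ping into one of the ~3,500 cells and publish only distinct-device counts per cell per hour. The record layout is an assumption on my part; the INE has not published its methodology in code form.

```python
from collections import Counter

def hourly_cell_counts(pings, cell_of):
    """pings: iterable of (device_id, hour, lat, lon) tuples;
    cell_of: function mapping (lat, lon) to one of ~3,500 cell ids."""
    counts = Counter()
    seen = set()
    for device_id, hour, lat, lon in pings:
        key = (hour, cell_of(lat, lon))
        if (device_id, key) not in seen:   # count each device once per cell-hour
            seen.add((device_id, key))
            counts[key] += 1
    return counts  # {(hour, cell): number of distinct devices present}
```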

The INE insisted that data collected back then had been anonymised and was “aimed at getting a better idea of where Spaniards go during the day and night”, as the BBC summarised the scheme. Mobile networks Vodafone, Movistar, and Orange were all said to be handing over user data to the INE, with the bulk information fetching €500,000 – a sum split between all three firms.

Let me interject here that it’s practically impossible to truly anonymise data – and location data is incredibly personal, private and dangerous, as was shown when supposedly anonymous fitness-app location data exposed secret US military bases.
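
If that claim sounds overblown, an experiment of the following shape shows why: pick a few known points from a person’s trace and count how many devices in the “anonymised” dataset match them. Research on mobility traces famously found that four spatio-temporal points uniquely identify around 95% of people. The data layout below is assumed purely for illustration.

```python
import random

def fraction_unique(traces, k=4, trials=1000):
    """traces: dict of device_id -> set of (hour, cell) points."""
    ids = list(traces)
    unique = 0
    for _ in range(trials):
        victim = random.choice(ids)
        # An attacker learns k points about the victim (home, work, a cafe...).
        known = random.sample(list(traces[victim]), min(k, len(traces[victim])))
        matches = [d for d in ids if traces[d].issuperset(known)]
        if matches == [victim]:
            unique += 1
    return unique / trials  # fraction of devices pinned down by just k points
```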

In April the initiative was reactivated for the so-called DataCovid plan, where the same type of bulk location data was used to identify areas where Spaniards were ignoring COVID-19 lockdown laws.

“The goal is to analyse the effect which the (confinement) measures have had on people’s movements, and see if people’s movements across the land are increasing or decreasing,” Spain’s government said at the time, as reported by expat news service The Local’s Iberian offshoot.

The DGT then apparently hit on the idea of using speed data derived from cell tower pings (in the same way that Google Maps, Waze, and other online services derive average road speed and congestion information) to identify locations where drivers may have been breaking the speed limit.
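
The mechanics are straightforward, as this sketch shows: take consecutive timestamped positions for a device, divide great-circle distance by elapsed time, and flag segments whose implied speed exceeds the limit. The ping format here is my assumption, not the DGT’s.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometres.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def speeding_segments(pings, limit_kmh):
    """pings: time-ordered list of (timestamp_s, lat, lon) for one device."""
    flagged = []
    for (t1, la1, lo1), (t2, la2, lo2) in zip(pings, pings[1:]):
        hours = (t2 - t1) / 3600
        if hours > 0:
            kmh = haversine_km(la1, lo1, la2, lo2) / hours
            if kmh > limit_kmh:
                flagged.append(((la1, lo1), (la2, lo2), round(kmh)))
    return flagged  # segments where the implied average speed broke the limit
```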

The Ideal news website seemed to put the obvious fears to bed in its report on the traffic police initiative when it posed the rhetorical question: can drivers be fined based on mobile data?

“The answer is clear and direct: it is not possible,” it concluded. “The DGT can only fine us through the fixed and mobile radars that it has installed throughout the country.”

While the direction of travel here seems obvious to anyone with any experience of living in a western country that implements this type of dragnet mass surveillance, so far there is little evidence of an explicit link between mobile phone data-slurping and speed cameras or fines.

Back in 2016, TfL ran a “trial” tracking people’s movements by analysing where their MAC addresses popped up within the Tube network, also hoping to use this data to charge higher prices for advertising spots in busy areas inside Tube stations. Dedicated public Wi-Fi on train platforms is now a permanent fixture in all but a few London Underground stations. The service is operated by Virgin Media and is “free” to use for customers of the four mobile network operators, but it collects your mobile number at the point of signing up.
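
Mechanically, the TfL trial boils down to something like the following sketch. TfL said the collected MAC addresses were de-personalised by hashing; the salt and record layout here are my assumptions.

```python
import hashlib
from collections import defaultdict

SALT = b"rotating-secret-salt"   # hypothetical; TfL's actual scheme isn't public

def pseudonym(mac: str) -> str:
    # Hashing only pseudonymises: the same MAC always maps to the same id.
    return hashlib.sha256(SALT + mac.encode()).hexdigest()[:12]

def journeys(sightings):
    """sightings: iterable of (mac, station, timestamp) from station Wi-Fi APs."""
    trips = defaultdict(list)
    for mac, station, ts in sightings:
        trips[pseudonym(mac)].append((ts, station))
    # A time-ordered list of stations per pseudonym is, in effect, a journey.
    return {p: [s for _, s in sorted(v)] for p, v in trips.items()}
```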

And here you can see the ease with which mission creep comes out and people start using your data for all kinds of non-related things once they have it. This is why we shouldn’t allow governments or anyone else to get their grubby little hands on it and why we should be glad that at least at EU level, data privacy is taken seriously with GDPR and other laws.

Source: Spain’s highway agency is monitoring speeding hotspots using bulk phone location data • The Register

Firefox usage is down 85% despite Mozilla’s top exec pay going up 400%

Mozilla recently announced that they would be dismissing 250 people. That’s a quarter of their workforce, so there are some deep cuts to their work too. The victims include the MDN docs (those are the web standards docs everyone likes better than w3schools), the Rust compiler and even some cuts to Firefox development. Like most people I want to see Mozilla do well, but those three projects comprise pretty much what I think of as the whole point of Mozilla, so this news is a big letdown.

The stated reason for the cuts is falling income. Mozilla largely relies on “royalties” for funding. In return for payment, Mozilla allows big technology companies to choose the default search engine in Firefox – the technology companies are ultimately paying to increase the number of searches Firefox users make with them. Mozilla haven’t been particularly transparent about why these royalties are being reduced, except to blame the coronavirus.

I’m sure the coronavirus is not a great help but I suspect the bigger problem is that Firefox’s market share is now a tiny fraction of its previous size and so the royalties will be smaller too – fewer users, so fewer searches and therefore less money for Mozilla.

The real problem is not the royalty cuts, though. Mozilla has already received more than enough money to set themselves up for financial independence. Mozilla received up to half a billion dollars a year (each year!) for many years. The real problem is that Mozilla didn’t use that money to achieve financial independence and instead just spent it each year, doing the organisational equivalent of living hand-to-mouth.

Despite their slightly contrived legal structure as a non-profit that owns a for-profit, Mozilla are an NGO just like any other. In this article I want to apply the traditional measures that are applied to other NGOs to Mozilla in order to show what’s wrong.

These three measures are: overheads, ethics and results.

Overheads

One of the most popular and most intuitive ways to evaluate an NGO is to judge how much of their spending is on their programme of works (or “mission”) and how much is on other things, like administration and fundraising. If you give money to a charity for feeding people in the third world you hope that most of the money you give them goes on food – and not, for example, on company cars for head office staff.

Mozilla looks bad when considered in this light. Fully 30% of all expenditure goes on administration. Charity Navigator, an organisation that measures NGO effectiveness, would give them zero out of ten on the relevant metric. For context, to achieve 5/10 on that measure Mozilla admin would need to be under 25% of spending and, for 10/10, under 15%.

Senior executives have also done very well for themselves. Mitchell Baker, Mozilla’s top executive, was paid $2.4m in 2018, a sum I personally think of as instant inter-generational wealth. Payments to Baker have more than doubled in the last five years.

As far as I can find, there is no UK-based NGO whose top executive makes more than £1m ($1.3m) a year. The UK certainly has its fair share of big international NGOs – many much bigger and more significant than Mozilla.

I’m aware that some people dislike overheads as a measure and argue that it’s possible for administration spending to increase effectiveness. I think it’s hard to argue that Mozilla’s overheads are correlated with any improvement in effectiveness.

Ethics

Mozilla now thinks of itself less as a custodian of the old Netscape suite and more as a ‘privacy NGO’. One slogan inside Mozilla is: “Beyond the Browser”.

Regardless of how they view themselves, most of their income comes from helping to direct traffic to Google by making that search engine the default in Firefox. Google make money off that traffic via a big targeted advertising system that tracks people across the web, largely without their consent. Indeed, one of the reasons this income is falling is that, as Firefox’s usage falls, less traffic is directed Google’s way, and so Google will pay less.

There is, as yet, no outbreak of agreement among the moral philosophers as to a universal code of ethics. However I think most people would recognise hypocrisy in Mozilla’s relationship with Google. Beyond the ethical problems, the relationship certainly seems to create conflicts of interest. Anyone would think that a privacy NGO would build anti-tracking countermeasures into their browser right from the start. In fact, this was only added relatively recently (in 2019), after both Apple (in 2017) and Brave (since release) paved the way. It certainly seems like Mozilla’s status as a Google vassal has played a role in the absence of anti-tracking features in Firefox for so long.

Another ethical issue is Mozilla’s big new initiative to move into VPNs. This doesn’t make a lot of sense from a privacy point of view. Broadly speaking: VPNs are not a useful privacy tool for people browsing the web. A VPN lets you access the internet through a proxy – so your requests superficially appear to come from somewhere other than they really do. This does nothing to address the main privacy problem for web users: that they are being passively tracked and de-anonymised on a massive scale by the baddies at Google and elsewhere. This tracking happens regardless of IP address.
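
A toy example makes the point: a tracker links visits by cookie or fingerprint, not by IP address, so swapping your apparent IP via a VPN changes nothing the tracker cares about. Everything below is simulated, using documentation-range IPs.

```python
# Two visits from different exit IPs, e.g. before and after switching on a VPN.
visits = [
    {"ip": "203.0.113.7",  "cookie": "uid=abc123", "page": "/news"},
    {"ip": "198.51.100.9", "cookie": "uid=abc123", "page": "/health"},
]

profiles = {}
for v in visits:
    # The tracker keys its profile on the cookie, not the source address.
    profiles.setdefault(v["cookie"], []).append((v["ip"], v["page"]))

print(profiles["uid=abc123"])  # both visits end up in one profile anyway
```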

When I tested Firefox through Mozilla VPN (a rebrand of Mullvad VPN) I found that I could be de-anonymised by browser fingerprinting – already a fairly widespread technique by which various elements of your browser are examined to create a “fingerprint” which can then be used to re-identify you later. Firefox, unlike some other browsers, does not include any countermeasures against this.

[Image: Panopticlick results showing that Firefox has a unique fingerprint]
Even when using Mozilla’s “secure and private” VPN, Firefox is trackable by browser fingerprinting, as demonstrated by the EFF’s Panopticlick tool. Other browsers use randomised fingerprints as a countermeasure against this tracking.
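
To see how fingerprinting works, and what the randomisation countermeasure achieves, consider this sketch; the attribute names are illustrative, not a real fingerprinting script.

```python
import hashlib
import json
import random

def fingerprint(attrs: dict) -> str:
    # Hash the combined attributes; enough quasi-stable bits make it unique.
    blob = json.dumps(attrs, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:16]

me = {
    "userAgent": "Firefox/81.0",
    "screen": "2560x1440",
    "timezone": "Europe/London",
    "fonts": ["Arial", "Calibri", "Ubuntu"],
    "canvasHash": "9f2c41d8",
}
print(fingerprint(me))  # stable across sites and across IP addresses

# The countermeasure other browsers use: randomise a noisy attribute per
# session, so the combined hash never repeats.
me["canvasHash"] = format(random.getrandbits(64), "x")
print(fingerprint(me))  # different every run
```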

Another worry is that many of these privacy-focused VPN services have a nasty habit of turning out to keep copious logs on user behaviour. A few months ago, in a massive breach, several “no log” VPN services inadvertently released terabytes of private user data that they had promised not to collect. VPN services are in a great position to eavesdrop – and even if they promise not to, your only option is to take them at their word.

Results

I’ve discussed the Mozilla chair’s impressive pay: $2.4m/year. Surely such impressive pay is justified by the equally impressive results Mozilla has achieved? Sadly, on almost every measure of results, both quantitative and qualitative, Mozilla is a dog.

Firefox is now so niche it is in danger of garnering a cult following: it has just 4% market share, down from 30% a decade ago. Mobile browsing numbers are bleak: Firefox barely exists on phones, with a market share of less than half a percent. This is baffling given that mobile Firefox has a rare feature for a mobile browser: it’s able to install extensions and so can block ads.

Yet despite the problems within their core business, Mozilla, instead of retrenching, has diversified rapidly. In recent years Mozilla has created:

  • a mobile app for making websites
  • a federated identity system
  • a large file transfer service
  • a password manager
  • an internet-of-things framework/standard
  • an email relay service
  • a completely new phone operating system
  • an AI division (but of course)
  • and spent $25 million buying the reading list management startup, Pocket

Many of the above are now abandoned.

Sadly Mozilla’s annual report doesn’t break down expenses on a per-project basis so it’s impossible to know how much of the spending that is on Mozilla’s programme is being spent on Firefox and how much is being spent on all these other side-projects.

What you can at least infer is that the side-projects are expensive. Software development always is. Each of the projects named above (and all the other ones that were never announced or that I don’t know about) will have required business analysts, designers, user researchers, developers, testers and all the other people you need in order to create a consumer web project.

The biggest cost, of course, is the opportunity cost: instead of being spent each year on other stuff – or on nothing – that money could have been invested to build an endowment. Now Mozilla is in the situation where apparently there isn’t enough money left to fully fund Firefox development.
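
To put a rough number on the endowment point, here is a back-of-envelope calculation with loudly invented figures: suppose Mozilla had banked $200m a year for fifteen years at a 5% real return.

```python
# Back-of-envelope only: all figures here are invented for illustration.
saved, rate, years, contribution = 0.0, 0.05, 15, 200e6
for _ in range(years):
    saved = saved * (1 + rate) + contribution
# ~$4.3bn endowment; a conservative 4% annual draw would yield ~$170m/year,
# roughly enough to fund browser development indefinitely.
print(f"endowment: ${saved / 1e9:.1f}bn, 4% draw: ${saved * 0.04 / 1e6:.0f}m/yr")
```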

What now?

Mozilla can’t just continue as before. At the very least they need to reduce their expenses to go along with their now reduced income. That income is probably still pretty enormous though: likely hundreds of millions a year.

I’m a Firefox user (and one of the few on mobile, apparently) and I want to see Mozilla succeed. As such, I would hope that Mozilla would cut their cost of administration. I’d also hope that they’d increase spending on Firefox to make it faster and implement those privacy features that other browsers have. Most importantly: I’d like them to start building proper financial independence.

I doubt those things will happen. Instead they will likely keep the expensive management. They have already cut spending on Firefox. Their great hope is to continue trying new things, like using their brand to sell VPN services that, as I’ve discussed, do not solve the problem that their users have.

Instead of diversifying into yet more products and services Mozilla should probably just ask their users for money. For many years the Guardian newspaper (a similarly sized organisation to Mozilla in terms of staff) was a financial basket case. The Guardian started asking their readers for money a few years ago and seems to be on firmer financial footing since.

Getting money directly has also helped align the incentives of their organisation with those of their readers. Perhaps that would work for Mozilla. But then, things are different at the Guardian. Their chief exec makes a mere £360,000 a year.

Source: Firefox usage is down 85% despite Mozilla’s top exec pay going up 400%

MS Edge and Google Chrome are winning the renewed browser wars, and this kind of financial game-playing isn’t helping Firefox, which I really want to win on ethical grounds.