France fines Google $120M and Amazon $42M for dropping tracking cookies without consent

France’s data protection agency, the CNIL, has slapped Google and Amazon with fines for dropping tracking cookies without consent.

Under the penalty notices issued today, Google has been hit with a total of €100 million (~$120 million) for dropping cookies on Google.fr, and Amazon with €35 million (~$42 million) for doing so on the Amazon.fr domain.

The regulator carried out investigations of the websites over the past year and found that tracking cookies were automatically dropped when a user visited the domains, in breach of the country’s Data Protection Act.

In Google’s case the CNIL found three consent violations related to dropping non-essential cookies.

“As this type of cookie cannot be deposited without the user having expressed their consent, the restricted committee considered that the companies had not complied with the requirement, provided for by Article 82 of the Data Protection Act, of collecting consent prior to the deposit of non-essential cookies,” it writes in the penalty notice [which we’ve translated from French].

Amazon was found to have committed two violations, per the CNIL penalty notice.

CNIL also found that the information about the cookies provided to site visitors was inadequate — noting that a banner displayed by Google did not provide specific information about the tracking cookies the Google.fr site had already dropped.

Under local French (and European) law, site users should have been clearly informed before the cookies were dropped and asked for their consent.

In Amazon’s case its French site displayed a banner informing arriving visitors that they agreed to its use of cookies. CNIL said this did not comply with transparency or consent requirements — since it was not clear to users that the tech giant was using cookies for ad tracking. Nor were users given the opportunity to consent.

The law on tracking cookie consent has been clear in Europe for years. But in October 2019 a CJEU ruling further clarified that consent must be obtained prior to storing or accessing non-essential cookies. As we reported at the time, sites that failed to ask for consent to track were risking a big fine under EU privacy laws.

Source: France fines Google $120M and Amazon $42M for dropping tracking cookies without consent | TechCrunch

As if Productivity Score wasn’t creepy enough, Microsoft has patented tech for ‘meeting quality monitoring devices’ – PS is being defanged though

The slightly creepy “Productivity Score” may not be all that’s in store for Microsoft 365 users, judging by a trawl of Redmond’s patents.

One that has popped up recently concerns a “Meeting Insight Computing System”, spotted first by GeekWire, created to give meetings a quality score with a view to improving upcoming get-togethers.

It all sounds innocent enough until you read about the requirement for “quality parameters” to be collected from “meeting quality monitoring devices”, which might give some pause for thought.

Productivity Score relies on metrics captured within Microsoft 365 to assess how productive a company and its workers are. Metrics include the take-up of messaging platforms versus email. And though Microsoft has been quick to insist the motives behind the tech are pure, others have cast more of a jaundiced eye over the technology.

[…]

Meeting Insights would take things further by plugging data from a variety of devices into an algorithm in order to score the meeting. Sampling of environmental data such as air quality and the like is all well and good, but proposed sensors such as “a microphone that may, for instance, detect speech patterns consistent with boredom, fatigue, etc”, along with other metrics such as how long a person spends speaking, could also provide data to be stirred into the mix.

And if that doesn’t worry attendees, how about some more metrics to measure how focused a person is? Are they taking care of emails, messaging or enjoying a surf of the internet when they should be paying attention to the speaker? Heck, if one is taking data from a user’s computer, one could even consider the physical location of the device.
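To see what such scoring might look like in practice, here is a purely hypothetical Python sketch; the patent does not publish a formula, so every signal, weight, and threshold below is invented for illustration only.

```python
from dataclasses import dataclass

@dataclass
class MeetingSignals:
    # Invented stand-ins for the patent's "quality parameters".
    air_quality_index: float       # environmental sensor reading
    boredom_speech_ratio: float    # 0..1, share of speech flagged as bored/fatigued
    dominant_speaker_share: float  # 0..1, fraction of time the top talker holds the floor
    attendees_on_email: int        # focus metric pulled from attendees' computers

def meeting_score(s: MeetingSignals) -> float:
    """Toy 0-100 score: dock points for stale air, boredom, monologues, distraction."""
    score = 100.0
    score -= 0.2 * max(0.0, s.air_quality_index - 50)        # penalize poor air quality
    score -= 30 * s.boredom_speech_ratio                     # bored-sounding speech
    score -= 20 * max(0.0, s.dominant_speaker_share - 0.5)   # one person droning on
    score -= 5 * s.attendees_on_email                        # people doing email instead
    return max(0.0, score)

# A stuffy meeting with one dominant speaker and two distracted attendees.
print(meeting_score(MeetingSignals(80.0, 0.4, 0.7, 2)))  # 68.0
```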

[…]

Talking to The Reg, one privacy campaigner who asked to remain anonymous said of tools such as Productivity Score and the Meeting Insight Computing System patent: “There is a simple dictum in privacy: you cannot lose data you don’t have. In other words, if you collect it you have to protect it, and that sort of data is risky to start with.

“Who do you trust? The correct answer is ‘no one’.”

Source: As if Productivity Score wasn’t creepy enough, Microsoft has patented tech for ‘meeting quality monitoring devices’ • The Register

Since then, Microsoft has said it will remove user names from the ‘Productivity Score’ feature after privacy backlash (GeekWire)

Microsoft says it will make changes in its new Productivity Score feature, including removing the ability for companies to see data about individual users, to address concerns from privacy experts that the tech giant had effectively rolled out a new tool for snooping on workers.

“Going forward, the communications, meetings, content collaboration, teamwork, and mobility measures in Productivity Score will only aggregate data at the organization level—providing a clear measure of organization-level adoption of key features,” wrote Jared Spataro, Microsoft 365 corporate vice president, in a post this morning. “No one in the organization will be able to use Productivity Score to access data about how an individual user is using apps and services in Microsoft 365.”

The company rolled out its new “Productivity Score” feature as part of Microsoft 365 in late October. It gives companies data to understand how workers are using and adopting different forms of technology. It made headlines over the past week as reports surfaced that the tool lets managers see individual user data by default.

As originally rolled out, Productivity Score turned Microsoft 365 into a “full-fledged workplace surveillance tool,” wrote Wolfie Christl of the independent Cracked Labs digital research institute in Vienna, Austria. “Employers/managers can analyze employee activities at the individual level (!), for example, the number of days an employee has been sending emails, using the chat, using ‘mentions’ in emails etc.”

The initial version of the Productivity Score tool allowed companies to see individual user data. (Screenshot via YouTube)

Spataro wrote this morning, “We appreciate the feedback we’ve heard over the last few days and are moving quickly to respond by removing user names entirely from the product. This change will ensure that Productivity Score can’t be used to monitor individual employees.”

Privacy campaigner flags concerns about Microsoft’s creepy Productivity Score now in 365

Microsoft’s Productivity Score has put in a public appearance in Microsoft 365 and attracted the ire of privacy campaigners and activists.

The Register had already noted the vaguely creepy-sounding technology back in May. The goal is to use telemetry captured by the Windows behemoth to track the productivity of an organisation through metrics such as a corporate obsession with interminable meetings or just how collaborative employees are being.

The whole thing sounds vaguely disturbing in spite of Microsoft’s insistence that it was for users’ own good.

As more details have emerged, so have concerns over just how granular the level of data capture is.

Vienna-based researcher (and co-creator of Data Dealer) Wolfie Christl suggested that the new feature “turns Microsoft 365 into a full-fledged workplace surveillance tool.”

Christl went on to claim that the software allows employers to dig into employee activities, checking the usage of email versus Teams and looking into email threads with @mentions. “This is so problematic at many levels,” he noted, adding: “Managers evaluating individual-level employee data is a no go,” and that there was the danger that evaluating “productivity” data can shift power from employees to organisations.

Earlier this year we put it to Microsoft corporate vice president Brad Anderson that employees might find themselves under the gimlet gaze of HR thanks to this data.

He told us: “There is no PII [personally identifiable information] data in there… it’s a valid concern, and so we’ve been very careful that as we bring that telemetry back, you know, we bring back what we need, but we stay out of the PII world.”

Microsoft did concede that there could be granularity down to the individual level although exceptions could be configured. Melissa Grant, director of product marketing for Microsoft 365, told us that Microsoft had been asked if it was possible to use the tool to check, for example, that everyone was online and working by 8 but added: “We’re not in the business of monitoring employees.”

Christl’s concerns are not limited to the Productivity Score dashboard itself; they extend to what is going on behind the scenes in the form of the Microsoft Graph. The People API, for example, is a handy jumping-off point into all manner of employee data.

For its part, Microsoft has continued to insist that Productivity Score is not a stick with which to bash employees. In a recent blog on the matter, the company stated:

To be clear, Productivity Score is not designed as a tool for monitoring employee work output and activities. In fact, we safeguard against this type of use by not providing specific information on individualized actions, and instead only analyze user-level data aggregated over a 28-day period, so you can’t see what a specific employee is working on at a given time. Productivity Score was built to help you understand how people are using productivity tools and how well the underlying technology supports them in this.

In an email to The Register, Christl retorted: “The system *does* clearly monitor employee activities. And they call it ‘Productivity Score’, which is perhaps misleading, but will make managers use it in a way managers usually use tools that claim to measure ‘productivity’.”

He added that Microsoft’s own promotional video for the technology showed a list of clearly identifiable users, which corporate veep Jared Spataro said enabled companies to “find your top communicators across activities for the last four weeks.”

We put Christl’s concerns to Microsoft and asked the company if its good intentions extended to the APIs exposed by the Microsoft Graph.

While it has yet to respond to worries about the APIs, it reiterated that the tool was compliant with privacy laws and regulations, telling us: “Productivity Score is an opt-in experience that gives IT administrators insights about technology and infrastructure usage.”

It added: “Insights are intended to help organizations make the most of their technology investments by addressing common pain points like long boot times, inefficient document collaboration, or poor network connectivity. Insights are shown in aggregate over a 28-day period and are provided at the user level so that an IT admin can provide technical support and guidance.”

Source: Privacy campaigner flags concerns about Microsoft’s creepy Productivity Score • The Register

IRS contract allowed it to search warrantless location database over 10,000 times

The IRS was able to query a database of location data quietly harvested from ordinary smartphone apps over 10,000 times, according to a copy of the contract between IRS and the data provider obtained by Motherboard.

The document provides more insight into what exactly the IRS wanted to do with a tool purchased from Venntel, a government contractor that sells clients access to a database of smartphone movements. The Inspector General is currently investigating the IRS for using the data without a warrant to try to track the location of Americans.

“This contract makes clear that the IRS intended to use Venntel’s spying tool to identify specific smartphone users using data collected by apps and sold onwards to shady data brokers. The IRS would have needed a warrant to obtain this kind of sensitive information from AT&T or Google,” Senator Ron Wyden told Motherboard in a statement after reviewing the contract.

[…]

Venntel sources its location data from gaming, weather, and other innocuous-looking apps. An aide to Senator Ron Wyden, whose office has been investigating the location data industry, previously told Motherboard that officials from Customs and Border Protection (CBP), which has also purchased Venntel products, said they believe Venntel also obtains location information from the real-time bidding that occurs when advertisers push their adverts into users’ browsing sessions.

One of the new documents says Venntel sources the location information from its “advertising analytics network and other sources.” Venntel is a subsidiary of advertising firm Gravy Analytics.

The data is “global,” according to a document obtained from CBP.

[…]

Source: IRS Could Search Warrantless Location Database Over 10,000 Times

GM launches OnStar Insurance Services – uses your driving data to calculate insurance rate

Andrew Rose, president of OnStar Insurance Services commented: “OnStar Insurance will promote safety, security and peace of mind. We aim to be an industry leader, offering insurance in an innovative way.

“GM customers who have subscribed to OnStar and connected services will be eligible to receive discounts, while also receiving fully-integrated services from OnStar Insurance Services.”

The service has been developed to improve the experience for policyholders who have an OnStar Safety & Security plan, as Automatic Crash Response has been designed to notify an OnStar Emergency-certified Advisor who can send for help.

The service is currently working with its insurance carrier partners to remove biased insurance plans by focusing on factors within the customer’s control, such as individual vehicle usage, and by rewarding smart driving habits that benefit road safety.

OnStar Insurance Services plans to provide customers with personalised vehicle care and promote safer driving habits, along with a data-backed analysis of driving behaviour.

Source: General Motors launches OnStar Insurance Services – Reinsurance News

What it doesn’t say is whether it could raise premiums or deny coverage entirely, how transparent the reward system will be, or what else GM will be doing with your data.

Australia’s spy agencies caught collecting COVID-19 app data

Australia’s intelligence agencies have been caught “incidentally” collecting data from the country’s COVIDSafe contact-tracing app during the first six months of its launch, a government watchdog has found.

The report, published Monday by the Australian government’s inspector general for the intelligence community, which oversees the government’s spy and eavesdropping agencies, said the app data was scooped up “in the course of the lawful collection of other data.”

But the watchdog said that there was “no evidence” that any agency “decrypted, accessed or used any COVID app data.”

Incidental collection is a common term used by spies to describe data that was not deliberately targeted but was scooped up as part of a wider collection effort. This kind of collection isn’t accidental; it is a consequence of, for example, spy agencies tapping into fiber optic cables, which carry an enormous firehose of data. An Australian government spokesperson told one outlet, which first reported the news, that incidental collection can also happen as a result of the “execution of warrants.”

The report did not say when the incidental collection stopped, but noted that the agencies were “taking active steps to ensure compliance” with the law, and that the data would be “deleted as soon as practicable,” without setting a firm date.

For some, the fear that a government spy agency could access COVID-19 contact-tracing data was the worst possible outcome.

[…]

Source: Australia’s spy agencies caught collecting COVID-19 app data | TechCrunch

Amazon’s ad-hoc Ring, Echo mesh network can mooch off your neighbors’ Wi-Fi if needed – and it’s opt-out

Amazon is close to launching Sidewalk – its ad-hoc wireless network for smart-home devices that taps into people’s Wi-Fi – and it is pretty much an opt-out affair.

The gist of Sidewalk is this: nearby Amazon gadgets, regardless of who owns them, can automatically organize themselves into their own private wireless network mesh, communicating primarily using Bluetooth Low Energy over short distances, and 900MHz LoRa over longer ranges.

At least one device in a mesh will likely be connected to the internet via someone’s Wi-Fi, and so, every gadget in the mesh can reach the ‘net via that bridging device. This means all the gadgets within a mesh can be remotely controlled via an app or digital assistant, either through their owners’ internet-connected Wi-Fi or by going through a suitable bridge in the mesh. If your internet goes down, your Amazon home security gizmo should still be reachable, and send out alerts, via the mesh.

It also means if your neighbor loses broadband connectivity, their devices in the Sidewalk mesh can still work over the ‘net by routing through your Sidewalk bridging device and using your home ISP connection.

[…]

Amazon Echoes, Ring Floodlight Cams, and Ring Spotlight Cams will be the first Sidewalk bridging devices as well as Sidewalk endpoints. The internet giant hopes to encourage third-party manufacturers to produce equipment that is also Sidewalk compatible, extending meshes everywhere.

Crucially, it appears Sidewalk is opt-out for those who already have the hardware, and will be opt-in for those buying new gear.

[…]

If you already have, say, an Amazon Ring, it will soon get a software update that will automatically enable Sidewalk connectivity, and you’ll get an email explaining how to switch that off. When powering up a new gizmo, you’ll at least get the chance to opt in or out.

[…]

We’re told Sidewalk will only sip your internet connection rather than hog it, limiting itself to half a gigabyte a month. This policy appears to live in hope that people aren’t on stingy monthly data caps.

[…]

Just don’t forget that Ring and the police, in the US at least, have a rather cosy relationship. While Amazon stresses that Ring owners are in control of the footage recorded by their camera-fitted doorbells, homeowners are often pressured into turning their equipment into surveillance systems for the cops.

Source: Amazon’s ad-hoc Ring, Echo mesh network can mooch off your neighbors’ Wi-Fi if needed – and it’s opt-out • The Register

The ones who brought you Let’s Encrypt, bring you: Tools for gathering anonymized app usage metrics from netizens

The Internet Security Research Group (ISRG) has a plan to allow companies to collect information about how people are using their products while protecting the privacy of those generating the data.

Today, the California-based non-profit, which operates Let’s Encrypt, introduced Prio Services, a way to gather online product metrics without compromising the personal information of product users.

“Applications such as web browsers, mobile applications, and websites generate metrics,” said Josh Aas, founder and executive director of ISRG, and Tim Geoghegan, site reliability engineer, in an announcement. “Normally they would just send all of the metrics back to the application developer, but with Prio, applications split the metrics into two anonymized and encrypted shares and upload each share to different processors that do not share data with each other.”

Prio is described in a 2017 research paper [PDF] as “a privacy-preserving system for the collection of aggregate statistics.” The system was developed by Henry Corrigan-Gibbs, then a Stanford doctoral student and currently an MIT assistant professor, and Dan Boneh, a professor of computer science and electrical engineering at Stanford.

Prio implements a cryptographic approach called secret-shared non-interactive proofs (SNIPs). According to its creators, it handles data only 5.7x slower than systems with no privacy protection. That’s considerably better than the competition: client-generated non-interactive zero-knowledge proofs of correctness (NIZKs) are 267x slower than unprotected data processing and privacy methods based on succinct non-interactive arguments of knowledge (SNARKs) clock in at three orders of magnitude slower.
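To make the splitting concrete, here is a minimal Python sketch of the additive secret sharing at Prio’s core; it omits the SNIP validity proofs that keep dishonest clients from submitting garbage, and the field modulus and metric values are arbitrary.

```python
import secrets

PRIME = 2**61 - 1  # arbitrary prime modulus for this sketch

def split_metric(value: int) -> tuple[int, int]:
    """Split one metric into two shares; either share alone is uniformly random."""
    share_a = secrets.randbelow(PRIME)
    share_b = (value - share_a) % PRIME
    return share_a, share_b

# Each client splits its metric, sending one share to each processor.
metrics = [3, 7, 0, 5]  # hypothetical per-user values
shares_a, shares_b = zip(*(split_metric(m) for m in metrics))

# Each processor sums only the shares it received, learning nothing individual.
sum_a = sum(shares_a) % PRIME
sum_b = sum(shares_b) % PRIME

# Only the combination of both partial sums reveals the aggregate statistic.
aggregate = (sum_a + sum_b) % PRIME
assert aggregate == sum(metrics)
print(aggregate)  # 15
```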

“With Prio, you can get both: the aggregate statistics needed to improve an application or service and maintain the privacy of the people who are providing that data,” said Boneh in a statement. “This system offers a robust solution to two growing demands in our tech-driven economy.”

In 2018 Mozilla began testing Prio to gather Firefox telemetry data and found the cryptographic scheme compelling enough to make it the basis of its Firefox Origin Telemetry service.

[…]

Source: The ones who brought you Let’s Encrypt, bring you: Tools for gathering anonymized app usage metrics from netizens • The Register

Google Will Make It a bit Easier to Turn Off Smart Features which track you, Slightly Harder for Regulators to Break Up Google

Soon, Google will present you with a clear choice to disable smart features, like Google Assistant reminders to pay your bills and predictive text in Gmail. Whether you like the Gmail mindreader function that autofills “all the best” and “reaching out,” or have long dreaded the arrival of the machine staring back from the void: it’s your world, Google’s just living in it. According to Google.

We’ve always been able to disable these functions if we bothered hunting through account settings. But “in the coming weeks” Google will show a new blanket setting to “turn off smart features” such as Smart Compose and Smart Reply in apps like Gmail; the second half of the same prompt will control whether additional Google products (Maps or Assistant, for example) are allowed to be personalized based on data from Gmail, Meet, and Chat.

Google writes in its blog post about the new-ish settings that humans are not looking at your emails to enable smart features, and Google ads are “not based on your personal data in Gmail,” something CEO Sundar Pichai has likewise said time and again. Google claims to have stopped that practice in 2017, although the following year the Wall Street Journal reported that third-party app developers had freely perused inboxes with little oversight. (When asked whether this is still a problem, the spokesperson pointed us to Google’s 2018 effort to tighten security.)

A Google spokesperson emphasized that the company only uses email contents for security purposes like filtering spam and phishing attempts.

These personalization changes aren’t so much about tightening security as they are another informed-consent defense which Google can use to repel the current regulatory siege being waged against it by lawmakers. It has also expanded incognito mode to Maps, and added auto-deletion of data in Location History, Web & App Activity, and YouTube history (though only after a period of a few months).

Inquiries in the U.S. and EU have found that Google’s privacy settings have historically presented the appearance of privacy, rather than privacy itself. After a 2018 AP article exposed the extent of Google’s location data harvesting, an investigation found that turning location off in Android was no guarantee that Google wouldn’t collect location data (though Google has denied this). Plaintiffs in a $5 billion class-action lawsuit filed this summer alleged that “incognito mode” in Chrome didn’t prevent Google from capturing and sharing their browsing history. And last year, French regulators fined Google nearly $57 million for violating the General Data Protection Regulation (GDPR) by allegedly burying privacy controls beneath five or six layers of settings. (When asked, the spokesperson said Google has no additional comment on these cases.)

So this is nice, and also Google’s announcement reads as a letter to regulators. “This new setting is designed to reduce the work of understanding and managing [a choice over how data is processed], in view of what we’ve learned from user experience research and regulators’ emphasis on comprehensible, actionable user choices over data.”

Source: Google Will Make It Easier to Turn Off Smart Features

Apple hits back at European activist complaints against unauthorised tracking installs – says it doesn’t use it… but 3rd parties do

The group, led by campaigner Max Schrems, filed complaints with data protection watchdogs in Germany and Spain alleging that the tracking tool illegally enabled the $2 trillion U.S. tech giant to store users’ data without their consent.

Apple directly rebutted the claims filed by Noyb, the digital rights group founded by Schrems, saying they were “factually inaccurate and we look forward to making that clear to privacy regulators should they examine the complaint”.

Schrems is a prominent figure in Europe’s digital rights movement that has resisted intrusive data-gathering by Silicon Valley’s tech platforms. He has fought two cases against Facebook, winning landmark judgments that forced the social network to change how it handles user data.

Noyb’s complaints were brought against Apple’s use of a tracking code, known as the Identifier for Advertisers (IDFA), that is automatically generated on every iPhone when it is set up.

The code, stored on the device, makes it possible to track a user’s online behaviour and consumption preferences – vital in allowing companies to send targeted adverts.

“Apple places codes that are comparable to a cookie in its phones without any consent by the user. This is a clear breach of European Union privacy laws,” Noyb lawyer Stefano Rossetti said.

Rossetti referred to the EU’s e-Privacy Directive, which requires a user’s consent before the installation and use of such information.

Apple said in response that it “does not access or use the IDFA on a user’s device for any purpose”.

It said its aim was to protect the privacy of its users and that the latest release of its iOS 14 operating system gave users greater control over whether apps could link with third parties for the purposes of targeted advertising.

Source: Apple hits back at European activist complaints against tracking tool | Reuters

The complaint against Apple is that the IDFA is set at all without consent from the user. The point is not whether Apple accesses it; the point is that unspecified 3rd parties (advertisers, hackers, governments, etc) can.

How the U.S. Military Buys Location Data from Ordinary Apps

The U.S. military is buying the granular movement data of people around the world, harvested from innocuous-seeming apps, Motherboard has learned. The most popular app among a group Motherboard analyzed connected to this sort of data sale is a Muslim prayer and Quran app that has more than 98 million downloads worldwide. Others include a Muslim dating app, a popular Craigslist app, an app for following storms, and a “level” app that can be used to help, for example, install shelves in a bedroom.

Through public records, interviews with developers, and technical analysis, Motherboard uncovered two separate, parallel data streams that the U.S. military uses, or has used, to obtain location data. One relies on a company called Babel Street, which creates a product called Locate X. U.S. Special Operations Command (USSOCOM), a branch of the military tasked with counterterrorism, counterinsurgency, and special reconnaissance, bought access to Locate X to assist on overseas special forces operations. The other stream is through a company called X-Mode, which obtains location data directly from apps, then sells that data to contractors, and by extension, the military.

The news highlights the opaque location data industry and the fact that the U.S. military, which has infamously used other location data to target drone strikes, is purchasing access to sensitive data. Many of the users of apps involved in the data supply chain are Muslim, which is notable considering that the United States has waged a decades-long war on predominantly Muslim terror groups in the Middle East, and has killed hundreds of thousands of civilians during its military operations in Pakistan, Afghanistan, and Iraq. Motherboard does not know of any specific operations in which this type of app-based location data has been used by the U.S. military.

[…]

In March, tech publication Protocol first reported that U.S. law enforcement agencies such as Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE) were using Locate X. Motherboard then obtained an internal Secret Service document confirming the agency’s use of the technology. Some government agencies, including CBP and the Internal Revenue Service (IRS), have also purchased access to location data from another vendor called Venntel.

“In my opinion, it is practically certain that foreign entities will try to leverage (and are almost certainly actively exploiting) similar sources of private platform user data. I think it would be naïve to assume otherwise,” Mark Tallman, assistant professor at the Department of Emergency Management and Homeland Security at the Massachusetts Maritime Academy, told Motherboard in an email.

THE SUPPLY CHAIN

Some companies obtain app location data through bidstream data, which is information gathered from the real-time bidding that occurs when advertisers pay to insert their adverts into peoples’ browsing sessions. Firms also often acquire the data from software development kits (SDKs).

[…]

In a recent interview with CNN, X-Mode CEO Joshua Anton said the company tracks 25 million devices inside the United States every month, and 40 million elsewhere, including in the European Union, Latin America, and the Asia-Pacific region. X-Mode previously told Motherboard that its SDK is embedded in around 400 apps.

In October the Australian Competition & Consumer Commission published a report about data transfers by smartphone apps. A section of that report included the endpoint—the URL some apps use—to send location data back to X-Mode. Developers of the Guardian app, which is designed to protect users from the transfer of location data, also published the endpoint. Motherboard then used that endpoint to discover which specific apps were sending location data to the broker.

Motherboard used network analysis software to observe both the Android and iOS versions of the Muslim Pro app sending granular location data to the X-Mode endpoint multiple times. Will Strafach, an iOS researcher and founder of Guardian, said he also saw the iOS version of Muslim Pro sending location data to X-Mode.

The data transfer also included the name of the wifi network the phone was currently connected to, a timestamp, and information about the phone such as its model, according to Motherboard’s tests.

[…]


Source: How the U.S. Military Buys Location Data from Ordinary Apps

Your Computer isn’t Yours – Apple edition – how it is snooping on you, and why you can’t start apps when their server is down

It’s here. It happened. Did you notice?

I’m speaking, of course, of the world that Richard Stallman predicted in 1997. The one Cory Doctorow also warned us about.

On modern versions of macOS, you simply can’t power on your computer, launch a text editor or eBook reader, and write or read, without a log of your activity being transmitted and stored.

It turns out that in the current version of the macOS, the OS sends to Apple a hash (unique identifier) of each and every program you run, when you run it. Lots of people didn’t realize this, because it’s silent and invisible and it fails instantly and gracefully when you’re offline, but today the server got really slow and it didn’t hit the fail-fast code path, and everyone’s apps failed to open if they were connected to the internet.

Because it does this using the internet, the server sees your IP, of course, and knows what time the request came in. An IP address allows for coarse, city-level and ISP-level geolocation, and allows for a table that has the following headings:

Date, Time, Computer, ISP, City, State, Application Hash

Apple (or anyone else) can, of course, calculate these hashes for common programs: everything in the App Store, the Creative Cloud, Tor Browser, cracking or reverse engineering tools, whatever.
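To illustrate how cheap that matching is, a rough Python sketch: anyone holding such logs can precompute digests of widely distributed binaries and look logged hashes up in a table. The path and table entries below are placeholders, not real values.

```python
import hashlib
import sys

def file_hash(path: str) -> str:
    """SHA-256 of a binary, streamed so large apps don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder digests standing in for precomputed hashes of common apps.
KNOWN_APPS = {
    "3f2a…": "Tor Browser",
    "9b41…": "Adobe Premiere",
}

if __name__ == "__main__":
    digest = file_hash(sys.argv[1])  # e.g. a Mach-O executable
    print(digest, KNOWN_APPS.get(digest, "unknown application"))
```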

This means that Apple knows when you’re at home. When you’re at work. What apps you open there, and how often. They know when you open Premiere over at a friend’s house on their Wi-Fi, and they know when you open Tor Browser in a hotel on a trip to another city.

“Who cares?” I hear you asking.

Well, it’s not just Apple. This information doesn’t stay with them:

  1. These OCSP (Online Certificate Status Protocol) requests are transmitted unencrypted. Everyone who can see the network can see these, including your ISP and anyone who has tapped their cables.
  2. These requests go to a third-party CDN run by another company, Akamai.
  3. Since October of 2012, Apple has been a partner in the US military intelligence community’s PRISM spying program, which grants the US federal police and military unfettered access to this data without a warrant, any time they ask for it. In the first half of 2019 they did this over 18,000 times, and another 17,500+ times in the second half of 2019.

This amounts to a tremendous trove of data about your life and habits, and allows someone possessing all of it to identify your movement and activity patterns. For some people, it can even pose a physical danger.

Now, it’s been possible up until today to block this sort of stuff on your Mac using a program called Little Snitch (really, the only thing keeping me using macOS at this point). In the default configuration, it blanket allows all of this computer-to-Apple communication, but you can disable those default rules and go on to approve or deny each of these connections, and your computer will continue to work fine without snitching on you to Apple.

The version of macOS that was released today, 11.0, also known as Big Sur, has new APIs that prevent Little Snitch from working the same way. The new APIs don’t permit Little Snitch to inspect or block any OS level processes. Additionally, the new rules in macOS 11 even hobble VPNs so that Apple apps will simply bypass them.

Mozilla’s *privacy not included tech buyers’ guide rates gadgets on a creepy scale

This is a list of 130 smart home gadgets, fitness trackers, toys and more, rated for their privacy & security. It’s a large list and shows you how basically anything by big tech is pretty creepy – anything by Amazon and Facebook is super creepy, Google pretty creepy, Apple only creepy. There are a few surprises, like Moleskine being super creepy. Fitness machinery is pretty bad, as are some coffee makers… Nintendo Switches and PS5s (surprisingly) aren’t creepy at all…

Source: Mozilla – *privacy not included

Six Reasons Why Google Maps Is the Creepiest App On Your Phone

VICE has highlighted six reasons why Google Maps is the creepiest app on your phone. An anonymous reader shares an excerpt from the report: 1. Google Maps Wants Your Search History: Google’s “Web & App Activity” settings describe how the company collects data, such as user location, to create a faster and “more personalized” experience. In plain English, this means that every single place you’ve looked up in the app — whether it’s a strip club, a kebab shop or your moped-riding drug dealer’s location — is saved and integrated into Google’s search engine algorithm for a period of 18 months. Google knows you probably find this creepy. That’s why the company uses so-called “dark patterns” — user interfaces crafted to coax us into choosing options we might not otherwise, for example by highlighting an option with certain fonts or brighter colors.

2. Google Maps Limits Its Features If You Don’t Share Your Search History: If you open your Google Maps app, you’ll see a circle in the top right corner that signifies you’re logged in with your Google account. That’s not necessary, and you can simply log out. Of course, the log out button is slightly hidden, but can be found like this: click on the circle > Settings > scroll down > Log out of Google Maps. Unfortunately, Google Maps won’t let you save frequently visited places if you’re not logged into your Google account. If you choose not to log in, when you click on the search bar you get a “Tired of typing?” button, suggesting you sign in, and coaxing you towards more data collection.

3. Google Maps Can Snitch On You: Another problematic feature is the “Google Maps Timeline,” which “shows an estimate of places you may have been and routes you may have taken based on your Location History.” With this feature, you can look at your personal travel routes on Google Maps, including the means of transport you probably used, such as a car or a bike. The obvious downside is that your every move is known to Google, and to anyone with access to your account. And that’s not just hackers — Google may also share data with government agencies such as the police. […] If your “Location History” is on, your phone “saves where you go with your devices, even when you aren’t using a specific Google service,” as is explained in more detail on this page. This feature is useful if you lose your phone, but also turns it into a bonafide tracking device.

4. Google Maps Wants to Know Your Habits: Google Maps often asks users to share a quick public rating. “How was Berlin Burger? Help others know what to expect,” suggests the app after you’ve picked up your dinner. This feels like a casual, lighthearted question and relies on the positive feeling we get when we help others. But all this info is collected in your Google profile, making it easier for someone to figure out if you’re visiting a place briefly and occasionally (like on holiday) or if you live nearby.

5. Google Maps Doesn’t Like It When You’re Offline: Remember GPS navigation? It might have been clunky and slow, but it’s a good reminder that you don’t need to be connected to the internet to be directed. In fact, other apps offer offline navigation. On Google, you can download maps, but offline navigation is only available for cars. It seems fairly unlikely the tech giant can’t figure out how to direct pedestrians and cyclists without internet.

6. Google Makes It Seem Like This Is All for Your Own Good: “Providing useful, meaningful experiences is at the core of what Google does,” the company says on its website, adding that knowing your location is important for this reason. They say they use this data for all kinds of useful things, like “security” and “language settings” — and, of course, selling ads. Google also sells advertisers the possibility to evaluate how well their campaigns reached their target (that’s you!) and how often people visited their physical shops “in an anonymized and aggregated manner”. But only if you opt in (or you forget to opt out).

Source: Six Reasons Why Google Maps Is the Creepiest App On Your Phone – Slashdot

It Took Just 5 Minutes Of Movement Data To Identify ‘Anonymous’ VR Users

As companies and governments increasingly hoover up our personal data, a common refrain to keep people from worrying is the claim that nothing can go wrong because the data itself is “anonymized” — or stripped of personal identifiers like social security numbers. But time and time again, studies have shown how this really is cold comfort, given it takes only a little effort to pretty quickly identify a person based on access to other data sets. Yet most companies, many privacy policy folk, and even government officials still like to act as if “anonymizing” your data means something.

The latest case in point: new research out of Stanford (first spotted by the German website Mixed) found that it took researchers just five minutes of examining the movement data of VR users to identify them in the real world. The paper says participants using an HTC Vive headset and controllers watched five 20-second clips from a randomized set of 360-degree videos, then answered a set of questions in VR, with their movements tracked as part of a separate research paper.

The movement data (including height, posture, head movement speed, and what participants looked at and for how long) was then plugged into three machine learning algorithms, which, from a pool of 511 participants, correctly identified 95% of users “when trained on less than 5 min of tracking data per person.” The researchers went on to note that while VR headset makers (like every other company) assure users that “de-identified” or “anonymized” data would protect their identities, that’s really not the case:

“In both the privacy policy of Oculus and HTC, makers of two of the most popular VR headsets in 2020, the companies are permitted to share any de-identified data,” the paper notes. “If the tracking data is shared according to rules for de-identified data, then regardless of what is promised in principle, in practice taking one’s name off a dataset accomplishes very little.”
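As a toy illustration of the pipeline (not the Stanford team’s code), the Python sketch below trains an off-the-shelf classifier on synthetic stand-ins for per-user movement features and re-identifies users from held-out samples; with real VR telemetry the features would be height, posture, head speed, and gaze targets and durations.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_users, clips_per_user, n_features = 50, 20, 8  # synthetic, far smaller than the study

# Each user has a characteristic movement profile; each clip is a noisy sample of it.
profiles = rng.normal(size=(n_users, n_features))
X = np.repeat(profiles, clips_per_user, axis=0) + 0.3 * rng.normal(
    size=(n_users * clips_per_user, n_features))
y = np.repeat(np.arange(n_users), clips_per_user)  # the "identity" labels

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("re-identification accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```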

If you don’t like this study, there’s just an absolute ocean of research over the last decade making the same point: “anonymized” or “de-identified” doesn’t actually mean “anonymous.” Researchers from the University of Washington and the University of California, San Diego, for example, found that they could identify drivers based on just 15 minutes’ worth of data collected from brake pedal usage alone. Researchers from Stanford and Princeton universities found that they could correctly identify an “anonymized” user 70% of the time just by comparing their browsing data to their social media activity.

[…]

Source: It Took Just 5 Minutes Of Movement Data To Identify ‘Anonymous’ VR Users | Techdirt

Police Will Pilot a Program to Live-Stream Amazon Ring Cameras

This is not a drill. Red alert: The police surveillance center in Jackson, Mississippi, will be conducting a 45-day pilot program to live stream the Amazon Ring cameras of participating residents.

Since Ring first made a splash in the private security camera market, we’ve been warning of its potential to undermine the civil liberties of its users and their communities. We’ve been especially concerned with Ring’s 1,000+ partnerships with local police departments, which facilitate bulk footage requests directly from users without oversight or having to acquire a warrant.

While people buy Ring cameras and put them on their front door to keep their packages safe, police use them to build comprehensive CCTV camera networks blanketing whole neighborhoods. This serves two police purposes. First, it allows police departments to avoid the cost of buying surveillance equipment and to put that burden onto consumers by convincing them they need cameras to keep their property safe. Second, it evades the natural reaction of fear and distrust that many people would have if they learned police were putting up dozens of cameras on their block, one for every house.

Now, our worst fears have been confirmed. Police in Jackson, Mississippi, have started a pilot program that would allow Ring owners to patch the camera streams from their front doors directly to a police Real Time Crime Center. The footage from your front door includes you coming and going from your house, your neighbors taking out the trash, and the dog walkers and delivery people who do their jobs in your street. In Jackson, this footage can now be live streamed directly onto a dozen monitors scrutinized by police around the clock. Even if you refuse to allow your footage to be used that way, your neighbor’s camera pointed at your house may still be transmitting directly to the police.

[…]

Source: Police Will Pilot a Program to Live-Stream Amazon Ring Cameras | Electronic Frontier Foundation

Brave browser first to nix CNAME deception, the sneaky DNS trick used by marketers to duck privacy controls

The Brave web browser will soon block CNAME cloaking, a technique used by online marketers to defy privacy controls designed to prevent the use of third-party cookies.

The browser security model makes a distinction between first-party domains – those being visited – and third-party domains – from the suppliers of things like image assets or tracking code, to the visited site. Many of the online privacy abuses over the years have come from third-party resources like scripts and cookies, which is why third-party cookies are now blocked by default in Brave, Firefox, Safari, and Tor Browser.

Microsoft Edge, meanwhile, has a tiered scheme that defaults to a “Balanced” setting, which blocks some third-party cookies. Google Chrome has implemented its SameSite cookie scheme as a prelude to its planned 2022 phase-out of third-party cookies, maybe.

While Google tries to win support for its various Privacy Sandbox proposals, which aim to provide marketers with ostensibly privacy-preserving alternatives to increasingly shunned third-party cookies, marketers have been relying on CNAME shenanigans to pass their third-party trackers off as first-party resources.

The developers behind open-source content blocking extension uBlock Origin implemented a defense against CNAME-based tracking in November and now Brave has done so as well.

CNAME by name, cookie by nature

In a blog post on Tuesday, Anton Lazarev, research engineer at Brave Software, and senior privacy researcher Peter Snyder, explain that online tracking scripts may use canonical name DNS records, known as CNAMEs, to make associated third-party tracking domains look like they’re part of the first-party websites actually being visited.

They point to the site https://mathon.fr as an example, noting that without CNAME uncloaking, Brave blocks six requests for tracking scripts served by ad companies like Google, Facebook, Criteo, Sirdan, and Trustpilot.

But the page also makes four requests via a script hosted at a randomized path under the first-party subdomain 16ao.mathon.fr.

“Inspection outside of the browser reveals that 16ao.mathon.fr actually has a canonical name of et5.eulerian.net, meaning it’s a third-party script served by Eulerian,” observe Lazarev and Snyder.
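You can check for this kind of cloaking yourself with a single DNS lookup. Below is a small Python sketch using the dnspython package; note the mathon.fr records may have changed since Brave’s post.

```python
from typing import Optional

import dns.resolver  # pip install dnspython

def uncloak(hostname: str) -> Optional[str]:
    """Return the canonical name a first-party-looking host points at, if any."""
    try:
        answers = dns.resolver.resolve(hostname, "CNAME")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return None
    return str(answers[0].target).rstrip(".")

target = uncloak("16ao.mathon.fr")
if target and not target.endswith("mathon.fr"):
    print("cloaked third party:", target)  # e.g. et5.eulerian.net
```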

When Brave 1.17 ships next month (currently available as a developer build), it will be able to uncloak the CNAME deception and block the Eulerian script.

Other browser vendors are planning related defenses. Mozilla has been working on a fix in Firefox since last November. And in August, Apple’s Safari WebKit team proposed a way to prevent CNAME cloaking from being used to bypass the seven-day cookie lifetime imposed by WebKit’s Intelligent Tracking Protection system.

Source: Brave browser first to nix CNAME deception, the sneaky DNS trick used by marketers to duck privacy controls • The Register

When you tell Chrome to wipe private data about you, it spares two websites from the purge: Google.com, YouTube

Google exempts its own websites from Chrome’s automatic data-scrubbing feature, allowing the ads giant to potentially track you even when you’ve told it not to.

Programmer Jeff Johnson noticed the unusual behavior, and this month documented the issue with screenshots. In his assessment of the situation, he noted that if you set up Chrome, on desktop at least, to automatically delete all cookies and so-called site data when you quit the browser, it deletes it all as expected – except your site data for Google.com and YouTube.com.

While cookies are typically used to identify you and store some of your online preferences when visiting websites, site data is on another level: it includes, among other things, a storage database in which a site can store personal information about you, on your computer, that can be accessed again by the site the next time you visit. Thus, while your Google and YouTube cookies may be wiped by Chrome, their site data remains on your computer, and it could, in future, be used to identify you.

Johnson noted that after he configured Chrome to wipe all cookies and site data when the application closed, everything was cleared as expected for sites like apple.com. Yet, the main Google search site and video service YouTube were allowed to keep their site data, though the cookies were gone. If Google chooses at some point to stash the equivalent of your Google cookies in the Google.com site data storage, they could be retrieved next time you visit Google, and identify you, even though you thought you’d told Chrome not to let that happen.

Ultimately, it potentially allows Google, and only Google, to continue tracking Chrome users who opted for some more privacy; something that is enormously valuable to the internet goliath in delivering ads. Many users set Chrome to automatically delete cookies-and-site-data on exit for that reason – to prevent being stalked around the web – even though it often requires them to log back into websites the next time they visit due to their per-session cookies being wiped.

Yet Google appears to have granted itself an exception. The situation recalls a similar issue over location tracking, where Google continued to track people’s location through their apps even when users actively selected the option to prevent that. Google had put the real option to stop location tracking under a different setting that didn’t even include the word “location.”

In this case, “Clear cookies and site data when you quit Chrome” doesn’t actually mean what it says, at least not for Google.

There is a workaround: you can manually add “Google.com” and “YouTube.com” within the browser to a list of “Sites that can never use cookies.” In that case, no information, not even site data, is saved from those sites, which is all in all a little confusing.

[…]


Source: When you tell Chrome to wipe private data about you, it spares two websites from the purge: Google.com, YouTube • The Register

Thought the FBI were the only ones able to unlock encrypted phones? Pretty much every US cop can get the job done – and does

Never mind the Feds. American police forces routinely “circumvent most security features” in smartphones to extract mountains of personal information, according to a report that details the massive, ubiquitous cracking of devices by cops.

Two years of public records requests by Upturn, a Washington DC non-profit, has revealed that every one of the United States’ largest 50 police departments, as well as half of the largest sheriff’s offices and two-thirds of the largest prosecuting attorney’s offices, regularly use specialist hardware and software to access the contents of suspects’ handhelds. There isn’t a state in the Union that hasn’t got some advanced phone-cracking capabilities.

The report concludes that, far from modern phones being a bastion of privacy and security, they are in fact routinely rifled through for trivial crimes without a warrant in sight. In one case, the cops confiscated and searched the phones of two men who were caught arguing over a $70 debt in a McDonald’s.

In another, officers witnessed “suspicious behavior” in a Whole Foods grocery store parking lot and claimed to have smelt “the odor of marijuana” coming from a car. The car was stopped and searched, and the driver’s phone was seized and searched for “further evidence of the nature of the suspected controlled substance exchange.”

A third example given saw police officers shoot and kill a man after he “ran from the driver’s side of the vehicle” during a traffic stop. They apparently discovered a small orange prescription pill container next to the victim, and tested the pills, which contained acetaminophen and fentanyl. They also discovered a phone in the empty car, and searched it for evidence related to “counterfeit Oxycodone” and “evidence relating to… motives for fleeing from the police.”

The report gives numerous other examples of phones taken from their owners and searched for evidence without a warrant – many in cases where the value of the information was negligible, such as those involving graffiti, shoplifting, marijuana possession, prostitution, vandalism, car crashes, parole violations, petty theft, and public intoxication.

Not what you imagined

That is a completely different picture to the one, we imagine, most Americans assumed, particularly given the high legal protections afforded smartphones in recent high-profile court cases.

In 2018, the Supreme Court ruled that the government needs a warrant to access its citizens’ cellphone location data and talked extensively about a citizen’s expectation of privacy limiting “official intrusion” when it comes to smartphones.

In 2014, the court decided a warrant was required to search a mobile phone; the 2018 ruling added that the “reasonable expectation of privacy” people have in their “physical movements” should extend to records stored by third parties. But the reality on the ground is that those grand words mean nothing if the cops decide they want to look through your phone.

The report was based on reports from 44 law enforcement agencies across the US and covered 50,000 extractions of data from cellphones between 2015 and 2019, a figure that Upturn notes “represents a severe undercount” of the actual number of cellphone extractions.

[…]

They include banning the use of “consent searches”, where the police ask the owner if they can search their phone and then require no further approval to go through the device. “Courts pretend that ‘consent searches’ are voluntary, when they are effectively coerced,” the report argues, noting that most people are probably unaware that, by agreeing to one, they can have their phone’s entire contents downloaded and perused at will later on.

It also reckons that the “plain view” argument – the legal doctrine that allows police to search a phone because an officer can see the device at the scene of a crime – is untenable: people carry their phones with them as a rule, and only the device itself is in view, not its contents.

The report also argues for more extensive audit logs of phone searches so there is a degree of accountability, particularly if evidence turned up is later used in court. And it argues for better and clearer data deletion rules, as well as more reporting requirements around phone searches by law enforcement.

It concludes: “For too long, public debate and discussion regarding these tools has been abstracted to the rarest and most sensational cases in which law enforcement cannot gain access to cellphone data. We hope that this report will help recenter the conversation regarding law enforcement’s use of mobile device forensic tools to the on-the-ground reality of cellphone searches today in the United States.”

Source: Thought the FBI were the only ones able to unlock encrypted phones? Pretty much every US cop can get the job done • The Register

UK test and trace data can be handed to police, reveals memorandum – that mission crept quickly

As if things were not going badly enough for the UK’s COVID-19 test and trace service, it now seems police will be able to access some test data, prompting fear that the disclosure could deter people who should have tests from coming forward.

As revealed in the Health Service Journal (paywalled), Department for Health and Social Care (DHSC) guidance describing how testing data will be handled was updated on Friday.

The memorandum of understanding between DHSC and National Police Chiefs’ Council said forces could be allowed to access test information that tells them if a “specific individual” has been told to self-isolate.

A failure to self-isolate after getting a positive COVID-19 test, or after being in contact with someone who has tested positive, could result in a police fine of £1,000, or even a £10,000 penalty for serial offenders or those seriously breaking the rules.

A Department of Health and Social Care spokesperson said: “It is a legal requirement for people who have tested positive for COVID-19 and their close contacts to self-isolate when formally notified to do so.

“The DHSC has agreed a memorandum of understanding with the National Police Chiefs Council to enable police forces to have access on a case-by-case basis to information that enables them to know if a specific individual has been notified to self-isolate.

[…]

The UK government’s emphasis should be on providing support to people – financial and otherwise – if they need to self-isolate, so that no one is deterred from coming forward for a test, the BMA spokesperson added.

The UK’s test and trace system, backed by £12bn in public money, was outsourced to Serco for £45m in June. Sitel is also a provider.

The service has had a bumpy ride to say the least. Earlier this month, it came to light that as many as 48,000 people were not informed they had come into close contact with people who had tested positive, as the service under-reported 15,841 novel coronavirus cases between 25 September and 2 October.

At the heart of the problem was the use of Microsoft’s Excel spreadsheet program to transfer test results from labs to the health service for totalling up. A plausible explanation emerged: test results were automatically fetched in CSV format by PHE from various commercial testing labs and stored, row by row, in the older .XLS Excel format, which caps each spreadsheet at 65,536 rows, rather than the 1,048,576-row limit offered by the modern .XLSX file format.
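
To make the failure mode concrete, here is a minimal Python sketch, assuming (as reported) results arrive as CSV rows that are then copied into a capped legacy sheet; the file name and function are hypothetical, not PHE’s actual pipeline.

```python
# Illustrative only, not PHE's pipeline: a hard per-sheet row cap
# silently discards every CSV row beyond the limit.
import csv

XLS_MAX_ROWS = 65_536        # legacy .xls (BIFF) per-sheet row limit
XLSX_MAX_ROWS = 1_048_576    # modern .xlsx per-sheet row limit

def ingest(csv_path, row_cap=XLS_MAX_ROWS):
    """Return (kept, dropped): rows that fit the sheet vs. rows lost."""
    with open(csv_path, newline="") as f:
        rows = list(csv.reader(f))
    return rows[:row_cap], rows[row_cap:]

# A lab file with 80,000 result rows would silently lose 14,464 of them:
# kept, dropped = ingest("lab_results.csv")
# len(dropped)  # -> 14464
```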

But that was not the only misstep. It has also emerged that people in line for a coronavirus test were sent to a site in Sevenoaks, Kent, where, in fact, no test centre existed, according to reports.

Source: UK test and trace data can be handed to police, reveals memorandum • The Register

Remember when Zoom was rumbled for lousy crypto? Six months later it says end-to-end is ready – but it’s not

The world’s plague-time video meeting tool of choice, Zoom, says it’s figured out how to do end-to-end encryption sufficiently well to offer users a tech preview.

News of the trial comes after April 2020 awkwardness that followed the revelation that Zoom was fibbing about its service using end-to-end encryption.

As we reported at the time, Zoom ‘fessed up but brushed aside criticism with a semantic argument about what “end-to-end” means.

“When we use the phrase ‘End-to-end’ in our other literature, it is in reference to the connection being encrypted from Zoom end point to Zoom end point,” the company said. The commonly accepted definition of end-to-end encryption requires even the host of a service to be unable to access the content of a communication. As we explained at the time, Zoom’s use of TLS and HTTPS meant it could intercept and decrypt video chats.

Come May, Zoom quickly acquired secure messaging outfit Keybase to give it the chops to build proper crypto.

Now Zoom reckons it has cracked the problem.

A Wednesday post revealed: “starting next week, Zoom’s end-to-end encryption (E2EE) offering will be available as a technical preview, which means we’re proactively soliciting feedback from users for the first 30 days.”

Sharp-eyed Reg readers have doubtless noticed that Zoom has referred to “E2EE”, not just the “E2E” contraction of “end-to-end”.

What’s up with that? The company has offered the following explanation:

“Zoom’s E2EE uses the same powerful GCM encryption you get now in a Zoom meeting. The only difference is where those encryption keys live. In typical meetings, Zoom’s cloud generates encryption keys and distributes them to meeting participants using Zoom apps as they join. With Zoom’s E2EE, the meeting’s host generates encryption keys and uses public key cryptography to distribute these keys to the other meeting participants. Zoom’s servers become oblivious relays and never see the encryption keys required to decrypt the meeting contents.”
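
In outline, that pattern looks something like the sketch below. To be clear, this is not Zoom’s actual protocol: RSA-OAEP here stands in for whichever public-key scheme Zoom uses, and the names are ours. The point is simply that the per-meeting key is generated by the host, and the relay only ever handles wrapped copies of it.

```python
# A minimal sketch of host-generated meeting keys, NOT Zoom's protocol.
# RSA-OAEP stands in for whatever public-key scheme Zoom actually uses.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Each participant holds a keypair; only the public half ever transits
# the server on its way to the meeting host.
participant = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Host side: generate a fresh per-meeting AES-256-GCM key...
meeting_key = AESGCM.generate_key(bit_length=256)

# ...and wrap it with each participant's public key. The server relays
# this blob but cannot recover the key inside it.
wrapped_key = participant.public_key().encrypt(meeting_key, OAEP)

# Participant side: unwrap the meeting key and use it for media frames.
recovered = participant.decrypt(wrapped_key, OAEP)
nonce = os.urandom(12)
frame = AESGCM(recovered).encrypt(nonce, b"a video frame", None)
assert AESGCM(meeting_key).decrypt(nonce, frame, None) == b"a video frame"
```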

Don’t go thinking the preview means Zoom has squared away security, because the company says: “To use it, customers must enable E2EE meetings at the account level and opt-in to E2EE on a per-meeting basis.”

With users having to be constantly reminded to use non-rubbish passwords, not to click on phish or leak business data on personal devices, they’ll almost certainly choose E2EE every time without ever having to be prompted, right?

Source: Remember when Zoom was rumbled for lousy crypto? Six months later it says end-to-end is ready • The Register

Five Eyes governments, India, and Japan make new call for encryption backdoors – insist that democracy is an insecure police state

Members of the intelligence-sharing alliance Five Eyes, along with government representatives for Japan and India, have published a statement over the weekend calling on tech companies to come up with a solution for law enforcement to access end-to-end encrypted communications.

The statement is the alliance’s latest effort to get tech companies to agree to encryption backdoors.

The Five Eyes alliance, made up of the US, the UK, Canada, Australia, and New Zealand, made similar calls to tech giants in 2018 and 2019.

Just like before, government officials claim tech companies have put themselves in a corner by incorporating end-to-end encryption (E2EE) into their products.

If properly implemented, E2EE lets users have secure conversations, be they chat, audio, or video, without sharing the encryption key with the tech companies.

Representatives from the seven governments argue that the way E2EE is currently supported on today’s major tech platforms prevents not only law enforcement from investigating crime rings, but also the tech platforms themselves from enforcing their own terms of service.

Signatories argue that “particular implementations of encryption technology” are currently posing challenges to law enforcement investigations, as the tech platforms themselves can’t access some communications and provide needed data to investigators.

This, in turn, provides a safe haven for criminal activity and endangers the safety of “highly vulnerable members of our societies like sexually exploited children”, officials argued.

Source: Five Eyes governments, India, and Japan make new call for encryption backdoors | ZDNet

Let’s be clear here:

  1. There is no way for a backdoored system to be secure. This means you give access not only to government police services, secret services, Stasi and thought police, who can persecute you for being Jewish or thinking the “wrong way” (e.g. being homosexual or communist), but also to criminal networks, scam artists, discontented exes and foreign governments, who get free rein to run around your private content.
  2. You have a right to privacy and you need it. It is fundamental to being able to think creatively, and that is the only way in which societies advance. If thought is policed by some arbitrary standard, then the deviations that lead to change will be suppressed. Stasis leads to economic collapse among other things, even if those at the top keep collecting more and more wealth for themselves.
  3. We as a society cannot “win” or become “better” by emulating the societies we are competing against, which represent values and behaviours we disagree with. Becoming a police state does not protect us from other police states.

Google is giving data to police based on search keywords: IPs of everyone who searched a certain thing. No suspect-specific warrant required.

There are few things as revealing as a person’s search history, and police typically need a warrant on a known suspect to demand that sensitive information. But a recently unsealed court document shows that investigators can request such data in reverse, by asking Google to disclose everyone who searched for a keyword rather than requesting information on a known suspect.

In August, police arrested Michael Williams, an associate of singer and accused sex offender R. Kelly, for allegedly setting fire to a witness’ car in Florida. Investigators linked Williams to the arson, as well as witness tampering, after sending a search warrant to Google that requested information on “users who had searched the address of the residence close in time to the arson.”

The July court filing was unsealed on Tuesday. Detroit News reporter Robert Snell tweeted about the filing after it was unsealed.

Court documents showed that Google provided the IP addresses of people who searched for the arson victim’s address, which investigators tied to a phone number belonging to Williams. Police then used the phone number records to pinpoint the location of Williams’ device near the arson, according to court documents.

The original warrant sent to Google is still sealed, but the report provides another example of a growing trend of data requests to the search engine giant in which investigators demand data on a large group of users rather than a specific request on a single suspect.

“This ‘keyword warrant’ evades the Fourth Amendment checks on police surveillance,” said Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project. “When a court authorizes a data dump of every person who searched for a specific term or address, it’s likely unconstitutional.”

The keyword warrants are similar to geofence warrants, in which police request data from Google on all devices logged in at a specific area and time. Google received 15 times more geofence warrant requests in 2018 than in 2017, and five times more in 2019 than in 2018. The rise in reverse requests from police has troubled Google staffers, according to internal emails.
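
The mechanics of such a reverse request are easy to picture. The sketch below is hypothetical, not Google’s systems or schema: it filters an entire location log for every device seen inside a bounding box during a time window, instead of looking up one named suspect.

```python
# Hypothetical sketch of a geofence-style query; the log format and
# function are ours, not Google's.
from datetime import datetime

def geofence_hits(log, lat_range, lon_range, start, end):
    """log: iterable of (device_id, iso_timestamp, lat, lon) tuples.
    Returns the set of device IDs seen in the box during the window."""
    return {
        device_id
        for device_id, ts, lat, lon in log
        if lat_range[0] <= lat <= lat_range[1]
        and lon_range[0] <= lon <= lon_range[1]
        and start <= datetime.fromisoformat(ts) <= end
    }

# e.g. every device near some address over a four-hour window:
# hits = geofence_hits(log, (28.053, 28.055), (-81.951, -81.949),
#                      datetime(2020, 8, 1, 22, 0),
#                      datetime(2020, 8, 2, 2, 0))
```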

[…]

Source: Google is giving data to police based on search keywords, court docs show – CNET

Europe’s top court confirms no mass surveillance without limits

Europe’s top court has delivered another slap-down to indiscriminate government mass surveillance regimes.

In a ruling today the CJEU has made it clear that national security concerns do not exclude EU Member States from the need to comply with general principles of EU law such as proportionality and respect for fundamental rights to privacy, data protection and freedom of expression.

However the court has also allowed for derogations, saying that a pressing national security threat can justify limited and temporary bulk data collection and retention — capped to ‘what is strictly necessary’.

Threats to public security or the need to combat serious crime may also allow for targeted retention of data, provided it is accompanied by “effective safeguards” and reviewed by a court or independent authority.

The reference to the CJEU joined a number of cases, including legal challenges brought by rights advocacy group Privacy International to bulk collection powers baked into the UK’s Investigatory Powers Act; a La Quadrature du Net (and others’) challenge to a 2015 French decree related to specialized intelligence services; and a challenge to Belgium’s 2016 law on collection and retention of comms data.

Civil rights campaigners had been eagerly awaiting today’s judgements from the Grand Chamber, following an opinion by an advisor to the court in January which implied certain EU Member States’ surveillance regimes were breaching the law.

At the time of writing key complainants had yet to issue a response.

Of course a government agency’s definition of how much data collection is ‘strictly necessary’ in a national security context (or, indeed, what constitutes an ‘effective safeguard’) may be rather different to the benchmark of civil rights advocacy groups — so it seems unlikely this ruling will be the last time the CJEU is asked to clarify where the legal limits of mass surveillance lie.

Additionally, the judgement raises interesting questions over the UK’s chances of gaining a data protection adequacy agreement from the European Commission when the Brexit transition period ends at the close of this year, something it needs for digital data flows from the EU to continue uninterrupted as now.

The problem is the UK’s Investigatory Powers Act (IPA) gives government agencies broad powers to intercept and retain digital communications — but here the CJEU is making it clear that such bulk powers must be the exception, not the statutory rule.

So, again, a battle over definitions could be looming…

[…]

Another interesting component of today’s CJEU judgement suggests that in EU states with indiscriminate mass surveillance regimes there could be grounds for overturning individual criminal convictions which are based on evidence obtained via such illegal surveillance.

On this, the court writes in a press release: “As EU law currently stands, it is for national law alone to determine the rules relating to the admissibility and assessment, in criminal proceedings against persons suspected of having committed serious criminal offences, of information and evidence obtained by the retention of data in breach of EU law. However, the Court specifies that the directive on privacy and electronic communications, interpreted in the light of the principle of effectiveness, requires national criminal courts to disregard information and evidence obtained by means of the general and indiscriminate retention of traffic and location data in breach of EU law, in the context of such criminal proceedings, where those persons suspected of having committed criminal offences are not in a position to comment effectively on that information and evidence.”

Update: Privacy International has now responded to the CJEU judgements, saying the UK, French and Belgian surveillance regimes must be amended to be brought within EU law.

In a statement, legal director Caroline Wilson Palow said: “Today’s judgment reinforces the rule of law in the EU. In these turbulent times, it serves as a reminder that no government should be above the law. Democratic societies must place limits and controls on the surveillance powers of our police and intelligence agencies.

“While the Police and intelligence agencies play a very important role in keeping us safe, they must do so in line with certain safeguards to prevent abuses of their very considerable power. They should focus on providing us with effective, targeted surveillance systems that protect both our security and our fundamental rights.”

Source: Europe’s top court confirms no mass surveillance without limits | TechCrunch

The IRS Is Being Investigated for Using Bought Location Data Without a Warrant – wait, there’s a company called Venntel that sells this and that’s OK?

The body tasked with oversight of the IRS announced in a letter that it will investigate the agency’s use of location data harvested from ordinary apps installed on peoples’ phones, according to a copy of the letter obtained by Motherboard.

The move comes after Senators Ron Wyden and Elizabeth Warren demanded a formal investigation into how the IRS used the location data to track Americans without a warrant.

“We are going to conduct a review of this matter, and we are in the process of contacting the CI [Criminal Investigation] division about this review,” the letter, signed by J. Russell George, the Inspector General, and addressed to the Senators, reads. CI has a broad mandate to investigate abusive tax schemes, bankruptcy fraud, identity theft, and many more similar crimes. Wyden’s office provided Motherboard with a copy of the letter on Tuesday.

In June, officials from the IRS Criminal Investigation unit told Wyden’s office that it had purchased location data from a contractor called Venntel, and that the IRS had tried to use it to identify individual criminal suspects. Venntel obtains location data from innocuous-looking apps, such as games, weather, or e-commerce apps, and then sells access to the data to government clients.

A Wyden aide previously told Motherboard that the IRS wanted to find phones, track where they were at night, use that as a proxy for where the individual lived, and then use other data sources to try to identify the person. A person who used to work for Venntel previously told Motherboard that Venntel customers can use the tool to see which devices are in a particular house, for instance.
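
That kind of inference is trivial to implement, which is part of what makes this market troubling. The sketch below is hypothetical, with field names and a rough 100-metre grid that are our assumptions rather than anything Venntel has described: the device’s most common night-time grid cell becomes the proxy for where its owner lives.

```python
# Hypothetical sketch: infer a "home" location from night-time pings.
# The data shape and grid size are assumptions, not Venntel's product.
from collections import Counter
from datetime import datetime

def likely_home(pings, night_hours=range(0, 6)):
    """pings: iterable of (iso_timestamp, lat, lon) for one device.
    Returns the most frequent night-time grid cell, or None."""
    cells = Counter(
        (round(lat, 3), round(lon, 3))   # ~100 m cells at this rounding
        for ts, lat, lon in pings
        if datetime.fromisoformat(ts).hour in night_hours
    )
    return cells.most_common(1)[0][0] if cells else None
```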

The IRS’ attempts were not successful, though: the people the IRS was looking for weren’t included in the particular Venntel data set, the aide added.

But the IRS still obtained this data without a warrant, and the legal justification for doing so remains unclear. The aide said that the IRS received verbal approval to use the data, but stopped responding to their office’s inquiries.

[…]

Source: The IRS Is Being Investigated for Using Location Data Without a Warrant