New lawsuit: Why do Android phones mysteriously exchange 260MB a month with Google via cellular data when they’re not even in use? Also Apple + ad fraud

Google on Thursday was sued for allegedly stealing Android users’ cellular data allowances through unapproved, undisclosed transmissions to the web giant’s servers.

The lawsuit, Taylor et al v. Google [PDF], was filed in a US federal district court in San Jose on behalf of four plaintiffs based in Illinois, Iowa, and Wisconsin in the hope the case will be certified by a judge as a class action.

The complaint contends that Google is using Android users’ limited cellular data allowances without permission to transmit information about those individuals that’s unrelated to their use of Google services.

Data sent over Wi-Fi is not at issue, nor is data sent over a cellular connection in the absence of Wi-Fi when an Android user has chosen to use a network-connected application. What concerns the plaintiffs is data sent to Google’s servers that isn’t the result of deliberate interaction with a mobile device – we’re talking passive or background data transfers via cell network, here.

[…]

Android users have to accept four agreements to participate in the Google ecosystem: Terms of Service; the Privacy Policy; the Managed Google Play Agreement; and the Google Play Terms of Service. None of these, the court filing contends, disclose that Google spends users’ cellular data allowances for these background transfers.

To support the allegations, the plaintiffs’ counsel tested a new Samsung Galaxy S7 phone running Android, with a signed-in Google Account and default settings, and found that when left idle, without a Wi-Fi connection, the phone “sent and received 8.88 MB/day of data, with 94 per cent of those communications occurring between Google and the device.”

The device, stationary, with all apps closed, transferred data to Google about 16 times an hour, or about 389 times in 24 hours. Assuming even half of that data is outgoing, Google would receive about 4.4MB per day or 130MB per month in this manner per device subject to the same test conditions.

Putting worries of what could be in that data to one side, based on an average price of $8 per GB of data in the US, that 130MB works out to about $1 lost to Google data gathering per month – if the device is disconnected from Wi-Fi the entire time and does all its passive transmission over a cellular connection.
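
The complaint’s arithmetic is easy to check. Here is a quick back-of-the-envelope sketch in Python using the figures above; note that the 50 per cent outgoing share and the $8/GB price are the article’s assumptions, not measured values:

```python
# Back-of-the-envelope check of the figures cited in the complaint.
DAILY_TRANSFER_MB = 8.88   # idle Galaxy S7, cellular only, per the test
OUTGOING_SHARE = 0.5       # assume half of the traffic is outgoing
PRICE_PER_GB_USD = 8.0     # average US price assumed in the article

outgoing_per_day = DAILY_TRANSFER_MB * OUTGOING_SHARE   # ~4.4 MB
outgoing_per_month = outgoing_per_day * 30               # ~133 MB
cost_per_month = outgoing_per_month / 1000 * PRICE_PER_GB_USD

print(f"{outgoing_per_day:.1f} MB/day, {outgoing_per_month:.0f} MB/month")
print(f"~${cost_per_month:.2f}/month spent on passive transfers")  # ~$1.07
```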

An iPhone with Apple’s Safari browser open in the background transmits only about a tenth of that amount to Apple, according to the complaint.

Much of the transmitted data, it’s claimed, are log files that record network availability, open apps, and operating system metrics. Google could have delayed transmitting these files until a Wi-Fi connection was available, but chose instead to spend users’ cell data so it could gather data at all hours.

Vanderbilt University Professor Douglas C. Schmidt performed a similar study in 2018 – except that the Chrome browser was open – and found that Android devices made 900 passive transfers in 24 hours.

Under active use, Android devices transfer about 11.6MB of data to Google servers daily, or 350MB per month, it’s claimed, which is about half the amount transferred by an iPhone.

The complaint charges that Google conducts these undisclosed data transfers to further its advertising business, sending “tokens” that identify users for targeted advertising and preloading ads that generate revenue even if they’re never displayed.

“Users often never view these pre-loaded ads, even though their cellular data was already consumed to download the ads from Google,” the legal filing claims. “And because these pre-loads can count as ad impressions, Google is paid for transmitting the ads.”

Source: New lawsuit: Why do Android phones mysteriously exchange 260MB a month with Google via cellular data when they’re not even in use? • The Register

Six Reasons Why Google Maps Is the Creepiest App On Your Phone

VICE has highlighted six reasons why Google Maps is the creepiest app on your phone. An anonymous reader shares an excerpt from the report: 1. Google Maps Wants Your Search History: Google’s “Web & App Activity” settings describe how the company collects data, such as user location, to create a faster and “more personalized” experience. In plain English, this means that every single place you’ve looked up in the app — whether it’s a strip club, a kebab shop or your moped-riding drug dealer’s location — is saved and integrated into Google’s search engine algorithm for a period of 18 months. Google knows you probably find this creepy. That’s why the company uses so-called “dark patterns” — user interfaces crafted to coax us into choosing options we might not otherwise, for example by highlighting an option with certain fonts or brighter colors.

2. Google Maps Limits Its Features If You Don’t Share Your Search History: If you open your Google Maps app, you’ll see a circle in the top right corner that signifies you’re logged in with your Google account. That’s not necessary, and you can simply log out. Of course, the log out button is slightly hidden, but can be found like this: click on the circle > Settings > scroll down > Log out of Google Maps. Unfortunately, Google Maps won’t let you save frequently visited places if you’re not logged into your Google account. If you choose not to log in, when you click on the search bar you get a “Tired of typing?” button, suggesting you sign in, and coaxing you towards more data collection.

3. Google Maps Can Snitch On You: Another problematic feature is the “Google Maps Timeline,” which “shows an estimate of places you may have been and routes you may have taken based on your Location History.” With this feature, you can look at your personal travel routes on Google Maps, including the means of transport you probably used, such as a car or a bike. The obvious downside is that your every move is known to Google, and to anyone with access to your account. And that’s not just hackers — Google may also share data with government agencies such as the police. […] If your “Location History” is on, your phone “saves where you go with your devices, even when you aren’t using a specific Google service,” as is explained in more detail on this page. This feature is useful if you lose your phone, but also turns it into a bona fide tracking device.

4. Google Maps Wants to Know Your Habits: Google Maps often asks users to share a quick public rating. “How was Berlin Burger? Help others know what to expect,” suggests the app after you’ve picked up your dinner. This feels like a casual, lighthearted question and relies on the positive feeling we get when we help others. But all this info is collected in your Google profile, making it easier for someone to figure out if you’re visiting a place briefly and occasionally (like on holiday) or if you live nearby.

5. Google Maps Doesn’t Like It When You’re Offline: Remember GPS navigation? It might have been clunky and slow, but it’s a good reminder that you don’t need to be connected to the internet to be directed. In fact, other apps offer offline navigation. On Google Maps, you can download maps, but offline navigation is only available for cars. It seems fairly unlikely the tech giant can’t figure out how to direct pedestrians and cyclists without internet.

6. Google Makes It Seem Like This Is All for Your Own Good: “Providing useful, meaningful experiences is at the core of what Google does,” the company says on its website, adding that knowing your location is important for this reason. They say they use this data for all kinds of useful things, like “security” and “language settings” — and, of course, selling ads. Google also sells advertisers the possibility to evaluate how well their campaigns reached their target (that’s you!) and how often people visited their physical shops “in an anonymized and aggregated manner”. But only if you opt in (or you forget to opt out).

Source: Six Reasons Why Google Maps Is the Creepiest App On Your Phone – Slashdot

It Took Just 5 Minutes Of Movement Data To Identify ‘Anonymous’ VR Users

As companies and governments increasingly hoover up our personal data, a common refrain to keep people from worrying is the claim that nothing can go wrong because the data itself is “anonymized” — or stripped of personal identifiers like social security numbers. But time and time again, studies have shown how this really is cold comfort, given it takes only a little effort to pretty quickly identify a person based on access to other data sets. Yet most companies, many privacy policy folk, and even government officials still like to act as if “anonymizing” your data means something.

The latest case in point: new research out of Stanford (first spotted by the German website Mixed) found that it took researchers just five minutes of examining the movement data of VR users to identify them in the real world. The paper says participants using an HTC Vive headset and controllers watched five 20-second clips from a randomized set of 360-degree videos, then answered a set of questions in VR, a session that was tracked for a separate research paper.

The movement data (including height, posture, head movement speed and what participants looked at and for how long) was then plugged into three machine learning algorithms which, from a pool of 511 participants, correctly identified 95% of users “when trained on less than 5 min of tracking data per person.” The researchers went on to note that while VR headset makers (like every other company) assure users that “de-identified” or “anonymized” data would protect their identities, that’s really not the case:

“In both the privacy policy of Oculus and HTC, makers of two of the most popular VR headsets in 2020, the companies are permitted to share any de-identified data,” the paper notes. “If the tracking data is shared according to rules for de-identified data, then regardless of what is promised in principle, in practice taking one’s name off a dataset accomplishes very little.”

If you don’t like this study, there’s just an absolute ocean of research over the last decade making the same point: “anonymized” or “de-identified” doesn’t actually mean “anonymous.” Researchers from the University of Washington and the University of California, San Diego, for example, found that they could identify drivers based on just 15 minutes’ worth of data collected from brake pedal usage alone. Researchers from Stanford and Princeton universities found that they could correctly identify an “anonymized” user 70% of the time just by comparing their browsing data to their social media activity.
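
For a concrete sense of how little machinery such re-identification needs, here is a minimal, hypothetical sketch of the general approach: per-user movement features go in, a stock classifier learns each user’s “motion signature”, and identification accuracy comes out. The synthetic features and the choice of a random forest are illustrative assumptions; the Stanford paper used its own feature pipeline and three different algorithms.

```python
# Illustrative sketch only: re-identifying users from motion telemetry.
# Synthetic features stand in for real VR tracking data (height,
# posture, head-movement speed, gaze targets, and so on).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, samples_per_user, n_features = 50, 40, 12

# Each user gets a characteristic "motion signature" (a cluster centre);
# individual samples are noisy observations around that centre.
centres = rng.normal(size=(n_users, n_features))
X = np.repeat(centres, samples_per_user, axis=0)
X += rng.normal(scale=0.5, size=X.shape)
y = np.repeat(np.arange(n_users), samples_per_user)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(f"identification accuracy: {accuracy_score(y_test, clf.predict(X_test)):.0%}")
```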

[…]

Source: It Took Just 5 Minutes Of Movement Data To Identify ‘Anonymous’ VR Users | Techdirt

Analysis of Trump’s tweets reveals systematic diversion of the media

President Donald Trump’s controversial use of social media is widely known and theories abound about its ulterior motives. New research published today in Nature Communications claims to provide the first evidence-based analysis demonstrating the US President’s Twitter account has been routinely deployed to divert attention away from a topic potentially harmful to his reputation, in turn suppressing negative related media coverage.

The international study, led by the University of Bristol in the UK, tested two hypotheses: whether an increase in harmful media coverage was followed by increased diversionary Twitter activity, and if such diversion successfully reduced subsequent media coverage of the harmful topic.

[…]

The study focused on Trump’s first two years in office, scrutinising the Robert Mueller investigation into potential collusion with Russia in the 2016 Presidential Election, as this was politically harmful to the President. The team analysed content relating to Russia and the Mueller investigation in two of the country’s most politically neutral media outlets, New York Times (NYT) and ABC World News Tonight (ABC). The team also selected a set of keywords judged to play to Trump’s preferred topics at the time, which were hypothesized to be likely to appear in diversionary tweets. The keywords related to “jobs”, “China”, and “immigration”; topics representing the president’s supposed political strengths.

The researchers hypothesized that the more ABC and NYT reported on the Mueller investigation, the more Trump’s tweets would mention jobs, China, and immigration, which in turn would result in less coverage of the Mueller investigation by ABC and NYT.

In support of their hypotheses, the team found that every five additional ABC headlines relating to the Mueller investigation were associated with one more mention of a keyword in Trump’s tweets. In turn, two additional mentions of one of the keywords in a Trump tweet were associated with roughly one fewer mention of the Mueller investigation in the following day’s NYT.

Such a pattern did not emerge with placebo topics that presented no threat to the President, for instance Brexit or other non-political issues such as football or gardening.

[…]

Professor Lewandowsky said: “It’s unclear whether President Trump, or whoever is at the helm of his Twitter account, engages in such tactics intentionally or if it’s mere intuition. Either way, we hope these results serve as a helpful reminder to the media that they have the power to set the news agenda, focusing on the topics they deem most important, while perhaps not paying so much attention to the Twitter-sphere.”

Source: Analysis of Trump’s tweets reveals systematic diversion of the media

To Prevent Free, Frictionless Access To Human Knowledge, Publishers Want Librarians To Be Afraid, Very Afraid

After many years of fierce resistance to open access, academic publishers have largely embraced — and extended — the idea, ensuring that their 35-40% profit margins live on. In the light of this subversion of the original hopes for open access, people have come up with other ways to provide free and frictionless access to knowledge — most of which is paid for by taxpayers around the world. One is preprints, which are increasingly used by researchers to disseminate their results widely, without needing to worry about payment or gatekeepers. The other is through sites that have taken it upon themselves to offer immediate access to large numbers of academic papers — so-called “shadow libraries”. The most famous of these sites is Sci-Hub, created by Alexandra Elbakyan. At the time of writing, Sci-Hub claims to hold 79 million papers.

Even academics with access to publications through their institutional subscriptions often prefer to use Sci-Hub, because it is so much simpler and quicker. In this respect, Sci-Hub stands as a constant reproach to academic publishers, emphasizing that their products aren’t very good in terms of serving libraries, which are paying expensive subscriptions for access. Not surprisingly, then, Sci-Hub has become Enemy No. 1 for academic publishers in general, and the leading company Elsevier in particular. The German site Netzpolitik has spotted the latest approach being taken by publishers to tackle this inconvenient and hugely successful rival, and other shadow libraries. At its heart lies the Scholarly Networks Security Initiative (SNSI), which was founded by Elsevier and other large publishers earlier this year. Netzpolitik explains that the idea is to track and analyze every access to libraries, because “security”

[…]

Since academic publishers can’t compete against Sci-Hub on ease of use or convenience, they are trying the old “security risk” angle — also used by traditional software companies against open source in the early days. Yes, they say, Sci-Hub/open source may seem free and better, but think of the terrible security risks… An FAQ on the main SNSI site provides an “explanation” of why Sci-Hub is supposedly a security risk:

[…]

As Techdirt pointed out when that Washington Post article came out, there is no evidence of any connections between Elbakyan and Russian Intelligence. Indeed, it’s hard not to see the investigation as simply the result of whining academic publishers making the same baseless accusation, and demanding that something be “done”. An article in Research Information provides more details about what those “wider ramifications than just getting access to content that sits behind a paywall” might be:

In the specific case of Sci-Hub, academic content (journal articles and books) is illegally harvested using a variety of methods, such as abusing legitimate log in credentials to access the secure computer networks of major universities and by hijacking “proxy” credentials of legitimate users that facilitate off campus remote access to university computer systems and databases. These actions result in a front door being opened up into universities’ networks through which Sci-Hub, and potentially others, can gain access to other valuable institutional databases such as personnel and medical records, patent information, and grant details.

But that’s not how things work in this context. The credentials of legitimate users that Sci-Hub draws on — often gladly “lent” by academics who believe papers should be made widely available — are purely to access articles held on the system. They do not provide access to “other valuable institutional databases” — and certainly not sensitive information such as “personnel and medical records” — unless they are designed by complete idiots. That is pure scaremongering, while this further claim is just ridiculous:

Such activities threaten the scholarly communications ecosystem and the integrity of the academic record. Sci-Hub has no incentive to ensure the accuracy of the research articles being accessed, no incentive to ensure research meets ethical standards, and no incentive to retract or correct if issues arise.

Sci-Hub simply provides free, frictionless access for everyone to existing articles from academic publishers. The articles are still as accurate and ethical as they were when they first appeared. To accuse Sci-Hub of “threatening” the scholarly communications ecosystem by providing universal access is absurd. It’s also revealing of the traditional publishers’ attitude to the uncontrolled dissemination of publicly-funded human knowledge, which is what they really fear and are attacking with the new SNSI campaign.

Source: To Prevent Free, Frictionless Access To Human Knowledge, Publishers Want Librarians To Be Afraid, Very Afraid | Techdirt

Police Will Pilot a Program to Live-Stream Amazon Ring Cameras

This is not a drill. Red alert: The police surveillance center in Jackson, Mississippi, will be conducting a 45-day pilot program to live stream the Amazon Ring cameras of participating residents.

Since Ring first made a splash in the private security camera market, we’ve been warning of its potential to undermine the civil liberties of its users and their communities. We’ve been especially concerned with Ring’s 1,000+ partnerships with local police departments, which facilitate bulk footage requests directly from users without oversight or having to acquire a warrant.

While people buy Ring cameras and put them on their front door to keep their packages safe, police use them to build comprehensive CCTV camera networks blanketing whole neighborhoods. This serves two purposes for police. First, it allows police departments to avoid the cost of buying surveillance equipment and to put that burden onto consumers by convincing them they need cameras to keep their property safe. Second, it evades the natural reaction of fear and distrust that many people would have if they learned police were putting up dozens of cameras on their block, one for every house.

Now, our worst fears have been confirmed. Police in Jackson, Mississippi, have started a pilot program that would allow Ring owners to patch the camera streams from their front doors directly to a police Real Time Crime Center. The footage from your front door includes you coming and going from your house, your neighbors taking out the trash, and the dog walkers and delivery people who do their jobs in your street. In Jackson, this footage can now be live streamed directly onto a dozen monitors scrutinized by police around the clock. Even if you refuse to allow your footage to be used that way, your neighbor’s camera pointed at your house may still be transmitting directly to the police.

[…]

Source: Police Will Pilot a Program to Live-Stream Amazon Ring Cameras | Electronic Frontier Foundation

Brave browser first to nix CNAME deception, the sneaky DNS trick used by marketers to duck privacy controls

The Brave web browser will soon block CNAME cloaking, a technique used by online marketers to defy privacy controls designed to prevent the use of third-party cookies.

The browser security model makes a distinction between first-party domains – those being visited – and third-party domains – from the suppliers of things like image assets or tracking code, to the visited site. Many of the online privacy abuses over the years have come from third-party resources like scripts and cookies, which is why third-party cookies are now blocked by default in Brave, Firefox, Safari, and Tor Browser.

Microsoft Edge, meanwhile, has a tiered scheme that defaults to a “Balanced” setting, which blocks some third-party cookies. Google Chrome has implemented its SameSite cookie scheme as a prelude to its planned 2022 phase-out of third-party cookies, maybe.

While Google tries to win support for its various Privacy Sandbox proposals, which aim to provide marketers with ostensibly privacy-preserving alternatives to increasingly shunned third-party cookies, marketers have been relying on CNAME shenanigans to pass their third-party trackers off as first-party resources.

The developers behind open-source content blocking extension uBlock Origin implemented a defense against CNAME-based tracking in November and now Brave has done so as well.

CNAME by name, cookie by nature

In a blog post on Tuesday, Anton Lazarev, research engineer at Brave Software, and senior privacy researcher Peter Snyder, explain that online tracking scripts may use canonical name DNS records, known as CNAMEs, to make associated third-party tracking domains look like they’re part of the first-party websites actually being visited.

They point to the site https://mathon.fr as an example, noting that without CNAME uncloaking, Brave blocks six requests for tracking scripts served by ad companies like Google, Facebook, Criteo, Sirdan, and Trustpilot.

But the page also makes four requests via a script hosted at a randomized path under the first-party subdomain 16ao.mathon.fr.

“Inspection outside of the browser reveals that 16ao.mathon.fr actually has a canonical name of et5.eulerian.net, meaning it’s a third-party script served by Eulerian,” observe Lazarev and Snyder.
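
That kind of inspection can be reproduced outside the browser with an ordinary DNS lookup. Here is a minimal sketch using the dnspython library, with Brave’s example subdomain; whether the record still resolves this way today isn’t guaranteed:

```python
# Reveal CNAME cloaking: does a "first-party" subdomain actually point
# at a third-party tracking domain?
import dns.resolver

def uncloak(hostname: str) -> None:
    try:
        answers = dns.resolver.resolve(hostname, "CNAME")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print(f"{hostname}: no CNAME record found")
        return
    for record in answers:
        # A third-party target here means the subdomain is cloaked.
        print(f"{hostname} -> {record.target}")  # e.g. et5.eulerian.net.

# The subdomain Brave's researchers used as their example:
uncloak("16ao.mathon.fr")
```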

When Brave 1.17 ships next month (currently available as a developer build), it will be able to uncloak the CNAME deception and block the Eulerian script.

Other browser vendors are planning related defenses. Mozilla has been working on a fix in Firefox since last November. And in August, Apple’s Safari WebKit team proposed a way to prevent CNAME cloaking from being used to bypass the seven-day cookie lifetime imposed by WebKit’s Intelligent Tracking Prevention system.

Source: Brave browser first to nix CNAME deception, the sneaky DNS trick used by marketers to duck privacy controls • The Register

Another eBay exec pleads guilty after couple stalked, harassed for daring to criticize the internet tat bazaar – pig corpses involved

Philip Cooke, 55, oversaw eBay’s security operations in Europe and Asia and was a former police captain in Santa Clara, California. He pleaded guilty this week to conspiracy to commit cyberstalking and conspiracy to tamper with witnesses.

Cooke, based in San Jose, was just one of seven employees, including one manager, accused of targeting a married couple living on the other side of the United States, in Massachusetts, because they didn’t like the couple’s criticisms of eBay in their newsletter.

It’s said the team would post aggressive anonymous comments on the couple’s newsletter website, and at some point planned a concerted campaign against the pair including cyberstalking and harassment. Among other things, prosecutors noted, “several of the defendants ordered anonymous and disturbing deliveries to the victims’ home, including a preserved fetal pig, a bloody pig Halloween mask and a book on surviving the loss of a spouse.”

[…]

But it was when the couple noticed they were under surveillance at their own home that they finally went to the cops in Natick, where they lived, and officers opened an investigation.

It was Cooke’s behavior at that point that led to the subsequent charge of conspiracy to tamper with a witness: he formulated a plan to give the Natick police a false lead in an effort to prevent them from discovering proof that his team had sent the pig’s head and other items. The eBay employees also deleted digital evidence that showed their involvement, prosecutors said, obstructing an investigation and breaking another law.

[…]

Source: Another eBay exec pleads guilty after couple stalked, harassed for daring to criticize the internet tat bazaar • The Register

Palo Alto Networks threatens to sue security startup for comparison review, says it breaks software EULA. 1 EULA? 2 WTF?

Palo Alto Networks has threatened a startup with legal action after the smaller biz published a comparison review of one of its products.

Israel-based Orca Security received a cease-and-desist letter from a lawyer representing Palo Alto after Orca uploaded a series of online videos reviewing one of Palo Alto’s products and comparing it to its own. Orca sees itself as a competitor of Palo Alto Networks (PAN).

“What we expected is that others will also create such materials … but instead we received a letter from Palo Alto’s lawyers claiming we were not allowed to do that,” Orca chief exec Avi Shua told The Register this week. “We believe these are empty legal threats.”

In a note on its website, Orca lamented at length the “outrageous” behavior of PAN, as well as posting a copy of the lawyer’s letter for world-plus-dog to read. That letter claimed Orca infringed PAN’s trademarks by using its name and logo in the review as well as breaching non-review clauses in the End-User License Agreement (EULA) of PAN’s product.

As such, the lawyer demanded the removal of the comparison material, and that the startup stop using PAN’s logo and name. We note the videos are still online, hosted by YouTube.

“It’s outrageous that the world’s largest cybersecurity vendor, its products being used by over 65,000 organizations according to its website, believes that its users aren’t entitled to share any benchmark or performance comparison of its products,” said Orca.

The lawyer’s letter [PDF] claimed Orca violated PAN’s EULA fine-print, something deputy general counsel Melinda Thompson described in her missive as “a clear breach” of terms “prohibiting an end user from disclosing, publishing or otherwise making publicly available any benchmark, performance or comparison tests… run on Palo Alto Networks products, in whole or in part.”

Shua told The Register Orca tried to give its rival a fair crack of the whip: “Even if we tried to be objective, we would have some biases. But we did try to do it as objectively as possible, by showing it to users: creating labs, screenshots, and showing how it looks like.” The fairness of the review, we note, is not what is at issue here: PAN forbids any kind of benchmarking and comparison of its gear.

Palo Alto Networks declined to comment when contacted by The Register.

Source: Palo Alto Networks threatens to sue security startup for comparison review, says it breaks software EULA • The Register

1 Who reads EULAs anyway? Are they in any way, shape or form defensible apart from maybe some ant fucker friendless lawyers?

2 Is PAN so very worried about the poor quality of their product that they feel they want to kill any and all benchmarks / comparisons?

Twitch Suddenly Mass-Deletes Thousands of Videos, Citing Music Copyright Claims – yes, copyright really doesn’t provide for innovation at all

“It’s finally happening: Twitch is taking action against copyrighted music — long a norm among streamers — in response to music industry pressure,” reports Kotaku.

But the Verge reports “there’s some funny stuff going on here.” First, Twitch is telling streamers that some of their content has been identified as violating copyright and that instead of letting streamers file counterclaims, it’s deleting the content; second, the company is telling streamers it’s giving them warnings, as opposed to outright copyright strikes…

Weirdly Twitch decided to bulk delete infringing material instead of allowing streamers to archive their content or submit counterclaims. To me, that suggests that there are tons of infringements, and that Twitch needed to act very quickly and/or face a lawsuit it wouldn’t be able to win over its adherence to the safe harbor provision of the DMCA.

The email Twitch sent to their users “encourages them to delete additional content — up to and including using a new tool to unilaterally delete all previous clips,” reports Kotaku. One business streamer complains that it’s “insane” that Twitch basically informs them “that there is more content in violation despite having no identification system to find out what it is. Their solution to DMCA is for creators to delete their life’s work. This is pure, gross negligence.”

Or, as esports consultant Rod “Slasher” Breslau puts it, “It is absolutely insane that record labels have put Twitch in a position to force streamers to delete their entire life’s work, for some 10+ years of memories, and that Twitch has been incapable of preventing or aiding streamers for this situation. a total failure all around.”

Twitch’s response? “It is crucial that we protect the rights of songwriters, artists and other music industry partners. We continue to develop tools and resources to further educate our creators and empower them with more control over their content while partnering with industry-recognized vendors in the copyright space to help us achieve these goals.”

Source: Twitch Suddenly Mass-Deletes Thousands of Videos, Citing Music Copyright Claims – Slashdot

Of course, the money raised by these music companies doesn’t really go to the artists much – it’s basically swallowed up by the music companies themselves.

Oculus owners forced onto Facebook accounts will have purchases wiped and devices bricked if they ever leave FB. Who would have guessed?

Oculus users, already fuming at Facebook chaining their VR headsets to their Facebook accounts, have been warned they could lose all their Oculus purchases and account information in future if they ever delete their profile on the social network.

The rule further binds the gaming company, which Facebook bought in 2014, to the mothership, and comes just two months after Facebook decided all new Oculus users require Facebook accounts to use their VR gizmos, and all current Oculus users will need a Facebook account by 2023; failure to do so may cause apps installed on the headsets to no longer work as expected.

The decision to cement together what many users see as largely unrelated activities – playing video games and social media posts – has led to a wave of anger among Oculus users, and a renewed online effort to jailbreak new Oculus headgear to bypass Facebook’s growing restrictions.

That outrage was fueled when Facebook initially said that if people attempted to connect more than one Oculus headset to a single Facebook account, something families in particular want to do as it avoids having to install the same app over and over, it would ban them from the service.

Facebook has since dropped that threat, and said it is working on allowing multiple devices and accounts to connect. But the control-freak instincts of the internet giant were yet again on full display, something that was noted by the man who first drew attention to Oculus’s new terms and conditions, CEO of fitness gaming company Yur, Cix Liv.

“My favorite line is ‘While I do completely understand your concerns, we do need to have you comply with the Facebook terms of service’ like Facebook thinks they are some authoritarian government,” he tweeted.

[…]

Source: Oculus owners told not only to get Facebook accounts, purchases will be wiped if they ever leave social network • The Register

When you tell Chrome to wipe private data about you, it spares two websites from the purge: Google.com, YouTube

Google exempts its own websites from Chrome’s automatic data-scrubbing feature, allowing the ads giant to potentially track you even when you’ve told it not to.

Programmer Jeff Johnson noticed the unusual behavior, and this month documented the issue with screenshots. In his assessment of the situation, he noted that if you set up Chrome, on desktop at least, to automatically delete all cookies and so-called site data when you quit the browser, it deletes it all as expected – except your site data for Google.com and YouTube.com.

While cookies are typically used to identify you and store some of your online preferences when visiting websites, site data is on another level: it includes, among other things, a storage database in which a site can store personal information about you, on your computer, that can be accessed again by the site the next time you visit. Thus, while your Google and YouTube cookies may be wiped by Chrome, their site data remains on your computer, and it could, in future, be used to identify you.

Johnson noted that after he configured Chrome to wipe all cookies and site data when the application closed, everything was cleared as expected for sites like apple.com. Yet, the main Google search site and video service YouTube were allowed to keep their site data, though the cookies were gone. If Google chooses at some point to stash the equivalent of your Google cookies in the Google.com site data storage, they could be retrieved next time you visit Google, and identify you, even though you thought you’d told Chrome not to let that happen.
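
To see why surviving site data undermines cookie deletion, consider this toy simulation of the mechanism described above. It involves no real browser internals; the class and method names are invented for illustration:

```python
# Toy model of why surviving "site data" defeats cookie deletion.
class Browser:
    def __init__(self):
        self.cookies = {}     # wiped on exit
        self.site_data = {}   # persistent per-site storage database

    def visit(self, site: str) -> str:
        store = self.site_data.setdefault(site, {})
        # If the site stashed an identifier in site data earlier,
        # it can mint a "fresh" cookie carrying the same identity.
        uid = self.cookies.get(site) or store.get("uid") or f"user-{id(store):x}"
        self.cookies[site] = uid
        store["uid"] = uid
        return uid

    def clear_on_quit(self, spare_site_data_for=()):
        self.cookies.clear()
        for site in list(self.site_data):
            if site not in spare_site_data_for:
                del self.site_data[site]

b = Browser()
first = b.visit("google.com")
b.clear_on_quit(spare_site_data_for={"google.com"})  # the exemption
second = b.visit("google.com")
print(first == second)  # True: same identity, despite "clearing" data
```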

Ultimately, it potentially allows Google, and only Google, to continue tracking Chrome users who opted for some more privacy; something that is enormously valuable to the internet goliath in delivering ads. Many users set Chrome to automatically delete cookies-and-site-data on exit for that reason – to prevent being stalked around the web – even though it often requires them to log back into websites the next time they visit due to their per-session cookies being wiped.

Yet Google appears to have granted itself an exception. The situation recalls a similar issue over location tracking, where Google continued to track people’s location through their apps even when users actively selected the option to prevent that. Google had put the real option to start location tracking under a different setting that didn’t even include the word “location.”

In this case, “Clear cookies and site data when you quit Chrome” doesn’t actually mean what it says, at least not for Google.

There is a workaround: you can manually add “Google.com” and “YouTube.com” within the browser to a list of “Sites that can never use cookies.” In that case, no information, not even site data, is saved from those sites, which is all in all a little confusing.

[…]

Source: When you tell Chrome to wipe private data about you, it spares two websites from the purge: Google.com, YouTube • The Register

Thought the FBI were the only ones able to unlock encrypted phones? Pretty much every US cop can get the job done – and does

Never mind the Feds. American police forces routinely “circumvent most security features” in smartphones to extract mountains of personal information, according to a report that details the massive, ubiquitous cracking of devices by cops.

Two years of public records requests by Upturn, a Washington DC non-profit, has revealed that every one of the United States’ largest 50 police departments, as well as half of the largest sheriff’s offices and two-thirds of the largest prosecuting attorney’s offices, regularly use specialist hardware and software to access the contents of suspects’ handhelds. There isn’t a state in the Union that hasn’t got some advanced phone-cracking capabilities.

The report concludes that, far from modern phones being a bastion of privacy and security, they are in fact routinely rifled through for trivial crimes without a warrant in sight. In one case, the cops confiscated and searched the phones of two men who were caught arguing over a $70 debt in a McDonald’s.

In another, officers witnessed “suspicious behavior” in a Whole Foods grocery store parking lot and claimed to have smelt “the odor of marijuana” coming from a car. The car was stopped and searched, and the driver’s phone was seized and searched for “further evidence of the nature of the suspected controlled substance exchange.”

A third example saw police officers shoot and kill a man after he “ran from the driver’s side of the vehicle” during a traffic stop. They apparently discovered a small orange prescription pill container next to the victim, and tested the pills, which contained acetaminophen and fentanyl. They also discovered a phone in the empty car, and searched it for evidence related to “counterfeit Oxycodone” and “evidence relating to… motives for fleeing from the police.”

The report gives numerous other examples of phones taken from their owners and searched for evidence, without a warrant – many in cases where the value of the information was negligible such as cases involving graffiti, shoplifting, marijuana possession, prostitution, vandalism, car crashes, parole violations, petty theft, and public intoxication.

Not what you imagined

That is a completely different picture to the one, we imagine, most Americans assumed, particularly given the high legal protections afforded smartphones in recent high-profile court cases.

In 2018, the Supreme Court ruled that the government needs a warrant to access its citizens’ cellphone location data and talked extensively about a citizen’s expectation of privacy limiting “official intrusion” when it comes to smartphones.

In 2014, the court decided a warrant was required to search a mobile phone, and that the “reasonable expectation of privacy” that people have in their “physical movements” should extend to records stored by third parties. But the reality on the ground is that those grand words mean nothing if the cops decide they want to look through your phone.

The report was based on reports from 44 law enforcement agencies across the US and covered 50,000 extractions of data from cellphones between 2015 and 2019, a figure that Upturn notes “represents a severe undercount” of the actual number of cellphone extractions.

[…]

They include banning the use of “consent searches”, where the police ask the owner if they can search their phone and then require no further approval to go through the device. “Courts pretend that ‘consent searches’ are voluntary, when they are effectively coerced,” the report argues, noting that most people are probably unaware that, by agreeing, they can have their phone’s entire contents downloaded and perused at will later on.

It also reckons the argument that the contents of a phone are in “plain view” because a police officer can see a phone at the scene of a crime – an important legal distinction that allows the police to search phones – is legally untenable: people carry their phones with them as a rule, and the contents are not themselves visible, only the device itself.

The report also argues for more extensive audit logs of phone searches so there is a degree of accountability, particularly if evidence turned up is later used in court. And it argues for better and clearer data deletion rules, as well as more reporting requirements around phone searches by law enforcement.

It concludes: “For too long, public debate and discussion regarding these tools has been abstracted to the rarest and most sensational cases in which law enforcement cannot gain access to cellphone data. We hope that this report will help recenter the conversation regarding law enforcement’s use of mobile device forensic tools to the on-the-ground reality of cellphone searches today in the United States.”

Source: Thought the FBI were the only ones able to unlock encrypted phones? Pretty much every US cop can get the job done • The Register

Do algorithms make us even more radical? Filter bubbles and echo chambers

‘Technology ensures that we’re all served our own personalised news cycle. As a result, we only get to hear the opinions that correspond to our own. The result is polarisation’. Or so the oft-heard theory goes. But in practice, it seems this isn’t really true, or at least not for the average Dutch person. However, according to communication scientist Judith Möller, the influence of filter bubbles, as they are known, could indeed be stronger when it comes to groups with radical opinions.

Judith Möller: ‘My theory is that filter bubbles do indeed exist, but that we’re looking for them in the wrong place.’

First of all, we need to differentiate between the so-called echo chamber and the filter bubble. As an individual, you voluntarily take your place in an echo chamber (such as in the form of a forum, or a Facebook or WhatsApp group), meaning you surround yourself with people who tend towards the same opinion as yourself. ‘Call it the modern form of compartmentalisation’, says communication scientist Judith Möller, who recently received a Veni grant for her research. ‘People have always had the tendency to surround themselves with like-minded people, and that’s no different on social media.’

Various news sources in parallel prevent a filter bubble

In the filter bubble, you are presented only with news and opinions that match you as an individual, on the basis of algorithms and without you being aware of this process. It’s said that this bubble is leading to the polarisation of society. Everyone is constantly exposed to ‘their own truth’, while other news gets filtered out. But Möller says that there is no evidence to support this, at least in the Netherlands. ‘We use various news sources in parallel – meaning not only Facebook and Twitter, but also radio, television and newspapers, so we run little risk of ending up in a filter bubble. Besides that: the amount of “news” on an average Facebook timeline is less than 5%. Moreover, it turns out that many people on social media are actually more likely to encounter news that they normally wouldn’t read or search out, so that’s almost a bubble in reverse.’

Bubbles at the fringes of the opinion spectrum

Nonetheless, a great deal of money is being invested in the use of algorithms and artificial intelligence, such as during election periods. Möller: ‘So there must be something in it. My theory is that filter bubbles do indeed exist, but that we’re looking for them in the wrong place. We shouldn’t look at the mainstream, but at groups with radical and/or divergent opinions who don’t fit into the “centre”. This is where we see the formation of ‘fringe bubbles’, as I call them – filters at the edges of the opinion spectrum.’

People with fringe opinions can suddenly become very visible

From spiral of silence to spiral of noise

As one example, the researcher cites the anti-vaccination movement. ‘Previously, this group was confronted with the ‘spiral of silence’: if you said in public, for instance to friends or family, that you were sceptical about vaccination, you wouldn’t get a positive response. And so, you’d keep quiet about it. But this group found each other on social media, and as a consequence of filter technology, the proponents of this view encountered the ‘spiral of noise’: suddenly it seems as if a huge number of people agree with you.’

The news value of radical and divergent opinions

And so, it can happen that people with fringe, radical or divergent opinions suddenly become very vocal and visible. ‘Then they become newsworthy, they appear in normal news media and hence are able to address a wider public. The fringe bubble shifts towards the centre. This has been the case with the anti-vaccination movement, the climate sceptics and the yellow vests, but it also happened with the group who opposed the Dutch Intelligence and Security Services Act – no-one was interested initially, but in the end, it became major news and it even resulted in a referendum.’

Consequences can be both positive and negative

‘In my research I aim to go in search of divergent opinions like these, and then I’ll try to determine how algorithms influence radical groups, to what extent filter bubbles exist and why groups with radical opinions ultimately manage, or don’t manage, to appear in news media.’

The consequences of these processes can be both positive and negative, believes Möller. ‘Some people claim that this attention leads people from the “centre” to feel attracted to the fringe areas of society, in turn leading to more extreme opinions and a reduction in social cohesion, which is certainly possible. On the other hand, this process also brings advantages: after all, in a democracy we also need to listen to minority opinions.’

Source: Do algorithms make us even more radical? – University of Amsterdam

To find out how researchers track the filter bubble, read about fbtrex here (pdf)

Personalisation algorithms and elections: Breaking free of the filter bubble

In recent years, we have been witnessing a fundamental shift in how news and current affairs are disseminated and mediated. Due to the exponential increase in available content online and technological development in the field of recommendation systems, more and more citizens inform themselves through customized and curated sources, turning away from mass-mediated information sources like TV news and newspapers. Algorithmic recommendation systems provide news users with tools to navigate the information overload and identify important and relevant information. They do so by performing a task that was once a key part of the journalistic profession: keeping the gates. In a way, news recommendation algorithms can create highly individualized gates, through which only the information and news that serve the user best may pass. In theory, this is a great achievement that can make news exposure more efficient and interesting. In practice, there are many pitfalls when the power to select what we hear from the news shifts from professional editorial boards, which select the news according to professional standards, to opaque algorithms ruled by their own logic, the logic of advertisers, or consumers’ personal preferences.

Beyond the filter bubble: Concepts, myths, evidence and issues for future debates

Filter bubbles in the Netherlands?

Some fear that personalised communication can lead to information cocoons or filter bubbles. For instance, a personalised news website could give more prominence to conservative or liberal media items, based on the (assumed) political interests of the user. As a result, users may encounter only a limited range of political ideas. We synthesise empirical research on the extent and effects of self-selected personalisation, where people actively choose which content they receive, and pre-selected personalisation, where algorithms personalise content for users without any deliberate user choice. We conclude that at present there is little empirical evidence that warrants any worries about filter bubbles.

Should We Worry about Filter Bubbles?

Pop the filter bubble: Exposure Diversity as a Design Principle for Search and Social Media

Michael Bang Petersen and a few others from the US have some interesting counterpoints to this.

Source: New Research Shows Social Media Doesn’t Turn People Into Assholes (They Already Were), And Everyone’s Wrong About Echo Chambers

UK test and trace data can be handed to police, reveals memorandum – that mission crept quickly

As if things were not going badly enough for the UK’s COVID-19 test and trace service, it now seems police will be able to access some test data, prompting fear that the disclosure could deter people who should have tests from coming forward.

As revealed in the Health Service Journal (paywalled), Department for Health and Social Care (DHSC) guidance describing how testing data will be handled was updated on Friday.

The memorandum of understanding between DHSC and National Police Chiefs’ Council said forces could be allowed to access test information that tells them if a “specific individual” has been told to self-isolate.

A failure to self-isolate after getting a positive COVID-19 test, or after being in contact with someone who has tested positive, could result in a police fine of £1,000, or even a £10,000 penalty for serial offenders or those seriously breaking the rules.

A Department of Health and Social Care spokesperson said: “It is a legal requirement for people who have tested positive for COVID-19 and their close contacts to self-isolate when formally notified to do so.

“The DHSC has agreed a memorandum of understanding with the National Police Chiefs Council to enable police forces to have access on a case-by-case basis to information that enables them to know if a specific individual has been notified to self-isolate.

[…]

The UK government’s emphasis should be on providing support to people – financial and otherwise – if they need to self-isolate, so that no one is deterred from coming forward for a test, the BMA spokesperson added.

The UK’s test and trace system, backed by £12bn in public money, was outsourced to Serco for £45m in June. Sitel is also a provider.

The service has had a bumpy ride to say the least. Earlier this month, it came to light that as many as 48,000 people were not informed they had come in close contact with people who had tested positive, as the service under-reported 15,841 novel coronavirus cases between 25 September and 2 October.

At the heart of the problem was the use of Microsoft’s Excel spreadsheet program to transfer test results from labs to the health service for totalling up. A plausible explanation emerged that test results were automatically fetched in CSV format by Public Health England (PHE) from various commercial testing labs, and stored in rows in an older .XLS Excel format that limited the number of rows to 65,536 per spreadsheet, rather than the one-million-row limit offered by the modern .XLSX file format.
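
The failure mode described is easy to sketch. Here is a minimal illustration of silent truncation at the legacy row limit; the batch size is invented, and only the two format limits are real:

```python
# Sketch of the reported PHE failure: rows beyond the legacy .xls
# limit are silently dropped when results are crammed into one sheet.
XLS_MAX_ROWS = 65_536        # legacy .xls worksheet limit
XLSX_MAX_ROWS = 1_048_576    # modern .xlsx worksheet limit

def rows_kept(case_rows: int, max_rows: int, header_rows: int = 1) -> int:
    """How many case rows survive if the sheet truncates at max_rows."""
    return min(case_rows, max_rows - header_rows)

incoming_cases = 80_000  # hypothetical batch from the testing labs
lost_xls = incoming_cases - rows_kept(incoming_cases, XLS_MAX_ROWS)
lost_xlsx = incoming_cases - rows_kept(incoming_cases, XLSX_MAX_ROWS)
print(f".xls:  {lost_xls} cases silently lost")   # 14465 with these numbers
print(f".xlsx: {lost_xlsx} cases lost")           # 0
```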

But that was not the only misstep. It has emerged that people in line for a coronavirus test were sent to a site in Sevenoaks, Kent, where, in fact, no test centre existed, according to reports.

Source: UK test and trace data can be handed to police, reveals memorandum • The Register

Remember when Zoom was rumbled for lousy crypto? Six months later it says end-to-end is ready – but it’s not

The world’s plague-time video meeting tool of choice, Zoom, says it’s figured out how to do end-to-end encryption sufficiently well to offer users a tech preview.

News of the trial comes after April 2020 awkwardness that followed the revelation that Zoom was fibbing about its service using end-to-end encryption.

As we reported at the time, Zoom ‘fessed up but brushed aside criticism with a semantic argument about what “end-to-end” means.

“When we use the phrase ‘End-to-end’ in our other literature, it is in reference to the connection being encrypted from Zoom end point to Zoom end point,” the company said. The commonly accepted definition of end-to-end encryption requires even the host of a service to be unable to access the content of a communication. As we explained at the time, Zoom’s use of TLS and HTTPS meant it could intercept and decrypt video chats.

Come May, Zoom quickly acquired secure messaging firm Keybase to give it the chops to build proper crypto.

Now Zoom reckons it has cracked the problem.

A Wednesday post revealed: “starting next week, Zoom’s end-to-end encryption (E2EE) offering will be available as a technical preview, which means we’re proactively soliciting feedback from users for the first 30 days.”

Sharp-eyed Reg readers have doubtless noticed that Zoom has referred to “E2EE”, not just the “E2E” contraction of “end-to-end”.

What’s up with that? The company has offered the following explanation:

“Zoom’s E2EE uses the same powerful GCM encryption you get now in a Zoom meeting. The only difference is where those encryption keys live. In typical meetings, Zoom’s cloud generates encryption keys and distributes them to meeting participants using Zoom apps as they join. With Zoom’s E2EE, the meeting’s host generates encryption keys and uses public key cryptography to distribute these keys to the other meeting participants. Zoom’s servers become oblivious relays and never see the encryption keys required to decrypt the meeting contents.”
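
That pattern (a host-generated meeting key wrapped to each participant’s public key, with the server relaying only ciphertext) can be sketched in a few lines with Python’s cryptography library. This is a minimal illustration of the general design under simplifying assumptions, not a description of Zoom’s actual protocol:

```python
# Sketch: host-generated meeting key, wrapped to each participant's
# public key, so the relay server never sees usable key material.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Each participant holds a keypair; only public keys are shared.
participants = {name: rsa.generate_private_key(public_exponent=65537, key_size=2048)
                for name in ("alice", "bob")}

# The host generates the meeting key and wraps it per participant.
meeting_key = AESGCM.generate_key(bit_length=256)
wrapped = {name: key.public_key().encrypt(meeting_key, oaep)
           for name, key in participants.items()}

# Media is encrypted with the meeting key; the server relays ciphertext.
nonce = os.urandom(12)
ciphertext = AESGCM(meeting_key).encrypt(nonce, b"video frame bytes", None)

# A participant unwraps the key with their private key and decrypts.
bob_key = participants["bob"].decrypt(wrapped["bob"], oaep)
print(AESGCM(bob_key).decrypt(nonce, ciphertext, None))  # b'video frame bytes'
```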

Don’t go thinking the preview means Zoom has squared away security, because the company says: “To use it, customers must enable E2EE meetings at the account level and opt-in to E2EE on a per-meeting basis.”

With users having to be constantly reminded to use non-rubbish passwords, not to click on phish or leak business data on personal devices, they’ll almost certainly choose E2EE every time without ever having to be prompted, right?

Source: Remember when Zoom was rumbled for lousy crypto? Six months later it says end-to-end is ready • The Register

Your Edge Browser Installed Microsoft Office Without Asking. NO!

Edge Chromium started out as a respectable alternative to Google Chrome on Windows, but it didn’t take long for Microsoft to turn it into a nuisance. To top it off, it looks like Edge is now a vector for installing (even more) Microsoft stuff on your PC—without you asking for it, of course.

We don’t like bloatware, or those pre-installed apps that come on your computer or smartphone. Some of these apps are worthwhile, but most just take up space and can’t be fully removed in some cases. Some companies are worse about bloatware than others, but Microsoft is notorious for slipping extra software into Windows. And now, Windows Insiders testing the most recent Edge Chromium preview caught the browser installing Microsoft Office web apps without permission.

The reports have only come from Windows Insiders so far, but it’s unlikely these backdoor installations are an early-release bug. And this isn’t just a Microsoft problem. For example, Chrome can install Google Docs and other G Suite apps without any notification, too.

Source: Why Your Edge Browser Installed Microsoft Office Without Asking

Please don’t EVER install stuff on my computer without asking! I paid for the OS, I didn’t ask for a SaaS.

Five Eyes governments, India, and Japan make new call for encryption backdoors – insist that democracy is an insecure police state

Members of the intelligence-sharing alliance Five Eyes, along with government representatives for Japan and India, have published a statement over the weekend calling on tech companies to come up with a solution for law enforcement to access end-to-end encrypted communications.

The statement is the alliance’s latest effort to get tech companies to agree to encryption backdoors.

The Five Eyes alliance, comprising the US, the UK, Canada, Australia, and New Zealand, has made similar calls to tech giants in 2018 and 2019.

Just like before, government officials claim tech companies have put themselves in a corner by incorporating end-to-end encryption (E2EE) into their products.

If properly implemented, E2EE lets users have secure conversations — be they chat, audio, or video — without sharing the encryption key with the tech companies.

Representatives from the seven governments argue that the way E2EE encryption is currently supported on today’s major tech platforms prohibits law enforcement from investigating crime rings, but also the tech platforms themselves from enforcing their own terms of service.

Signatories argue that “particular implementations of encryption technology” are currently posing challenges to law enforcement investigations, as the tech platforms themselves can’t access some communications and provide needed data to investigators.

This, in turn, allows a safe haven for criminal activity and puts the safety of “highly vulnerable members of our societies like sexually exploited children” in danger, officials argued.

Source: Five Eyes governments, India, and Japan make new call for encryption backdoors | ZDNet

Let’s be clear here:

  1. There is no way for a backdoored system to be secure. This means that not only do you give access to government police services, secret services, Stasi and thought police who can persecute you for being Jewish or thinking the “wrong way” (eg being homosexual or communist), you also give criminal networks, scam artists, discontented exes and foreign governments free rein to run around in your private content. (The sketch below shows exactly what a backdoor would have to break.)
  2. You have a right to privacy and you need it. It’s fundamental to being able to think creatively, and that is the only way in which societies advance. If thought is policed by some arbitrary standard then the deviations which lead to change will be suppressed. Stasis leads to economic collapse among other things, even if those at the top keep collecting more and more wealth for themselves.
  3. We as a society cannot “win” or become “better” by emulating the societies that we are competing against, whose values and behaviours we disagree with. Becoming a police state doesn’t protect us from other police states.
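
To make point 1 concrete, here is a minimal sketch of what E2EE actually does, using Apple’s CryptoKit as a stand-in (real messengers such as Signal layer ratcheting and authentication on top of this, so treat it as an illustration, not a production protocol). The thing to notice: the platform only ever relays public keys and ciphertext, so there is no key for it to hand over.

```swift
import CryptoKit
import Foundation

// Both endpoints independently derive the same symmetric key from a
// Diffie-Hellman exchange; only PUBLIC keys ever cross the wire.
func sessionKey(mine: Curve25519.KeyAgreement.PrivateKey,
                theirs: Curve25519.KeyAgreement.PublicKey) throws -> SymmetricKey {
    let shared = try mine.sharedSecretFromKeyAgreement(with: theirs)
    return shared.hkdfDerivedSymmetricKey(using: SHA256.self,
                                          salt: Data("demo-salt".utf8),
                                          sharedInfo: Data(),
                                          outputByteCount: 32)
}

do {
    // Key pairs live on the endpoints; the private halves never leave them.
    let alice = Curve25519.KeyAgreement.PrivateKey()
    let bob   = Curve25519.KeyAgreement.PrivateKey()

    let aliceKey = try sessionKey(mine: alice, theirs: bob.publicKey)
    let bobKey   = try sessionKey(mine: bob, theirs: alice.publicKey)

    // Alice encrypts; the platform relays only this opaque blob.
    let sealed = try ChaChaPoly.seal(Data("meet at 8".utf8), using: aliceKey)
    let wireBytes = sealed.combined   // all the server (or a wiretap) sees

    // Bob decrypts with his independently derived copy of the key.
    let plain = try ChaChaPoly.open(ChaChaPoly.SealedBox(combined: wireBytes),
                                    using: bobKey)
    print(String(decoding: plain, as: UTF8.self))   // "meet at 8"
} catch {
    print("crypto failure: \(error)")
}
```

Any mandated “lawful access” mechanism has to insert a third key, an escrowed copy, or a deliberate weakness somewhere in that flow – and whatever door is cut for the police is equally usable by everyone else listed in point 1.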

Apple made ProtonMail add in-app purchases, even though it had been free for years – this App Store shakedown has a long list of scared victims

one app developer revealed to Congress that it — just like WordPress — had been forced to monetize a largely free app. That developer testified that Apple had demanded in-app purchases (IAP), even though Apple had approved its app without them two years earlier — and that when the dev dared send an email to customers notifying them of the change, Apple threatened to remove the app and blocked all updates.

That developer was ProtonMail, makers of an encrypted email app, and CEO Andy Yen had some fiery words for Apple in an interview with The Verge this week.

We’ve known for months that WordPress and Hey weren’t alone in being strong-armed by the most valuable company in the world, ever since Stratechery’s Ben Thompson reported that 21 different app developers quietly told him they’d been pushed to retroactively add IAP in the wake of those two controversies. But until now, we hadn’t heard of many devs willing to publicly admit it. They were scared.

And they’re still scared, says Yen. Even though Apple changed its rules on September 11th to exempt “free apps acting as a stand-alone companion to a paid web based tool” from the IAP requirement — Apple explicitly said email apps are exempt — ProtonMail still hasn’t removed its own in-app purchases because it fears retaliation from Apple, he says.

He claims other developers feel the same way: “There’s a lot of fear in the space right now; people are completely petrified to say anything.”

He might know. ProtonMail is one of the founding partners of the Coalition for App Fairness, a group that also includes Epic Games, Spotify, Tile, Match, and others who banded together to protest Apple’s rules after having those rules used against them. It’s a group that tried to pull together as many developers as it could to form a united front, but some weren’t as ready to risk Apple’s wrath.

That’s clearly not the case for Yen, though — in our interview, he compares Apple’s tactics to a Mafia protection racket.

“For the first two years we were in the App Store, that was fine, no issues there,” he says. (They’d launched on iOS in 2016.) “But a common practice we see … as you start getting significant uptake in uploads and downloads, they start looking at your situation more carefully, and then as any good Mafia extortion goes, they come to shake you down for some money.”

“We didn’t offer a paid version in the App Store, it was free to download … it wasn’t like Epic where you had an alternative payment option, you couldn’t pay at all,” he relates.

Yen says Apple’s demand came suddenly in 2018. “Out of the blue, one day they said you have to add in-app purchase to stay in the App Store,” he says. “They stumbled upon something in the app that mentioned there were paid plans, they went to the website and saw there was a subscription you could purchase, and then turned around and demanded we add IAP.”

“There’s nothing you can say to that. They are judge, jury, and executioner on their platform, and you can take it or leave it. You can’t get any sort of fair hearing to determine whether it’s justifiable or not justifiable, anything they say goes.”

[…]

Source: Apple made ProtonMail add in-app purchases, even though it had been free for years – The Verge

This is what monopolies will do for you. I have been talking about how big tech is involved in this since 2019 and it’s good to see it finally coming out of the woodwork.

Google is giving data to police based on search keywords: IPs of everyone who searched a certain thing. No warrant required.

There are few things as revealing as a person’s search history, and police typically need a warrant on a known suspect to demand that sensitive information. But a recently unsealed court document found that investigators can request such data in reverse order by asking Google to disclose everyone who searched a keyword rather than for information on a known suspect.

In August, police arrested Michael Williams, an associate of singer and accused sex offender R. Kelly, for allegedly setting fire to a witness’ car in Florida. Investigators linked Williams to the arson, as well as witness tampering, after sending a search warrant to Google that requested information on “users who had searched the address of the residence close in time to the arson.”

The July court filing was unsealed on Tuesday. Detroit News reporter Robert Snell tweeted about the filing after it was unsealed.

Court documents showed that Google provided the IP addresses of people who searched for the arson victim’s address, which investigators tied to a phone number belonging to Williams. Police then used the phone number records to pinpoint the location of Williams’ device near the arson, according to court documents.

The original warrant sent to Google is still sealed, but the report provides another example of a growing trend of data requests to the search engine giant in which investigators demand data on a large group of users rather than a specific request on a single suspect.

“This ‘keyword warrant’ evades the Fourth Amendment checks on police surveillance,” said Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project. “When a court authorizes a data dump of every person who searched for a specific term or address, it’s likely unconstitutional.”

The keyword warrants are similar to geofence warrants, in which police make requests to Google for data on all devices logged in at a specific area and time. Google received 15 times more geofence warrant requests in 2018 compared with 2017, and five times more in 2019 than 2018. The rise in reverse requests from police has troubled Google staffers, according to internal emails.

[…]

Source: Google is giving data to police based on search keywords, court docs show – CNET

Facebook Just Forced Its Most Powerful Critics Offline

Facebook is using its vast legal muscle to silence one of its most prominent critics.

The Real Facebook Oversight Board, a group established last month in response to the tech giant’s failure to get its actual Oversight Board up and running before the presidential election, was forced offline on Wednesday night after Facebook wrote to the internet service provider demanding the group’s website — realfacebookoversight.org — be taken offline.

The group is made up of dozens of prominent academics, activists, lawyers, and journalists whose goal is to hold Facebook accountable in the run-up to the election next month. Facebook’s own Oversight Board, which was announced 13 months ago, will not meet for the first time until later this month, and won’t consider any issues related to the election.

In a letter sent to one of the founders of the RFOB, journalist Carole Cadwalladr, the ISP SupportNation said the website was being taken offline after Facebook complained that the site was involved in “phishing.”

[…]

It’s unclear what evidence Facebook presented to support its claim that RFOB was operating a phishing website.

Typically, ISPs have a dispute resolution process in place that allows the website operator to challenge the allegations. This process can normally take months and ultimately result in a court order being obtained to take a site offline. In this case, there was no warning given.

[…]

Facebook had previously forced another website the group set up — realfacebookoversight.com — offline over alleged copyright infringement.

Facebook denied that it was responsible for the website being taken offline. “This website was automatically flagged by a vendor because it contained the word ‘facebook’ in the domain and action was taken without consulting with us,” a spokesperson told VICE News.

But an email from the ISP, SupportNation, sent to the Real Facebook Oversight Board and viewed by VICE News, links to a message from the original complainant sent in the early hours of Friday morning after the website was taken offline.

The message tells SupportNation that “notices of trademark abuse/trademark infringement were sent out in error.” The message comes from what appears to be a Facebook email address.

[Screenshot: the complainant’s message to SupportNation, sent from what appears to be a Facebook email address]

Facebook said that normally the ISP would confirm requests like this with Facebook first, but “in this instance that did not happen.” A spokesperson added that the message to SupportNation was sent by “a generic email address used by the vendor.”

John Taylor, a spokesperson for Facebook’s actual Oversight Board, told VICE News that the takedown wasn’t something it was “aware of or had any involvement in.” Taylor added that the group doesn’t “think this is a constructive approach. We continue to welcome these efforts and contributions to the debate.”

On Wednesday night, Facebook spokesperson Andy Stone responded to Cadwalladr’s post, saying: “Your fake thing that accuses us of fake things was caught in our thing to prevent fake things.”

Stone did not immediately respond to requests for comment to clarify what he meant by “fake things” in these instances.

“The most extraordinary thing about this whole affair is how it’s exposed the total Trumpification of Facebook’s corporate comms,” Cadwalladr told VICE News. “There is a brazen shamelessness at work here. It’s not just that a company that has used ‘free speech’ as a protective cloak would go after our ISP and drive us off the internet but that its official spokesman responds to such criticism by attacking and trolling journalists.”

[…]

Source: Facebook Just Forced Its Most Powerful Critics Offline

UK privacy watchdog wraps up probe into Cambridge Analytica and… it was all a little bit overblown, no?

The UK’s privacy watchdog has wrapped up its probe into Cambridge Analytica, saying it found no hard evidence to support claims the controversial biz used data scraped from people’s Facebook profiles to influence the Brexit referendum or the US 2016 presidential election. There was no clear evidence of Russian involvement, either.

However, the UK’s privacy watchdog acts in the interests of the UK, and so it may be in its best interest to say: nothing to see here, carry on please…

In a letter [PDF] this month to Julian Knight – chairman of Parliament’s Digital, Culture, Media and Sport Select Committee – the Information Commissioner’s Office detailed the findings of its investigation, having gone through 700TB of data and more than 300,000 documents seized from the now-defunct company.

Crucially, the watchdog said Cambridge Analytica pretty much dealt with information and tools that anyone could have purchased or used if they had the right budget and know-how: there were no special techniques nor hacking. Its raison d’etre – profiling voters to target them with influential ads – was achieved by tapping into Facebook’s highly problematic Graph API at the time, via a third-party quiz app people were encouraged to use, and downloading data from their profile pages and their friends’ pages.

Facebook subsequently dynamited its overly leaky API – the real scandal here – to end any further such slurpage, was fined half a million quid by the ICO, and ordered to cough up $5bn by America’s consumer protection regulator, the FTC. If Cambridge Analytica achieved anything at all, it was blowing the lid off Facebook’s slipshod and cavalier approach to safeguarding netizens’ privacy.

Information Commissioner Elizabeth Denham’s team characterized Cambridge Analytica, and its related outfit SCL Elections, as a bit of a smoke-and-mirrors operation that lacked the sort of game-changing insight it sold to clients, who were told they could use the database of Facebook addicts to micro-target particular key voters with specific advertising to swing their political opinion in one direction or another.

“In summary, we concluded that SCL/CA were purchasing significant volumes of commercially available personal data (at one estimate over 130 billion data points), in the main about millions of US voters, to combine it with the Facebook derived insight information they had obtained from an academic at Cambridge University, Dr Aleksandr Kogan, and elsewhere,” the ICO wrote. Kogan and his company Global Science Research (GSR) were tasked with harvesting 87 million Facebook users’ personal data via the aforementioned quiz app.

“In the main their models were also built from ‘off the shelf’ analytical tools and there was evidence that their own staff were concerned about some of the public statements the leadership of the company were making about their impact and influence.”

El Reg has heard on good authority from sources in British political circles that Cambridge Analytica’s advertised powers of online suggestion were rather overblown and in fact mostly useless. In the end, it was skewered by its own hype, accused of tangibly influencing the Brexit and presidential votes on behalf of political parties and campaigners using Facebook data. Yet, no evidence could be found supporting those claims.

On Brexit, the ICO reckoned Cambridge Analytica just had information on Americans from the social network:

It was suggested that some of the data was utilised for political campaigning associated with the Brexit Referendum. However, our view on review of the evidence is that the data from GSR could not have been used in the Brexit Referendum as the data shared with SCL/Cambridge Analytica by Dr Kogan related to US registered voters.

Cambridge Analytica did appear to do a limited amount of work for Leave.EU but this involved the analysis of UKIP membership data rather than data obtained from Facebook or GSR.

For what it’s worth, the ICO observed that a Canadian outfit called AggregateIQ, which was closely linked to Cambridge Analytica, was recruited by pro-Brexit campaigners to target adverts at British Facebook users.

And on the US elections, we’re told a database of voters was assembled from Facebook records, and that “targeted advertising was ultimately likely the final purpose of the data gathering but whether or which specific data from GSR was then used in any specific part of campaign has not been possible to determine from the digital evidence reviewed.”

And as for Russia: “We did not find any additional evidence of Russian involvement in our analysis of material contained in the SCL / CA servers we obtained,” the ICO stated, adding that this is kinda outside its remit and something for the UK’s National Crime Agency to probe.

Were Cambridge Analytica still around, we imagine some details of the report would be a little embarrassing. Alas, it shut down all operations (sort of) back in 2018.

The ICO report noted how Cambridge Analytica was probably also less than honest with the sales pitches it made to both the Trump and Leave EU campaigns, overstating the amount of data it had collected.

“SCL’s own marketing material claimed they had ‘Over 5,000 data points per individual on 230 million adult Americans’,” the ICO noted. “However, based on what we found it appears that this may have been an exaggeration.”

The company was also taken to task for poor data practices that, even had the political marketing stuff not blown up in public, likely would have landed it in hot water with the ICO.

While Cambridge Analytica may be gone and the ICO investigation concluded, Denham also warned that the tools and techniques it claimed could tip elections are not going away, and are likely to be used in the very near future… and may even work this time.

“What is clear is that the use of digital campaign techniques are a permanent fixture of our elections and the wider democratic process and will only continue to grow in the future,” the commissioner wrote. “The COVID-19 pandemic is only likely to accelerate this process as political parties and campaigns seek to engage with voters in a safe and socially distanced way.”

Source: UK privacy watchdog wraps up probe into Cambridge Analytica and… it was all a little bit overblown, no? • The Register

Europe’s top court confirms no mass surveillance without limits

Europe’s top court has delivered another slap-down to indiscriminate government mass surveillance regimes.

In a ruling today, the CJEU has made it clear that national security concerns do not exempt EU Member States from the need to comply with general principles of EU law, such as proportionality and respect for the fundamental rights to privacy, data protection and freedom of expression.

However, the court has also allowed for derogations, saying that a pressing national security threat can justify limited and temporary bulk data collection and retention — capped to ‘what is strictly necessary’.

Threats to public security or the need to combat serious crime may also allow for targeted retention of data, provided it’s accompanied by ‘effective safeguards’ and reviewed by a court or independent authority.

The reference to the CJEU joined a number of cases, including legal challenges brought by rights advocacy group Privacy International to bulk collection powers baked into the UK’s Investigatory Powers Act; a La Quadrature du Net (and others’) challenge to a 2015 French decree related to specialized intelligence services; and a challenge to Belgium’s 2016 law on collection and retention of comms data.

Civil rights campaigners had been eagerly awaiting today’s judgements from the Grand Chamber, following an opinion by an advisor to the court in January which implied certain EU Member States’ surveillance regimes were breaching the law.

At the time of writing key complainants had yet to issue a response.

Of course a government agency’s definition of how much data collection is ‘strictly necessary’ in a national security context (or, indeed, what constitutes an ‘effective safeguard’) may be rather different to the benchmark of civil rights advocacy groups — so it seems unlikely this ruling will be the last time the CJEU is asked to clarify where the legal limits of mass surveillance lie.

Additionally, the judgement raises interesting questions over the UK’s chances of gaining a data protection adequacy agreement from the European Commission once the Brexit transition period ends later this year – something it needs for digital data flows from the EU to continue uninterrupted as now.

The problem is the UK’s Investigatory Powers Act (IPA) gives government agencies broad powers to intercept and retain digital communications — but here the CJEU is making it clear that such bulk powers must be the exception, not the statutory rule.

So, again, a battle over definitions could be looming…

[…]

Another interesting component of today’s CJEU judgement suggests that in EU states with indiscriminate mass surveillance regimes there could be grounds for overturning individual criminal convictions which are based on evidence obtained via such illegal surveillance.

On this, the court writes in a press release: “As EU law currently stands, it is for national law alone to determine the rules relating to the admissibility and assessment, in criminal proceedings against persons suspected of having committed serious criminal offences, of information and evidence obtained by the retention of data in breach of EU law. However, the Court specifies that the directive on privacy and electronic communications, interpreted in the light of the principle of effectiveness, requires national criminal courts to disregard information and evidence obtained by means of the general and indiscriminate retention of traffic and location data in breach of EU law, in the context of such criminal proceedings, where those persons suspected of having committed criminal offences are not in a position to comment effectively on that information and evidence.”

Update: Privacy International has now responded to the CJEU judgements, saying the UK, French and Belgian surveillance regimes must be amended to be brought within EU law.

In a statement, legal director Caroline Wilson Palow said: “Today’s judgment reinforces the rule of law in the EU. In these turbulent times, it serves as a reminder that no government should be above the law. Democratic societies must place limits and controls on the surveillance powers of our police and intelligence agencies.

“While the Police and intelligence agencies play a very important role in keeping us safe, they must do so in line with certain safeguards to prevent abuses of their very considerable power. They should focus on providing us with effective, targeted surveillance systems that protect both our security and our fundamental rights.”

Source: Europe’s top court confirms no mass surveillance without limits | TechCrunch

The IRS Is Being Investigated for Using Bought Location Data Without a Warrant – wait, there’s a company called Venntel that sells this and that’s OK?

The body tasked with oversight of the IRS announced in a letter that it will investigate the agency’s use of location data harvested from ordinary apps installed on peoples’ phones, according to a copy of the letter obtained by Motherboard.

The move comes after Senators Ron Wyden and Elizabeth Warren demanded a formal investigation into how the IRS used the location data to track Americans without a warrant.

“We are going to conduct a review of this matter, and we are in the process of contacting the CI [Criminal Investigation] division about this review,” the letter, signed by J. Russell George, the Inspector General, and addressed to the Senators, reads. CI has a broad mandate to investigate abusive tax schemes, bankruptcy fraud, identity theft, and many more similar crimes. Wyden’s office provided Motherboard with a copy of the letter on Tuesday.

In June, officials from the IRS Criminal Investigation unit told Wyden’s office that it had purchased location data from a contractor called Venntel, and that the IRS had tried to use it to identify individual criminal suspects. Venntel obtains location data from innocuous looking apps such as games, weather, or e-commerce apps, and then sells access to the data to government clients.

A Wyden aide previously told Motherboard that the IRS wanted to find phones, track where they were at night, use that as a proxy as to where the individual lived, and then use other data sources to try and identify the person. A person who used to work for Venntel previously told Motherboard that Venntel customers can use the tool to see which devices are in a particular house, for instance.

The IRS’ attempts were not successful though, as the people the IRS was looking for weren’t included in the particular Venntel data set, the aide added.

But the IRS still obtained this data without a warrant, and the legal justification for doing so remains unclear. The aide said that the IRS received verbal approval to use the data, but stopped responding to their office’s inquiries.

[…]

Source: The IRS Is Being Investigated for Using Location Data Without a Warrant
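
For a sense of the mechanics: brokers like Venntel don’t need malware, just an SDK bundled into otherwise innocuous apps that already hold location permission. Here’s a deliberately simplified Swift sketch of that pattern – the collector URL, payload format, and class name are hypothetical illustrations, not Venntel’s actual interface:

```swift
import CoreLocation
import Foundation

// Sketch of a data-broker SDK embedded in a weather/game/shopping app.
// It assumes the HOST app has already obtained location permission;
// the SDK simply rides along on that grant.
final class BrokerSDK: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    // Hypothetical collection endpoint.
    private let collector = URL(string: "https://collector.example.com/v1/pings")!
    // A stable per-device ID lets the broker stitch individual pings
    // into a movement history for that device.
    private let deviceID = UUID().uuidString

    func start() {
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyHundredMeters
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager,
                         didUpdateLocations locations: [CLLocation]) {
        guard let loc = locations.last else { return }
        // Each ping records who (device), where, and when. Enough pings
        // at one address overnight and you've found where someone lives.
        let ping: [String: Any] = [
            "device": deviceID,
            "lat": loc.coordinate.latitude,
            "lon": loc.coordinate.longitude,
            "ts": loc.timestamp.timeIntervalSince1970,
        ]
        var req = URLRequest(url: collector)
        req.httpMethod = "POST"
        req.setValue("application/json", forHTTPHeaderField: "Content-Type")
        req.httpBody = try? JSONSerialization.data(withJSONObject: ping)
        URLSession.shared.dataTask(with: req).resume()
    }
}
```

Aggregate those pings across thousands of apps and you get exactly the product described above: query a house, see which devices sleep there – no warrant involved at the point of sale.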

Facebook revenue chief says ad-supported model is ‘under assault’ – boo hoo, turns out people like their privacy

Facebook Chief Revenue Officer David Fischer said Tuesday that the economic models that rely on personalized advertising are “under assault” as Apple readies a change that would limit the ability of Facebook and other companies to target ads and estimate how well they work.

The change to Apple’s identifier for advertisers, or IDFA, will give iPhone users the option to block tracking when opening an app. It was originally planned for iOS 14, the version of the iPhone operating system that was released last month. But Apple said last month it was delaying the rollout until 2021 “to give developers time to make necessary changes.”

Fischer, speaking at a virtual Advertising Week session Tuesday, spoke about the changes after being asked about Facebook’s vulnerability to the companies that control mobile platforms, such as Apple and Google, which runs Android.

Fischer argued that though there’s “angst and concern” about the risks of technology, personalized and targeted advertising has been essential to help the internet grow.

“The economic model that not just we at Facebook but so many businesses rely on, this model is worth preserving, one that makes content freely available, and the business that makes it run and hum, is via advertising,” he said.

“And right now, frankly, some of that is under assault, that the very tools that entrepreneurs, that businesses are relying on right now are being threatened. To me, the changes that Apple has proposed, pretty sweeping changes, are going to hurt developers and businesses the most.”

Apple frames the change as preserving users’ privacy, rather than as an attack on the advertising industry, and has been promoting its privacy features as a core reason to get an iPhone. It comes as consumers are increasingly wary about their online privacy following scandals with various companies, including Facebook.

[…]

Source: Facebook revenue chief says ad-supported model is ‘under assault’
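
For context on what Apple’s change looks like from the developer side: under iOS 14’s AppTrackingTransparency framework, the IDFA is gated behind an explicit per-app system prompt, and a declined (or never-made) request returns an all-zero identifier. A minimal sketch – it assumes the prompt text has been set in the app’s Info.plist under NSUserTrackingUsageDescription:

```swift
import AppTrackingTransparency
import AdSupport

// iOS 14+: the advertising identifier requires an explicit opt-in.
func fetchAdvertisingID(completion: @escaping (String?) -> Void) {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // User opted in: hand back the device-wide advertising ID.
            completion(ASIdentifierManager.shared().advertisingIdentifier.uuidString)
        case .denied, .restricted, .notDetermined:
            // No opt-in: the IDFA reads as all zeroes, useless for
            // cross-app tracking, so report it as unavailable.
            completion(nil)
        @unknown default:
            completion(nil)
        }
    }
}

// Usage: an ad SDK would call this before attaching the ID to ad requests.
fetchAdvertisingID { idfa in
    print("IDFA:", idfa ?? "unavailable – tracking not authorized")
}
```

That single prompt is the whole “assault”: targeted advertising keeps working for users who say yes. Fischer’s worry is simply that, given a clear choice, most people won’t.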