How the U.S. Military Buys Location Data from Ordinary Apps

The U.S. military is buying the granular movement data of people around the world, harvested from innocuous-seeming apps, Motherboard has learned. The most popular app among those Motherboard analyzed that is connected to this sort of data sale is a Muslim prayer and Quran app with more than 98 million downloads worldwide. Others include a Muslim dating app, a popular Craigslist app, an app for following storms, and a “level” app that can be used to help, for example, install shelves in a bedroom.

Through public records, interviews with developers, and technical analysis, Motherboard uncovered two separate, parallel data streams that the U.S. military uses, or has used, to obtain location data. One relies on a company called Babel Street, which creates a product called Locate X. U.S. Special Operations Command (USSOCOM), a branch of the military tasked with counterterrorism, counterinsurgency, and special reconnaissance, bought access to Locate X to assist on overseas special forces operations. The other stream is through a company called X-Mode, which obtains location data directly from apps, then sells that data to contractors, and by extension, the military.

The news highlights the opaque location data industry and the fact that the U.S. military, which has infamously used other location data to target drone strikes, is purchasing access to sensitive data. Many of the users of apps involved in the data supply chain are Muslim, which is notable considering that the United States has waged a decades-long war on predominantly Muslim terror groups in the Middle East, and has killed hundreds of thousands of civilians during its military operations in Pakistan, Afghanistan, and Iraq. Motherboard does not know of any specific operations in which this type of app-based location data has been used by the U.S. military.

[…]

In March, tech publication Protocol first reported that U.S. law enforcement agencies such as Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE) were using Locate X. Motherboard then obtained an internal Secret Service document confirming the agency’s use of the technology. Some government agencies, including CBP and the Internal Revenue Service (IRS), have also purchased access to location data from another vendor called Venntel.

“In my opinion, it is practically certain that foreign entities will try to leverage (and are almost certainly actively exploiting) similar sources of private platform user data. I think it would be naïve to assume otherwise,” Mark Tallman, assistant professor at the Department of Emergency Management and Homeland Security at the Massachusetts Maritime Academy, told Motherboard in an email.

THE SUPPLY CHAIN

Some companies obtain app location data through bidstream data, which is information gathered from the real-time bidding that occurs when advertisers pay to insert their adverts into people’s browsing sessions. Firms also often acquire the data from software development kits (SDKs).

[…]

In a recent interview with CNN, X-Mode CEO Joshua Anton said the company tracks 25 million devices inside the United States every month, and 40 million elsewhere, including in the European Union, Latin America, and the Asia-Pacific region. X-Mode previously told Motherboard that its SDK is embedded in around 400 apps.

In October the Australian Competition & Consumer Commission published a report about data transfers by smartphone apps. A section of that report included the endpoint, the URL some apps use to send location data back to X-Mode. Developers of the Guardian app, which is designed to protect users from the transfer of location data, also published the endpoint. Motherboard then used that endpoint to discover which specific apps were sending location data to the broker.

Motherboard used network analysis software to observe both the Android and iOS versions of the Muslim Pro app sending granular location data to the X-Mode endpoint multiple times. Will Strafach, an iOS researcher and founder of Guardian, said he also saw the iOS version of Muslim Pro sending location data to X-Mode.

The data transfer also included the name of the wifi network the phone was currently connected to, a timestamp, and information about the phone such as its model, according to Motherboard’s tests.
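
For illustration, here is a hypothetical reconstruction of the kind of record described above, based only on the fields Motherboard reports seeing (granular location, wifi network name, timestamp, phone model). The field names are illustrative guesses, not X-Mode’s actual schema.

```python
# Hypothetical sketch of an SDK location payload. Field names and values are
# placeholders assembled from the fields described in the article, not a real
# capture of X-Mode traffic.
import json
import time

payload = {
    "device_id": "ad-id-placeholder",   # mobile advertising identifier (placeholder)
    "model": "Pixel 3",                 # phone model
    "lat": 52.37019,                    # precise GPS coordinates
    "lon": 4.89510,
    "wifi_ssid": "HomeNetwork-5G",      # currently connected wifi network
    "timestamp": int(time.time()),      # when the location fix was taken
}

print(json.dumps(payload, indent=2))
```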

[…]

Source: How the U.S. Military Buys Location Data from Ordinary Apps

GitHub Restores YouTube Downloader Following DMCA Takedown, starts to protect developers from DMCA misuse

Last month, GitHub removed a popular tool that is used to download videos from websites like YouTube after it received a DMCA takedown notice from the Recording Industry Association of America. For a moment, it seemed that GitHub might throw developers under the bus in the same fashion that Twitch has recently treated its streamers. But on Monday, GitHub went on the offense by reinstating the offending tool and saying it would take a more aggressive line on protecting developers’ projects.

Youtube-dl is a command-line program that could, hypothetically, be used to make unauthorized copies of copyrighted material. This potential for abuse prompted the RIAA to send GitHub a scary takedown notice because that’s what the RIAA does all day. The software development platform complied with the notice and unleashed a user outcry over the loss of one of the most popular repositories on the site. Many developers started re-uploading the code to GitHub in protest. After taking some time to review the case, GitHub now says that youtube-dl is all good.

In a statement, GitHub’s Director of Platform Policy Abby Vollmer wrote that there are two reasons that it was able to reverse the decision. The first reason is that the RIAA cited one repo that used the youtube-dl source code and contained references to a few copyrighted songs on YouTube. This was only part of a unit test that the code performs. It listens to a few seconds of the song to verify that everything is working properly but it doesn’t download or distribute any material. Regardless, GitHub worked with the developer to patch out the references and stay on the safe side.

As for the primary youtube-dl source code, lawyers at the Electronic Frontier Foundation decided to represent the developers and presented an argument that resolved GitHub’s concern that the code circumvents technical measures protecting copyrighted material in violation of Section 1201 of the Digital Millennium Copyright Act. The EFF explained that youtube-dl doesn’t decrypt anything or break through any anti-copying measures. From a technical standpoint, it isn’t much different from a web browser receiving information as intended, and there are plenty of fair use applications for making a copy of materials.

Among the “many legitimate purposes” for using youtube-dl, GitHub listed: “changing playback speeds for accessibility, preserving evidence in the fight for human rights, aiding journalists in fact-checking, and downloading Creative Commons-licensed or public domain videos.” The EFF cited some of the same practical uses and had a few unique additions to its list of benefits, saying that it could be used by “educators to save videos for classroom use, by YouTubers to save backup copies of their own uploaded videos, and by users worldwide to watch videos on hardware that can’t run a standard web browser, or to watch videos in their full resolution over slow or unreliable Internet connections.”
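
As a minimal sketch of one of those legitimate uses, here is how a Creative Commons-licensed or public domain video could be saved for offline viewing via youtube-dl’s Python interface; the URL below is a placeholder, not a real video.

```python
# Minimal sketch: saving a CC-licensed or public domain video for offline
# viewing, one of the legitimate uses listed above. Requires the youtube-dl
# package (pip install youtube_dl); the URL is a placeholder.
import youtube_dl

options = {
    "format": "best",                 # best available single-file format
    "outtmpl": "%(title)s.%(ext)s",   # save under the video's own title
}

with youtube_dl.YoutubeDL(options) as ydl:
    ydl.download(["https://example.com/a-cc-licensed-video"])
```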

It’s nice to see GitHub evaluating the argument and moving forward without waiting for a legal process to play out, but the company went further in announcing a new eight-step process for evaluating claims related to Section 1201 that will err on the side of developers. GitHub is also establishing a million-dollar legal fund to provide assistance to open source developers fighting off unwarranted takedown notices. Mea culpa, mea culpa!

Finally, the company said that it would work to improve the law around DMCA notices and it will be “advocating specifically on the anti-circumvention provisions of the DMCA to promote developers’ freedom to build socially beneficial tools like youtube-dl.”

Along with today’s announcement, GitHub CEO Nat Friedman tweeted, “Section 1201 of the DMCA is broken and needs to be fixed. Developers should have the freedom to tinker.”

Source: GitHub Restores YouTube Downloader Following DMCA Takedown

It’s nice to see a large company come down on the right side of copyright for a change.

Your Computer isn’t Yours – Apple edition – how is it snooping on you, why can’t you start apps when their server is down

It’s here. It happened. Did you notice?

I’m speaking, of course, of the world that Richard Stallman predicted in 1997. The one Cory Doctorow also warned us about.

On modern versions of macOS, you simply can’t power on your computer, launch a text editor or eBook reader, and write or read, without a log of your activity being transmitted and stored.

It turns out that in the current version of macOS, the OS sends Apple a hash (unique identifier) of each and every program you run, when you run it. Lots of people didn’t realize this, because it’s silent and invisible and it fails instantly and gracefully when you’re offline. But today the server got really slow, requests didn’t hit the fail-fast code path, and everyone’s apps failed to open if they were connected to the internet.

Because it does this using the internet, the server sees your IP, of course, and knows what time the request came in. An IP address allows for coarse, city-level and ISP-level geolocation, and allows for a table that has the following headings:

Date, Time, Computer, ISP, City, State, Application Hash

Apple (or anyone else) can, of course, calculate these hashes for common programs: everything in the App Store, the Creative Cloud, Tor Browser, cracking or reverse engineering tools, whatever.
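
As a minimal sketch of how such a lookup table could be built, the snippet below computes a content hash for a given application binary. SHA-256 is an assumption here, since the post doesn’t specify which hash macOS actually transmits; the point is only that anyone with the same binary can compute the same identifier.

```python
# Minimal sketch: compute a content hash for a known application binary, the
# kind of value that could be matched against a precomputed table of popular
# programs. SHA-256 is an assumption, not Apple's documented scheme.
import hashlib
import sys

def file_hash(path: str) -> str:
    """SHA-256 of a file's contents, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    # e.g. python hash_app.py "/Applications/Tor Browser.app/Contents/MacOS/firefox"
    print(file_hash(sys.argv[1]))
```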

This means that Apple knows when you’re at home. When you’re at work. What apps you open there, and how often. They know when you open Premiere over at a friend’s house on their Wi-Fi, and they know when you open Tor Browser in a hotel on a trip to another city.

“Who cares?” I hear you asking.

Well, it’s not just Apple. This information doesn’t stay with them:

  1. These OCSP requests are transmitted unencrypted. Everyone who can see the network can see these, including your ISP and anyone who has tapped their cables.
  2. These requests go to a third-party CDN run by another company, Akamai.
  3. Since October of 2012, Apple has been a partner in the US military intelligence community’s PRISM spying program, which grants the US federal police and military unfettered access to this data without a warrant, any time they ask for it. In the first half of 2019 they did this over 18,000 times, and another 17,500+ times in the second half of 2019.

This amounts to a tremendous trove of data about your life and habits, and allows someone possessing all of it to identify your movement and activity patterns. For some people, this can even pose a physical danger.

Now, it’s been possible up until today to block this sort of stuff on your Mac using a program called Little Snitch (really, the only thing keeping me using macOS at this point). In the default configuration, it blanket allows all of this computer-to-Apple communication, but you can disable those default rules and go on to approve or deny each of these connections, and your computer will continue to work fine without snitching on you to Apple.

The version of macOS that was released today, 11.0, also known as Big Sur, has new APIs that prevent Little Snitch from working the same way. The new APIs don’t permit Little Snitch to inspect or block any OS level processes. Additionally, the new rules in macOS 11 even hobble VPNs so that Apple apps will simply bypass them.
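
Source: Your Computer isn’t Yours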

Google CEO apologises for document outlining how to counter new EU rules by attacking rulemaker, EU’s Breton warns internet is not Wild West

Alphabet (GOOGL.O) CEO Sundar Pichai has apologised to Europe’s industry chief Thierry Breton over a leaked internal document proposing tactics to counter the EU’s tough new rules on internet companies and lobby against the EU commissioner.

[…]

The call came after a Google internal document outlined a 60-day strategy to attack the European Union’s push for the new rules by getting U.S. allies to push back against Breton.

[…]

The incident underlines the intense lobbying by tech companies against the proposed EU rules, which could impede their businesses and force changes in how they operate.

Breton also warned Pichai about the excesses of the internet.

“The Internet cannot remain a ‘Wild West’: we need clear and transparent rules, a predictable environment and balanced rights and obligations,” he told Pichai.

Breton will announce new draft rules known as the Digital Services Act and the Digital Markets Act together with European Competition Commissioner Margrethe Vestager on Dec. 2.

The rules will set out a list of do’s and don’ts for gatekeepers – online companies with market power – forcing them to share data with rivals and regulators and not to promote their services and products unfairly.

EU antitrust chief Margrethe Vestager has levied fines totalling 8.25 billion euros ($9.7 billion) against Google in the past three years for abusing its market power to favour its shopping comparison service, its Android mobile operating system and its advertising business.

Breton told Pichai that he would increase the EU’s power to curb unfair behaviour by gatekeeping platforms, so that the Internet does not just benefit a handful of companies but also Europe’s small- and medium-sized enterprises and entrepreneurs.

Source: Google CEO apologises for document, EU’s Breton warns internet is not Wild West | Reuters

Mozilla *privacy not included tech buyer’s guide: products rated on a creepy scale

This is a list of 130 Smart home gadgets, fitness trackers, toys and more, rated for their privacy & security. It’s a large list and shows you how basically anything by big tech is pretty creepy – anything by Amazon and Facebook is super creepy, Google pretty creepy, Apple only creepy. There are a few surprises, like Moleskine being super creepy. Fitness machinery is pretty bad as are some coffee makers… Nintendo Switches and PS5s (surprisingly) aren’t creepy at all…

Source: Mozilla – *privacy not included

New lawsuit: Why do Android phones mysteriously exchange 260MB a month with Google via cellular data when they’re not even in use? Also Apple + ad fraud

Google on Thursday was sued for allegedly stealing Android users’ cellular data allowances through unapproved, undisclosed transmissions to the web giant’s servers.

The lawsuit, Taylor et al v. Google [PDF], was filed in a US federal district court in San Jose on behalf of four plaintiffs based in Illinois, Iowa, and Wisconsin in the hope the case will be certified by a judge as a class action.

The complaint contends that Google is using Android users’ limited cellular data allowances without permission to transmit information about those individuals that’s unrelated to their use of Google services.

Data sent over Wi-Fi is not at issue, nor is data sent over a cellular connection in the absence of Wi-Fi when an Android user has chosen to use a network-connected application. What concerns the plaintiffs is data sent to Google’s servers that isn’t the result of deliberate interaction with a mobile device – we’re talking passive or background data transfers via cell network, here.

[…]

Android users have to accept four agreements to participate in the Google ecosystem: Terms of Service; the Privacy Policy; the Managed Google Play Agreement; and the Google Play Terms of Service. None of these, the court filing contends, disclose that Google spends users’ cellular data allowances for these background transfers.

To support the allegations, the plaintiffs’ counsel tested a new Samsung Galaxy S7 phone running Android, with a signed-in Google Account and default settings, and found that when left idle, without a Wi-Fi connection, the phone “sent and received 8.88 MB/day of data, with 94 per cent of those communications occurring between Google and the device.”

The device, stationary, with all apps closed, transferred data to Google about 16 times an hour, or about 389 times in 24 hours. Assuming even half of that data is outgoing, Google would receive about 4.4MB per day or 130MB per month in this manner per device subject to the same test conditions.

Putting worries of what could be in that data to one side, based on an average price of $8 per GB of data in the US, that 130MB works out to about $1 lost to Google data gathering per month – if the device is disconnected from Wi-Fi the entire time and does all its passive transmission over a cellular connection.
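
The arithmetic can be checked directly; a quick sketch using only the figures from the complaint:

```python
# Working through the complaint's numbers as described above.
mb_per_day_total = 8.88      # idle cellular transfer measured by plaintiffs' counsel
outgoing_share = 0.5         # "assuming even half of that data is outgoing"
price_per_gb_usd = 8.00      # average US price per GB cited in the article

mb_per_day_to_google = mb_per_day_total * outgoing_share    # ~4.44 MB/day
mb_per_month = mb_per_day_to_google * 30                    # ~133 MB/month
dollars_per_month = mb_per_month / 1000 * price_per_gb_usd  # ~$1.07

print(f"{mb_per_day_to_google:.2f} MB/day, {mb_per_month:.0f} MB/month, "
      f"${dollars_per_month:.2f}/month")
```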

An iPhone with Apple’s Safari browser open in the background transmits only about a tenth of that amount to Apple, according to the complaint.

Much of the transmitted data, it’s claimed, are log files that record network availability, open apps, and operating system metrics. Google could have delayed transmitting these files until a Wi-Fi connection was available, but chose instead to spend users’ cell data so it could gather data at all hours.

Vanderbilt University Professor Douglas C. Schmidt performed a similar study in 2018 – except that the Chrome browser was open – and found that Android devices made 900 passive transfers in 24 hours.

Under active use, Android devices transfer about 11.6MB of data to Google servers daily, or 350MB per month, it’s claimed, which is about half the amount transferred by an iPhone.

The complaint charges that Google conducts these undisclosed data transfers to further its advertising business, sending “tokens” that identify users for targeted advertising and preloading ads that generate revenue even if they’re never displayed.

“Users often never view these pre-loaded ads, even though their cellular data was already consumed to download the ads from Google,” the legal filing claims. “And because these pre-loads can count as ad impressions, Google is paid for transmitting the ads.”

Source: New lawsuit: Why do Android phones mysteriously exchange 260MB a month with Google via cellular data when they’re not even in use? • The Register

Six Reasons Why Google Maps Is the Creepiest App On Your Phone

VICE has highlighted six reasons why Google Maps is the creepiest app on your phone. An anonymous reader shares an excerpt from the report: 1. Google Maps Wants Your Search History: Google’s “Web & App Activity” settings describe how the company collects data, such as user location, to create a faster and “more personalized” experience. In plain English, this means that every single place you’ve looked up in the app — whether it’s a strip club, a kebab shop or your moped-riding drug dealer’s location — is saved and integrated into Google’s search engine algorithm for a period of 18 months. Google knows you probably find this creepy. That’s why the company uses so-called “dark patterns” — user interfaces crafted to coax us into choosing options we might not otherwise, for example by highlighting an option with certain fonts or brighter colors.

2. Google Maps Limits Its Features If You Don’t Share Your Search History: If you open your Google Maps app, you’ll see a circle in the top right corner that signifies you’re logged in with your Google account. That’s not necessary, and you can simply log out. Of course, the log out button is slightly hidden, but can be found like this: click on the circle > Settings > scroll down > Log out of Google Maps. Unfortunately, Google Maps won’t let you save frequently visited places if you’re not logged into your Google account. If you choose not to log in, when you click on the search bar you get a “Tired of typing?” button, suggesting you sign in, and coaxing you towards more data collection.

3. Google Maps Can Snitch On You: Another problematic feature is the “Google Maps Timeline,” which “shows an estimate of places you may have been and routes you may have taken based on your Location History.” With this feature, you can look at your personal travel routes on Google Maps, including the means of transport you probably used, such as a car or a bike. The obvious downside is that your every move is known to Google, and to anyone with access to your account. And that’s not just hackers — Google may also share data with government agencies such as the police. […] If your “Location History” is on, your phone “saves where you go with your devices, even when you aren’t using a specific Google service,” as is explained in more detail on this page. This feature is useful if you lose your phone, but also turns it into a bona fide tracking device.

4. Google Maps Wants to Know Your Habits: Google Maps often asks users to share a quick public rating. “How was Berlin Burger? Help others know what to expect,” suggests the app after you’ve picked up your dinner. This feels like a casual, lighthearted question and relies on the positive feeling we get when we help others. But all this info is collected in your Google profile, making it easier for someone to figure out if you’re visiting a place briefly and occasionally (like on holiday) or if you live nearby.

5. Google Maps Doesn’t Like It When You’re Offline: Remember GPS navigation? It might have been clunky and slow, but it’s a good reminder that you don’t need to be connected to the internet to be directed. In fact, other apps offer offline navigation. On Google, you can download maps, but offline navigation is only available for cars. It seems fairly unlikely the tech giant can’t figure out how to direct pedestrians and cyclists without internet.

6. Google Makes It Seem Like This Is All for Your Own Good: “Providing useful, meaningful experiences is at the core of what Google does,” the company says on its website, adding that knowing your location is important for this reason. They say they use this data for all kinds of useful things, like “security” and “language settings” — and, of course, selling ads. Google also sells advertisers the possibility to evaluate how well their campaigns reached their target (that’s you!) and how often people visited their physical shops “in an anonymized and aggregated manner”. But only if you opt in (or you forget to opt out).

Source: Six Reasons Why Google Maps Is the Creepiest App On Your Phone – Slashdot

It Took Just 5 Minutes Of Movement Data To Identify ‘Anonymous’ VR Users

As companies and governments increasingly hoover up our personal data, a common refrain to keep people from worrying is the claim that nothing can go wrong because the data itself is “anonymized” — or stripped of personal identifiers like social security numbers. But time and time again, studies have shown how this really is cold comfort, given it takes only a little effort to pretty quickly identify a person based on access to other data sets. Yet most companies, many privacy policy folk, and even government officials still like to act as if “anonymizing” your data means something.

The latest case in point: new research out of Stanford (first spotted by the German website Mixed), found that it took researchers just five minutes of examining the movement data of VR users to identify them in the real world. The paper says participants using an HTC Vive headset and controllers watched five 20-second clips from a randomized set of 360-degree videos, then answered a set of questions in VR that were tracked in a separate research paper.

The movement data (including height, posture, head movement speed, and what participants looked at and for how long) was then plugged into three machine learning algorithms, which, from a pool of 511 participants, were able to correctly identify 95% of users “when trained on less than 5 min of tracking data per person.” The researchers went on to note that while VR headset makers (like every other company) assure users that “de-identified” or “anonymized” data would protect their identities, that’s really not the case:

“In both the privacy policy of Oculus and HTC, makers of two of the most popular VR headsets in 2020, the companies are permitted to share any de-identified data,” the paper notes. “If the tracking data is shared according to rules for de-identified data, then regardless of what is promised in principle, in practice taking one’s name off a dataset accomplishes very little.”
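
To make the setup concrete, here is a toy sketch of the re-identification idea: a classifier trained on short windows of per-user motion features, then asked which user produced an unseen window. The features, model choice, and synthetic data are all illustrative; this is not the Stanford paper’s pipeline.

```python
# Toy re-identification sketch: each user has a stable motion "signature"
# (posture, height, head speed, etc.); a classifier learns to match new
# windows of movement data back to the user. Data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, windows_per_user, n_features = 50, 40, 12

# Stable per-user signature plus per-window noise, mimicking how individual
# movement habits persist across different video clips.
signatures = rng.normal(size=(n_users, n_features))
X = np.repeat(signatures, windows_per_user, axis=0)
X += rng.normal(scale=0.3, size=X.shape)
y = np.repeat(np.arange(n_users), windows_per_user)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(f"re-identification accuracy: {clf.score(X_test, y_test):.0%}")
```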

If you don’t like this study, there’s just an absolute ocean of research over the last decade making the same point: “anonymized” or “de-identified” doesn’t actually mean “anonymous.” Researchers from the University of Washington and the University of California, San Diego, for example, found that they could identify drivers based on just 15 minutes’ worth of data collected from brake pedal usage alone. Researchers from Stanford and Princeton universities found that they could correctly identify an “anonymized” user 70% of the time just by comparing their browsing data to their social media activity.

[…]

Source: It Took Just 5 Minutes Of Movement Data To Identify ‘Anonymous’ VR Users | Techdirt

Analysis of Trump’s tweets reveals systematic diversion of the media

President Donald Trump’s controversial use of social media is widely known and theories abound about its ulterior motives. New research published today in Nature Communications claims to provide the first evidence-based analysis demonstrating the US President’s Twitter account has been routinely deployed to divert attention away from a topic potentially harmful to his reputation, in turn suppressing negative related media coverage.

The international study, led by the University of Bristol in the UK, tested two hypotheses: whether an increase in harmful media coverage was followed by increased diversionary Twitter activity, and if such diversion successfully reduced subsequent media coverage of the harmful topic.

[…]

The study focused on Trump’s first two years in office, scrutinising the Robert Mueller investigation into potential collusion with Russia in the 2016 Presidential Election, as this was politically harmful to the President. The team analysed content relating to Russia and the Mueller investigation in two of the country’s most politically neutral media outlets, New York Times (NYT) and ABC World News Tonight (ABC). The team also selected a set of keywords judged to play to Trump’s preferred topics at the time, which were hypothesized to be likely to appear in diversionary tweets. The keywords related to “jobs”, “China”, and “immigration”; topics representing the president’s supposed political strengths.

The researchers hypothesized that the more ABC and NYT reported on the Mueller investigation, the more Trump’s tweets would mention jobs, China, and immigration, which in turn would result in less coverage of the Mueller investigation by ABC and NYT.

In support of their hypotheses, the team found that every five additional ABC headlines relating to the Mueller investigation were associated with one more mention of a keyword in Trump’s tweets. In turn, two additional mentions of one of the keywords in a Trump tweet were associated with roughly one less mention of the Mueller investigation in the following day’s NYT.

Such a pattern did not emerge with placebo topics that presented no threat to the President, for instance Brexit or other non-political issues such as football or gardening.
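
As a rough sketch of the two relationships being tested, the following simulates daily counts with effect sizes matching those reported (one extra keyword tweet per five extra headlines; half a mention less next-day coverage per extra keyword tweet) and recovers them with ordinary least squares. The real study used actual coded media content, not simulated data.

```python
# Toy sketch of the two regressions implied above: (1) do Mueller headlines
# predict same-day diversionary keyword tweets, and (2) do those tweets
# predict less Mueller coverage the next day? All data is simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
days = 730  # roughly the first two years in office

abc_mueller = rng.poisson(5, days).astype(float)        # daily ABC Mueller headlines
tweets = 0.2 * abc_mueller + rng.normal(0, 1, days)     # ~1 extra tweet per 5 headlines
nyt_next = 6 - 0.5 * tweets + rng.normal(0, 1, days)    # next-day NYT Mueller coverage

m1 = sm.OLS(tweets, sm.add_constant(abc_mueller)).fit()
m2 = sm.OLS(nyt_next, sm.add_constant(tweets)).fit()
print(f"headlines -> same-day keyword tweets: slope {m1.params[1]:+.2f}")
print(f"keyword tweets -> next-day NYT:       slope {m2.params[1]:+.2f}")
```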

[…]

Professor Lewandowsky said: “It’s unclear whether President Trump, or whoever is at the helm of his Twitter account, engages in such tactics intentionally or if it’s mere intuition. Either way, we hope these results serve as a helpful reminder to the that they have the power to set the news agenda, focusing on the topics they deem most important, while perhaps not paying so much attention to the Twitter-sphere.”

Source: Analysis of Trump’s tweets reveals systematic diversion of the media

To Prevent Free, Frictionless Access To Human Knowledge, Publishers Want Librarians To Be Afraid, Very Afraid

After many years of fierce resistance to open access, academic publishers have largely embraced — and extended — the idea, ensuring that their 35-40% profit margins live on. In the light of this subversion of the original hopes for open access, people have come up with other ways to provide free and frictionless access to knowledge — most of which is paid for by taxpayers around the world. One is preprints, which are increasingly used by researchers to disseminate their results widely, without needing to worry about payment or gatekeepers. The other is through sites that have taken it upon themselves to offer immediate access to large numbers of academic papers — so-called “shadow libraries”. The most famous of these sites is Sci-Hub, created by Alexandra Elbakyan. At the time of writing, Sci-Hub claims to hold 79 million papers.

Even academics with access to publications through their institutional subscriptions often prefer to use Sci-Hub, because it is so much simpler and quicker. In this respect, Sci-Hub stands as a constant reproach to academic publishers, emphasizing that their products aren’t very good in terms of serving libraries, which are paying expensive subscriptions for access. Not surprisingly, then, Sci-Hub has become Enemy No. 1 for academic publishers in general, and the leading company Elsevier in particular. The German site Netzpolitik has spotted the latest approach being taken by publishers to tackle this inconvenient and hugely successful rival, and other shadow libraries. At its heart lies the Scholarly Networks Security Initiative (SNSI), which was founded by Elsevier and other large publishers earlier this year. Netzpolitik explains that the idea is to track and analyze every access to libraries, because “security”

[…]

Since academic publishers can’t compete against Sci-Hub on ease of use or convenience, they are trying the old “security risk” angle — also used by traditional software companies against open source in the early days. Yes, they say, Sci-Hub/open source may seem free and better, but think of the terrible security risks… An FAQ on the main SNSI site provides an “explanation” of why Sci-Hub is supposedly a security risk:

[…]

As Techdirt pointed out when that Washington Post article came out, there is no evidence of any connections between Elbakyan and Russian Intelligence. Indeed, it’s hard not to see the investigation as simply the result of whining academic publishers making the same baseless accusation, and demanding that something be “done“. An article in Research Information provides more details about what those “wider ramifications than just getting access to content that sits behind a paywall” might be:

In the specific case of Sci-Hub, academic content (journal articles and books) is illegally harvested using a variety of methods, such as abusing legitimate log in credentials to access the secure computer networks of major universities and by hijacking “proxy” credentials of legitimate users that facilitate off campus remote access to university computer systems and databases. These actions result in a front door being opened up into universities’ networks through which Sci-Hub, and potentially others, can gain access to other valuable institutional databases such as personnel and medical records, patent information, and grant details.

But that’s not how things work in this context. The credentials of legitimate users that Sci-Hub draws on — often gladly “lent” by academics who believe papers should be made widely available — are purely to access articles held on the system. They do not provide access to “other valuable institutional databases” — and certainly not sensitive information such as “personnel and medical records” — unless they are designed by complete idiots. That is pure scaremongering, while this further claim is just ridiculous:

Such activities threaten the scholarly communications ecosystem and the integrity of the academic record. Sci-Hub has no incentive to ensure the accuracy of the research articles being accessed, no incentive to ensure research meets ethical standards, and no incentive to retract or correct if issues arise.

Sci-Hub simply provides free, frictionless access for everyone to existing articles from academic publishers. The articles are still as accurate and ethical as they were when they first appeared. To accuse Sci-Hub of “threatening” the scholarly communications ecosystem by providing universal access is absurd. It’s also revealing of the traditional publishers’ attitude to the uncontrolled dissemination of publicly-funded human knowledge, which is what they really fear and are attacking with the new SNSI campaign.

Source: To Prevent Free, Frictionless Access To Human Knowledge, Publishers Want Librarians To Be Afraid, Very Afraid | Techdirt

Police Will Pilot a Program to Live-Stream Amazon Ring Cameras

This is not a drill. Red alert: The police surveillance center in Jackson, Mississippi, will be conducting a 45-day pilot program to live stream the Amazon Ring cameras of participating residents.

Since Ring first made a splash in the private security camera market, we’ve been warning of its potential to undermine the civil liberties of its users and their communities. We’ve been especially concerned with Ring’s 1,000+ partnerships with local police departments, which facilitate bulk footage requests directly from users without oversight or having to acquire a warrant.

While people buy Ring cameras and put them on their front door to keep their packages safe, police use them to build comprehensive CCTV camera networks blanketing whole neighborhoods. This serves two purposes for police. First, it allows police departments to avoid the cost of buying surveillance equipment and to put that burden onto consumers by convincing them they need cameras to keep their property safe. Second, it evades the natural reaction of fear and distrust that many people would have if they learned police were putting up dozens of cameras on their block, one for every house.

Now, our worst fears have been confirmed. Police in Jackson, Mississippi, have started a pilot program that would allow Ring owners to patch the camera streams from their front doors directly to a police Real Time Crime Center. The footage from your front door includes you coming and going from your house, your neighbors taking out the trash, and the dog walkers and delivery people who do their jobs in your street. In Jackson, this footage can now be live streamed directly onto a dozen monitors scrutinized by police around the clock. Even if you refuse to allow your footage to be used that way, your neighbor’s camera pointed at your house may still be transmitting directly to the police.

[…]

Source: Police Will Pilot a Program to Live-Stream Amazon Ring Cameras | Electronic Frontier Foundation

Brave browser first to nix CNAME deception, the sneaky DNS trick used by marketers to duck privacy controls

The Brave web browser will soon block CNAME cloaking, a technique used by online marketers to defy privacy controls designed to prevent the use of third-party cookies.

The browser security model makes a distinction between first-party domains – those being visited – and third-party domains – from the suppliers of things like image assets or tracking code, to the visited site. Many of the online privacy abuses over the years have come from third-party resources like scripts and cookies, which is why third-party cookies are now blocked by default in Brave, Firefox, Safari, and Tor Browser.

Microsoft Edge, meanwhile, has a tiered scheme that defaults to a “Balanced” setting, which blocks some third-party cookies. Google Chrome has implemented its SameSite cookie scheme as a prelude to its planned 2022 phase-out of third-party cookies, maybe.

While Google tries to win support for its various Privacy Sandbox proposals, which aim to provide marketers with ostensibly privacy-preserving alternatives to increasingly shunned third-party cookies, marketers have been relying on CNAME shenanigans to pass their third-party trackers off as first-party resources.

The developers behind open-source content blocking extension uBlock Origin implemented a defense against CNAME-based tracking in November and now Brave has done so as well.

CNAME by name, cookie by nature

In a blog post on Tuesday, Anton Lazarev, research engineer at Brave Software, and senior privacy researcher Peter Snyder, explain that online tracking scripts may use canonical name DNS records, known as CNAMEs, to make associated third-party tracking domains look like they’re part of the first-party websites actually being visited.

They point to the site https://mathon.fr as an example, noting that without CNAME uncloaking, Brave blocks six requests for tracking scripts served by ad companies like Google, Facebook, Criteo, Sirdan, and Trustpilot.

But the page also makes four requests via a script hosted at a randomized path under the first-party subdomain 16ao.mathon.fr.

“Inspection outside of the browser reveals that 16ao.mathon.fr actually has a canonical name of et5.eulerian.net, meaning it’s a third-party script served by Eulerian,” observe Lazarev and Snyder.
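
As an illustration of what uncloaking involves, here is a minimal sketch that resolves a hostname’s CNAME record and checks the canonical name against a toy tracker list; real blockers like uBlock Origin and Brave do this in-browser against full blocklists.

```python
# Minimal sketch of CNAME uncloaking: resolve a first-party-looking hostname
# and check whether its canonical name belongs to a known tracker domain.
# Requires dnspython (pip install dnspython); the tracker list is a toy
# stand-in for the blocklists real content blockers use.
import dns.resolver

TRACKER_DOMAINS = {"eulerian.net"}  # from the article's example; real lists are long

def uncloak(hostname: str):
    """Return the tracker domain hiding behind `hostname`, or None."""
    try:
        answer = dns.resolver.resolve(hostname, "CNAME")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return None  # no CNAME record: first-party at the DNS level
    canonical = str(answer[0].target).rstrip(".")
    root = ".".join(canonical.split(".")[-2:])  # naive eTLD+1 extraction
    return root if root in TRACKER_DOMAINS else None

# Per the article, 16ao.mathon.fr CNAMEs to et5.eulerian.net:
print(uncloak("16ao.mathon.fr"))  # -> "eulerian.net" (if the record still exists)
```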

When Brave 1.17 ships next month (currently available as a developer build), it will be able to uncloak the CNAME deception and block the Eulerian script.

Other browser vendors are planning related defenses. Mozilla has been working on a fix in Firefox since last November. And in August, Apple’s Safari WebKit team proposed a way to prevent CNAME cloaking from being used to bypass the seven-day cookie lifetime imposed by WebKit’s Intelligent Tracking Protection system.

Source: Brave browser first to nix CNAME deception, the sneaky DNS trick used by marketers to duck privacy controls • The Register

Another eBay exec pleads guilty after couple stalked, harassed for daring to criticize the internet tat bazaar – pig corpses involved

Philip Cooke, 55, oversaw eBay’s security operations in Europe and Asia and was a former police captain in Santa Clara, California. He pleaded guilty this week to conspiracy to commit cyberstalking and conspiracy to tamper with witnesses.

Cooke, based in San Jose, was just one of seven employees, including one manager, accused of targeting a married couple living on the other side of the United States, in Massachusetts, because they didn’t like the couple’s criticisms of eBay in their newsletter.

It’s said the team would post aggressive anonymous comments on the couple’s newsletter website, and at some point planned a concerted campaign against the pair including cyberstalking and harassment. Among other things, prosecutors noted, “several of the defendants ordered anonymous and disturbing deliveries to the victims’ home, including a preserved fetal pig, a bloody pig Halloween mask and a book on surviving the loss of a spouse.”

[…]

But it was when the couple noticed they were under surveillance in their own home that they finally went to the cops in Natick, where they lived, and officers opened an investigation.

It was Cooke’s behavior at that point that led to the subsequent charge of conspiracy to tamper with a witness: he formulated a plan to give the Natick police a false lead in an effort to prevent them from discovering proof that his team had sent the pig’s head and other items. The eBay employees also deleted digital evidence that showed their involvement, prosecutors said, obstructing an investigation and breaking another law.

[…]

Source: Another eBay exec pleads guilty after couple stalked, harassed for daring to criticize the internet tat bazaar • The Register

Palo Alto Networks threatens to sue security startup for comparison review, says it breaks software EULA. 1 EULA? 2 WTF?

Palo Alto Networks has threatened a startup with legal action after the smaller biz published a comparison review of one of its products.

Israel-based Orca Security received a cease-and-desist letter from a lawyer representing Palo Alto after Orca uploaded a series of online videos reviewing one of Palo Alto’s products and comparing it to its own. Orca sees itself as a competitor of Palo Alto Networks (PAN).

“What we expected is that others will also create such materials … but instead we received a letter from Palo Alto’s lawyers claiming we were not allowed to do that,” Orca chief exec Avi Shua told The Register this week. “We believe these are empty legal threats.”

In a note on its website, Orca lamented at length the “outrageous” behavior of PAN, as well as posting a copy of the lawyer’s letter for world-plus-dog to read. That letter claimed Orca infringed PAN’s trademarks by using its name and logo in the review as well as breaching non-review clauses in the End-User License Agreement (EULA) of PAN’s product.

As such, the lawyer demanded the removal of the comparison material, and that the startup stop using PAN’s logo and name. We note the videos are still online, hosted by YouTube.

“It’s outrageous that the world’s largest cybersecurity vendor, its products being used by over 65,000 organizations according to its website, believes that its users aren’t entitled to share any benchmark or performance comparison of its products,” said Orca.

The lawyer’s letter [PDF] claimed Orca violated PAN’s EULA fine-print, something deputy general counsel Melinda Thompson described in her missive as “a clear breach” of terms “prohibiting an end user from disclosing, publishing or otherwise making publicly available any benchmark, performance or comparison tests… run on Palo Alto Networks products, in whole or in part.”

Shua told The Register Orca tried to give its rival a fair crack of the whip: “Even if we tried to be objective, we would have some biases. But we did try to do it as objectively as possible, by showing it to users: creating labs, screenshots, and showing how it looks like.” The fairness of the review, we note, is not what is at issue here: PAN forbids any kind of benchmarking and comparison of its gear.

Palo Alto Networks declined to comment when contacted by The Register.

Source: Palo Alto Networks threatens to sue security startup for comparison review, says it breaks software EULA • The Register

1 Who reads EULAs anyway? Are they in any way, shape or form defensible to anyone, apart from maybe some friendless ant-fucker lawyers?

2 Is PAN so very worried about the poor quality of their product that they feel they want to kill any and all benchmarks / comparisons?

Twitch Suddenly Mass-Deletes Thousands of Videos, Citing Music Copyright Claims – yes, copyright really doesn’t provide for innovation at all

“It’s finally happening: Twitch is taking action against copyrighted music — long a norm among streamers — in response to music industry pressure,” reports Kotaku.

But the Verge reports “there’s some funny stuff going on here.” First, Twitch is telling streamers that some of their content has been identified as violating copyright and that instead of letting streamers file counterclaims, it’s deleting the content; second, the company is telling streamers it’s giving them warnings, as opposed to outright copyright strikes…

Weirdly, Twitch decided to bulk-delete infringing material instead of allowing streamers to archive their content or submit counterclaims. To me, that suggests that there are tons of infringements, and that Twitch needed to act very quickly and/or face a lawsuit it wouldn’t be able to win over its adherence to the safe harbor provision of the DMCA.

The email Twitch sent to its users “encourages them to delete additional content — up to and including using a new tool to unilaterally delete all previous clips,” reports Kotaku. One business streamer complains that it’s “insane” that Twitch basically informs them “that there is more content in violation despite having no identification system to find out what it is. Their solution to DMCA is for creators to delete their life’s work. This is pure, gross negligence.”

Or, as esports consultant Rod “Slasher” Breslau puts it, “It is absolutely insane that record labels have put Twitch in a position to force streamers to delete their entire life’s work, for some 10+ years of memories, and that Twitch has been incapable of preventing or aiding streamers for this situation. a total failure all around.”

Twitch’s response? “It is crucial that we protect the rights of songwriters, artists and other music industry partners. We continue to develop tools and resources to further educate our creators and empower them with more control over their content while partnering with industry-recognized vendors in the copyright space to help us achieve these goals.”

Source: Twitch Suddenly Mass-Deletes Thousands of Videos, Citing Music Copyright Claims – Slashdot

Of course, the money raised by these music companies doesn’t really go to the artists much – it’s basically swallowed up by the music companies themselves.

Oculus owners forced on Facebook accounts, will have purchases be wiped, device bricked, if they ever leave FB. Who would have guessed?

Oculus users, already fuming at Facebook chaining their VR headsets to their Facebook accounts, have been warned they could lose all their Oculus purchases and account information in future if they ever delete their profile on the social network.

The rule is a further binding of the gaming company that Facebook bought in 2014 to the mothership, and comes just two months after Facebook decided all new Oculus users require Facebook accounts to use their VR gizmos, and all current Oculus users will need a Facebook account by 2023. Failure to do so may cause apps installed on the headsets to no longer work as expected.

The decision to cement together what many users see as largely unrelated activities – playing video games and social media posts – has led to a wave of anger among Oculus users, and a renewed online effort to jailbreak new Oculus headgear to bypass Facebook’s growing restrictions.

That outrage was fueled when Facebook initially said that if people attempted to connect more than one Oculus headset to a single Facebook account, something families in particular want to do as it avoids having to install the same app over and over, it would ban them from the service.

Facebook has since dropped that threat, and said it is working on allowing multiple devices and accounts to connect. But the control-freak instincts of the internet giant were yet again on full display, something that was noted by the man who first drew attention to Oculus’s new terms and conditions, CEO of fitness gaming company Yur, Cix Liv.

“My favorite line is ‘While I do completely understand your concerns, we do need to have you comply with the Facebook terms of service’ like Facebook thinks they are some authoritarian government,” he tweeted.

[…]

Source: Oculus owners told not only to get Facebook accounts, purchases will be wiped if they ever leave social network • The Register

When you tell Chrome to wipe private data about you, it spares two websites from the purge: Google.com, YouTube

Google exempts its own websites from Chrome’s automatic data-scrubbing feature, allowing the ads giant to potentially track you even when you’ve told it not to.

Programmer Jeff Johnson noticed the unusual behavior, and this month documented the issue with screenshots. In his assessment of the situation, he noted that if you set up Chrome, on desktop at least, to automatically delete all cookies and so-called site data when you quit the browser, it deletes it all as expected – except your site data for Google.com and YouTube.com.

While cookies are typically used to identify you and store some of your online preferences when visiting websites, site data is on another level: it includes, among other things, a storage database in which a site can store personal information about you, on your computer, that can be accessed again by the site the next time you visit. Thus, while your Google and YouTube cookies may be wiped by Chrome, their site data remains on your computer, and it could, in future, be used to identify you.

Johnson noted that after he configured Chrome to wipe all cookies and site data when the application closed, everything was cleared as expected for sites like apple.com. Yet, the main Google search site and video service YouTube were allowed to keep their site data, though the cookies were gone. If Google chooses at some point to stash the equivalent of your Google cookies in the Google.com site data storage, they could be retrieved next time you visit Google, and identify you, even though you thought you’d told Chrome not to let that happen.

Ultimately, it potentially allows Google, and only Google, to continue tracking Chrome users who opted for some more privacy; something that is enormously valuable to the internet goliath in delivering ads. Many users set Chrome to automatically delete cookies-and-site-data on exit for that reason – to prevent being stalked around the web – even though it often requires them to log back into websites the next time they visit due to their per-session cookies being wiped.

Yet Google appears to have granted itself an exception. The situation recalls a similar issue over location tracking, where Google continued to track people’s location through their apps even when users actively selected the option to prevent that. Google had put the real option to start location tracking under a different setting that didn’t even include the word “location.”

In this case, “Clear cookies and site data when you quit Chrome” doesn’t actually mean what it says, at least not for Google.

There is a workaround: you can manually add “Google.com” and “YouTube.com” within the browser to a list of “Sites that can never use cookies.” In that case, no information, not even site data, is saved from those sites, which is all in all a little confusing.

[…]

Source: When you tell Chrome to wipe private data about you, it spares two websites from the purge: Google.com, YouTube • The Register

Thought the FBI were the only ones able to unlock encrypted phones? Pretty much every US cop can get the job done – and does

Never mind the Feds. American police forces routinely “circumvent most security features” in smartphones to extract mountains of personal information, according to a report that details the massive, ubiquitous cracking of devices by cops.

Two years of public records requests by Upturn, a Washington DC non-profit, has revealed that every one of the United States’ largest 50 police departments, as well as half of the largest sheriff’s offices and two-thirds of the largest prosecuting attorney’s offices, regularly use specialist hardware and software to access the contents of suspects’ handhelds. There isn’t a state in the Union that hasn’t got some advanced phone-cracking capabilities.

The report concludes that, far from modern phones being a bastion of privacy and security, they are in fact routinely rifled through for trivial crimes without a warrant in sight. In one case, the cops confiscated and searched the phones of two men who were caught arguing over a $70 debt in a McDonald’s.

In another, officers witnessed “suspicious behavior” in a Whole Foods grocery store parking lot and claimed to have smelt “the odor of marijuana” coming from a car. The car was stopped and searched, and the driver’s phone was seized and searched for “further evidence of the nature of the suspected controlled substance exchange.”

A third example given saw police officers shoot and kill a man after he “ran from the driver’s side of the vehicle” during a traffic stop. They apparently discovered a small orange prescription pill container next to the victim, and tested the pills, which contained acetaminophen and fentanyl. They also discovered a phone in the empty car, and searched it for evidence related to “counterfeit Oxycodone” and “evidence relating to… motives for fleeing from the police.”

The report gives numerous other examples of phones taken from their owners and searched for evidence, without a warrant – many in cases where the value of the information was negligible such as cases involving graffiti, shoplifting, marijuana possession, prostitution, vandalism, car crashes, parole violations, petty theft, and public intoxication.

Not what you imagined

That is a completely different picture from the one, we imagine, most Americans assumed, particularly given the high legal protections afforded smartphones in recent high-profile court cases.

In 2018, the Supreme Court ruled that the government needs a warrant to access its citizens’ cellphone location data and talked extensively about a citizen’s expectation of privacy limiting “official intrusion” when it comes to smartphones.

In 2014, the court decided a warrant was required to search a mobile phone, and that the “reasonable expectation of privacy” that people have in their “physical movements” should extend to records stored by third parties. But the reality on the ground is that those grand words mean nothing if the cops decide they want to look through your phone.

The report was based on records from 44 law enforcement agencies across the US and covered 50,000 extractions of data from cellphones between 2015 and 2019, a figure that Upturn notes “represents a severe undercount” of the actual number of cellphone extractions.

[…]

They include banning the use of “consent searches” where the police ask the owner if they can search their phone and then require no further approval to go through a device. “Courts pretend that ‘consent searches’ are voluntary, when they are effectively coerced,” the report argues, noting that most people are probably unaware that, by agreeing to such a search, they can have their phone’s entire contents downloaded and perused at will later on.

It also reckons that the argument that the contents of a phone are in “plain view” because a police officer can see a phone when at the scene of a crime, an important legal distinction that allows the police to search phones, is legally untenable because people carry their phones with them as a rule, and the contents are not themselves also visible – only the device itself.

The report also argues for more extensive audit logs of phone searches so there is a degree of accountability, particularly if evidence turned up is later used in court. And it argues for better and clearer data deletion rules, as well as more reporting requirements around phone searches by law enforcement.

It concludes: “For too long, public debate and discussion regarding these tools has been abstracted to the rarest and most sensational cases in which law enforcement cannot gain access to cellphone data. We hope that this report will help recenter the conversation regarding law enforcement’s use of mobile device forensic tools to the on-the-ground reality of cellphone searches today in the United States.”

Source: Thought the FBI were the only ones able to unlock encrypted phones? Pretty much every US cop can get the job done • The Register

Do algorithms make us even more radical? Filter bubbles and echo chambers

‘Technology ensures that we’re all served our own personalised news cycle. As a result, we only get to hear the opinions that correspond to our own. The result is polarisation’. Or so the oft-heard theory goes. But in practice, it seems this isn’t really true, or at least not for the average Dutch person. However, according to communication scientist Judith Möller, the influence of filter bubbles, as they are known, could indeed be stronger when it comes to groups with radical opinions.

Judith Möller: ‘My theory is that filter bubbles do indeed exist, but that we’re looking for them in the wrong place.’

First of all, we need to differentiate between the so-called echo chamber and the filter bubble. As an individual, you voluntarily take your place in an echo chamber (such as in the form of a forum, or a Facebook or WhatsApp group), meaning you surround yourself with people who tend towards the same opinion as yourself. ‘Call it the modern form of compartmentalisation’, says communication scientist Judith Möller, who recently received a Veni grant for her research. ‘People have always had the tendency to surround themselves with like-minded people, and that’s no different on social media.’

Various news sources in parallel prevent a filter bubble

In the filter bubble, you are presented only with news and opinions that match you as an individual, on the basis of algorithms and without you being aware of this process. It’s said that this bubble is leading to the polarisation of society. Everyone is constantly exposed to ‘their own truth’, while other news gets filtered out. But Möller says that there is no evidence to support this, at least in the Netherlands. ‘We use various news sources in parallel – meaning not only Facebook and Twitter, but also radio, television and newspapers, so we run little risk of ending up in a filter bubble. Besides that: the amount of “news” on an average Facebook timeline is less than 5%. Moreover, it turns out that many people on social media are actually more likely to encounter news that they normally wouldn’t read or search out, so that’s almost a bubble in reverse.’

Bubbles at the fringes of the opinion spectrum

Nonetheless, a great deal of money is being invested in the use of algorithms and artificial intelligence, such as during election periods. Möller: ‘So there must be something in it. My theory is that filter bubbles do indeed exist, but that we’re looking for them in the wrong place. We shouldn’t look at the mainstream, but at groups with radical and/or divergent opinions who don’t fit into the “centre”. This is where we see the formation of ‘fringe bubbles’, as I call them – filters at the edges of the opinion spectrum.’

People with fringe opinions can suddenly become very visible

From spiral of silence to spiral of noise

As one example, the researcher cites the anti-vaccination movement. ‘Previously, this group was confronted with the ‘spiral of silence’: if you said in public, for instance to friends or family, that you were sceptical about vaccination, you wouldn’t get a positive response. And so, you’d keep quiet about it. But this group found each other on social media, and as a consequence of filter technology, the proponents of this view encountered the ‘spiral of noise’: suddenly it seems as if a huge number of people agree with you.’

The news value of radical and divergent opinions

And so, it can happen that people with fringe, radical or divergent opinions suddenly become very vocal and visible. ‘Then they become newsworthy, they appear in normal news media and hence are able to address a wider public. The fringe bubble shifts towards the centre. This has been the case with the anti-vaccination movement, the climate sceptics and the yellow vests, but it also happened with the group who opposed the Dutch Intelligence and Security Services Act – no-one was interested initially, but in the end, it became major news and it even resulted in a referendum.’

Consequences can be both positive and negative

‘In my research I aim to go in search of divergent opinions like these, and then I’ll try to determine how algorithms influence radical groups, to what extent filter bubbles exist and why groups with radical opinions ultimately manage, or don’t manage, to appear in news media.’
The consequences of these processes can be both positive and negative, believes Möller. ‘Some people claim that this attention leads people from the “centre” to feel attracted to the fringe areas of society, in turn leading to more extreme opinions and a reduction in social cohesion, which is certainly possible. On the other hand, this process also brings advantages: after all, in a democracy we also need to listen to minority opinions.’

Source: Do algorithms make us even more radical? – University of Amsterdam

To find out how researchers track the filter bubble, read about fbtrex here (pdf)

Personalisation algorithms and elections: Breaking free of the filter bubble

In recent years, we have been witnessing a fundamental shift in how news and current affairs are disseminated and mediated. Due to the exponential increase in available content online and technological development in the field of recommendation systems, more and more citizens are informing themselves through customized and curated sources, while turning away from mass-mediated information sources like TV news and newspapers. Algorithmic recommendation systems give news users tools to navigate the information overload and identify important and relevant information. They do so by performing a task that was once a key part of the journalistic profession: keeping the gates. In a way, news recommendation algorithms can create highly individualized gates, through which only the information and news that serve the user best can pass. In theory, this is a great achievement that can make news exposure more efficient and interesting. In practice, there are many pitfalls when the power to select what we hear from the news shifts from professional editorial boards, which select the news according to professional standards, to opaque algorithms governed by their own logic, the logic of advertisers, or consumers’ personal preferences.
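To make that gatekeeping mechanism concrete, here is a deliberately naive content-based recommender in Python. It is purely illustrative: the articles, topics and scoring are invented and bear no relation to any real platform’s system. The point is the feedback loop: each click reinforces the user’s topic profile, so the “gate” narrows with every interaction.

```python
# Illustrative sketch (not any real platform's algorithm): a naive
# content-based recommender that scores articles by topic overlap with
# a user's click history. Each click narrows future recommendations,
# which is the feedback loop behind the filter-bubble concern.
from collections import Counter

articles = {
    "a1": {"politics", "election"},
    "a2": {"politics", "immigration"},
    "a3": {"sports", "football"},
    "a4": {"science", "vaccines"},
}

def recommend(click_history, k=2):
    # Build the user's topic profile from past clicks.
    profile = Counter(t for a in click_history for t in articles[a])
    # Score unseen articles by how many profile topics they share.
    def score(aid):
        return sum(profile[t] for t in articles[aid])
    unseen = [a for a in articles if a not in click_history]
    return sorted(unseen, key=score, reverse=True)[:k]

print(recommend(["a1"]))  # politics-heavy picks crowd out sports and science
```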

Beyond the filter bubble: Concepts, myths, evidence and issues for future debates

Filter bubbles in the Netherlands?

Some fear that personalised communication can lead to information cocoons or filter bubbles. For instance, a personalised news website could give more prominence to conservative or liberal media items, based on the (assumed) political interests of the user. As a result, users may encounter only a limited range of political ideas. We synthesise empirical research on the extent and effects of self-selected personalisation, where people actively choose which content they receive, and pre-selected personalisation, where algorithms personalise content for users without any deliberate user choice. We conclude that at present there is little empirical evidence that warrants any worries about filter bubbles.

Should We Worry about Filter Bubbles?

Pop the filter bubble: Exposure Diversity as a Design Principle for Search and Social Media

Michael Bang Petersen and a few others from the US have some interesting counterpoints to this.

Source: New Research Shows Social Media Doesn’t Turn People Into Assholes (They Already Were), And Everyone’s Wrong About Echo Chambers

UK test and trace data can be handed to police, reveals memorandum – that mission crept quickly

As if things were not going badly enough for the UK’s COVID-19 test and trace service, it now seems police will be able to access some test data, prompting fear that the disclosure could deter people who should have tests from coming forward.

As revealed in the Health Service Journal (paywalled), Department for Health and Social Care (DHSC) guidance describing how testing data will be handled was updated on Friday.

The memorandum of understanding between DHSC and National Police Chiefs’ Council said forces could be allowed to access test information that tells them if a “specific individual” has been told to self-isolate.

A failure to self-isolate after getting a positive COVID-19 test, or after being in contact with someone who has tested positive, could result in a police fine of £1,000, or even a £10,000 penalty for serial offenders or those seriously breaking the rules.

A Department of Health and Social Care spokesperson said: “It is a legal requirement for people who have tested positive for COVID-19 and their close contacts to self-isolate when formally notified to do so.

“The DHSC has agreed a memorandum of understanding with the National Police Chiefs Council to enable police forces to have access on a case-by-case basis to information that enables them to know if a specific individual has been notified to self-isolate.

[…]

The UK government’s emphasis should be on providing support to people – financial and otherwise – if they need to self-isolate, so that no one is deterred from coming forward for a test, the BMA spokesperson added.

The UK’s test and trace system, backed by £12bn in public money, was outsourced to Serco for £45m in June. Sitel is also a provider.

The service has had a bumpy ride to say the least. Earlier this month, it came to light that as many as 48,000 people were not informed they had come in close contact with people who had tested positive, as the service under-reported 15,841 novel coronavirus cases between 25 September and 2 October.

The use of Microsoft’s Excel spreadsheet program to transfer test results from labs to the health service was at the heart of the problem. A plausible explanation emerged: test results were automatically fetched in CSV format by Public Health England (PHE) from various commercial testing labs, then stored as rows in the older .XLS Excel format, which caps a worksheet at 65,536 rows, rather than the roughly one-million-row limit offered by the modern .XLSX file format.
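To see how quietly this failure mode can bite, here is a small Python sketch. The real pipeline’s code was never published, so the import function below is an assumption for illustration; it caps an import at the legacy .xls worksheet size the way a lossy conversion would, silently dropping everything past row 65,536.

```python
# A minimal sketch of the failure mode described above (assumed details:
# the actual PHE pipeline code was never published). Legacy .xls sheets
# hold at most 65,536 rows; rows beyond that are simply lost if the
# import caps at the sheet size instead of failing loudly.
import csv, io

XLS_MAX_ROWS = 65_536      # legacy .xls worksheet limit
XLSX_MAX_ROWS = 1_048_576  # modern .xlsx worksheet limit

def import_results(csv_text, max_rows=XLS_MAX_ROWS):
    rows = list(csv.reader(io.StringIO(csv_text)))
    kept, dropped = rows[:max_rows], rows[max_rows:]
    if dropped:
        # In the real incident nothing warned anyone; the missing cases
        # only surfaced in the totals days later.
        print(f"WARNING: {len(dropped)} rows silently truncated")
    return kept

# 70,000 fake test results: 4,464 of them vanish under the .xls limit.
fake_csv = "\n".join(f"case{i},positive" for i in range(70_000))
kept = import_results(fake_csv)
print(len(kept))  # 65536
```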

But that was not the only misstep. It has emerged that people in line for a coronavirus test were sent to a site in Sevenoaks, Kent, where, in fact, no test centre existed, according to reports.

Source: UK test and trace data can be handed to police, reveals memorandum • The Register

Remember when Zoom was rumbled for lousy crypto? Six months later it says end-to-end is ready – but it’s not

The world’s plague-time video meeting tool of choice, Zoom, says it’s figured out how to do end-to-end encryption sufficiently well to offer users a tech preview.

News of the trial comes after April 2020 awkwardness that followed the revelation that Zoom was fibbing about its service using end-to-end encryption.

As we reported at the time, Zoom ‘fessed up but brushed aside criticism with a semantic argument about what “end-to-end” means.

“When we use the phrase ‘End-to-end’ in our other literature, it is in reference to the connection being encrypted from Zoom end point to Zoom end point,” the company said. The commonly accepted definition of end-to-end encryption requires even the host of a service to be unable to access the content of a communication. As we explained at the time, Zoom’s use of TLS and HTTPS meant it could intercept and decrypt video chats.

Come May, Zoom quickly acquired secure messaging outfit Keybase to give it the chops to build proper crypto.


Now Zoom reckons it has cracked the problem.

A Wednesday post revealed: “starting next week, Zoom’s end-to-end encryption (E2EE) offering will be available as a technical preview, which means we’re proactively soliciting feedback from users for the first 30 days.”

Sharp-eyed Reg readers have doubtless noticed that Zoom has referred to “E2EE”, not just the “E2E” contraction of “end-to-end”.

What’s up with that? The company has offered the following explanation:

“Zoom’s E2EE uses the same powerful GCM encryption you get now in a Zoom meeting. The only difference is where those encryption keys live. In typical meetings, Zoom’s cloud generates encryption keys and distributes them to meeting participants using Zoom apps as they join. With Zoom’s E2EE, the meeting’s host generates encryption keys and uses public key cryptography to distribute these keys to the other meeting participants. Zoom’s servers become oblivious relays and never see the encryption keys required to decrypt the meeting contents.”
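In other words, the meeting content key moves from being server-generated to host-generated and publicly wrapped. Here is a minimal sketch of that pattern using the pyca/cryptography library; it illustrates the general technique Zoom describes, not Zoom’s actual protocol or key hierarchy.

```python
# A minimal sketch of the key-distribution difference Zoom describes,
# using the pyca/cryptography library. This is an illustration of the
# general pattern, not Zoom's actual implementation.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Each participant has a keypair; the server only ever sees the public half.
participant_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
participant_pub = participant_priv.public_key()

# E2EE mode: the *host* generates the AES-GCM meeting key...
meeting_key = AESGCM.generate_key(bit_length=256)
# ...and wraps it for each participant with their public key. The
# server relays this blob but cannot unwrap it.
wrapped_key = participant_pub.encrypt(meeting_key, OAEP)

# The participant unwraps the meeting key and decrypts meeting content.
key = participant_priv.decrypt(wrapped_key, OAEP)
nonce = os.urandom(12)
ciphertext = AESGCM(meeting_key).encrypt(nonce, b"meeting audio frame", None)
print(AESGCM(key).decrypt(nonce, ciphertext, None))  # b'meeting audio frame'
```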

Don’t go thinking the preview means Zoom has squared away security, because the company says: “To use it, customers must enable E2EE meetings at the account level and opt-in to E2EE on a per-meeting basis.”

With users having to be constantly reminded to use non-rubbish passwords, not to click on phish or leak business data on personal devices, they’ll almost certainly choose E2EE every time without ever having to be prompted, right?

Source: Remember when Zoom was rumbled for lousy crypto? Six months later it says end-to-end is ready • The Register

Your Edge Browser Installed Microsoft Office Without Asking. NO!

Edge Chromium started out as a respectable alternative to Google Chrome on Windows, but it didn’t take long for Microsoft to turn it into a nuisance. To top it off, it looks like Edge is now a vector for installing (even more) Microsoft stuff on your PC—without you asking for it, of course.

We don’t like bloatware, or those pre-installed apps that come on your computer or smartphone. Some of these apps are worthwhile, but most just take up space and can’t be fully removed in some cases. Some companies are worse about bloatware than others, but Microsoft is notorious for slipping extra software into Windows. And now, Windows Insiders testing the most recent Edge Chromium preview caught the browser installing Microsoft Office web apps without permission.

The reports have only come from Windows Insiders so far, but it’s unlikely these backdoor installations are an early-release bug. And this isn’t just a Microsoft problem. For example, Chrome can install Google Docs and other G Suite apps without any notification, too.

Source: Why Your Edge Browser Installed Microsoft Office Without Asking

Please don’t EVER install stuff on my computer without asking! I paid for the OS, I didn’t ask for a SaaS.

Five Eyes governments, India, and Japan make new call for encryption backdoors – insist that democracy is an insecure police state

Members of the intelligence-sharing alliance Five Eyes, along with government representatives for Japan and India, have published a statement over the weekend calling on tech companies to come up with a solution for law enforcement to access end-to-end encrypted communications.

The statement is the alliance’s latest effort to get tech companies to agree to encryption backdoors.

The Five Eyes alliance, made up of the US, the UK, Canada, Australia, and New Zealand, made similar calls to tech giants in 2018 and 2019.

Just like before, government officials claim tech companies have put themselves in a corner by incorporating end-to-end encryption (E2EE) into their products.

If properly implemented, E2EE lets users have secure conversations — be they chat, audio, or video — without sharing the encryption key with the tech companies.

Representatives from the seven governments argue that the way E2EE is currently implemented on today’s major tech platforms prevents not only law enforcement from investigating crime rings, but also the tech platforms themselves from enforcing their own terms of service.

Signatories argue that “particular implementations of encryption technology” are currently posing challenges to law enforcement investigations, as the tech platforms themselves can’t access some communications and provide needed data to investigators.

This, in turn, allows a safe haven for criminal activity and puts the safety of “highly vulnerable members of our societies like sexually exploited children” in danger, officials argued.

Source: Five Eyes governments, India, and Japan make new call for encryption backdoors | ZDNet

Let’s be clear here:

  1. There is no way for a backdoored system to be secure. You don’t just hand access to government police services, secret services and Stasi-style thought police, who can persecute you for being Jewish or thinking the “wrong way” (e.g. being homosexual or communist); you also give criminal networks, scam artists, discontented exes and foreign governments free rein to run around your private content (see the sketch after this list).
  2. You have a right to privacy and you need it. It’s fundamental to being able to think creatively, and that is the only way in which societies advance. If thought is policed by some arbitrary standard, then the deviations that lead to change will be suppressed. Stasis leads to economic collapse among other things, even if those at the top keep collecting more and more wealth for themselves.
  3. We as a society cannot “win” or become “better” by emulating the societies we are competing against, which represent values and behaviours we disagree with. Becoming a police state doesn’t protect us from other police states.
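To make point 1 concrete, here is a hypothetical key-escrow design in Python, again using the pyca/cryptography library; no real product or proposal is being described. Every session key is also wrapped for a single escrow key, so whoever obtains that one private key, lawfully or otherwise, reads everything.

```python
# An illustrative sketch of why a "lawful access" key is a universal
# key: if every session key is also wrapped for an escrow keypair,
# whoever obtains the escrow private key (agency, insider, or thief)
# can decrypt all traffic. Hypothetical design, for illustration only.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

escrow_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow_pub = escrow_priv.public_key()  # baked into every client

def send(message: bytes):
    session_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ct = AESGCM(session_key).encrypt(nonce, message, None)
    # The mandated backdoor: every session key is also wrapped for the
    # escrow key, so each conversation is only as safe as that one key.
    escrow_copy = escrow_pub.encrypt(session_key, OAEP)
    return nonce, ct, escrow_copy

nonce, ct, escrow_copy = send(b"private conversation")
# Anyone holding escrow_priv (not just "the good guys") reads it all:
stolen_key = escrow_priv.decrypt(escrow_copy, OAEP)
print(AESGCM(stolen_key).decrypt(nonce, ct, None))
```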

Apple made ProtonMail add in-app purchases, even though it had been free for years – this App Store shakedown has a long list of scared victims

One app developer revealed to Congress that it — just like WordPress — had been forced to monetize a largely free app. That developer testified that Apple had demanded in-app purchases (IAP), even though Apple had approved its app without them two years earlier — and that when the dev dared send an email to customers notifying them of the change, Apple threatened to remove the app and blocked all updates.

That developer was ProtonMail, makers of an encrypted email app, and CEO Andy Yen had some fiery words for Apple in an interview with The Verge this week.

We’ve known for months that WordPress and Hey weren’t alone in being strong-armed by the most valuable company in the world, ever since Stratechery’s Ben Thompson reported that 21 different app developers quietly told him they’d been pushed to retroactively add IAP in the wake of those two controversies. But until now, we hadn’t heard of many devs willing to publicly admit it. They were scared.

And they’re still scared, says Yen. Even though Apple changed its rules on September 11th to exempt “free apps acting as a stand-alone companion to a paid web based tool” from the IAP requirement — Apple explicitly said email apps are exempt — ProtonMail still hasn’t removed its own in-app purchases because it fears retaliation from Apple, he says.

He claims other developers feel the same way: “There’s a lot of fear in the space right now; people are completely petrified to say anything.”

He might know. ProtonMail is one of the founding partners of the Coalition for App Fairness, a group that also includes Epic Games, Spotify, Tile, Match, and others who banded together to protest Apple’s rules after having those rules used against them. It’s a group that tried to pull together as many developers as it could to form a united front, but some weren’t as ready to risk Apple’s wrath.

That’s clearly not the case for Yen, though — in our interview, he compares Apple’s tactics to a Mafia protection racket.

“For the first two years we were in the App Store, that was fine, no issues there,” he says. (They’d launched on iOS in 2016.) “But a common practice we see … as you start getting significant uptake in uploads and downloads, they start looking at your situation more carefully, and then as any good Mafia extortion goes, they come to shake you down for some money.”

“We didn’t offer a paid version in the App Store, it was free to download … it wasn’t like Epic where you had an alternative payment option, you couldn’t pay at all,” he relates.

Yen says Apple’s demand came suddenly in 2018. “Out of the blue, one day they said you have to add in-app purchase to stay in the App Store,” he says. “They stumbled upon something in the app that mentioned there were paid plans, they went to the website and saw there was a subscription you could purchase, and then turned around and demanded we add IAP.”

“There’s nothing you can say to that. They are judge, jury, and executioner on their platform, and you can take it or leave it. You can’t get any sort of fair hearing to determine whether it’s justifiable or not justifiable, anything they say goes.”

[…]

Source: Apple made ProtonMail add in-app purchases, even though it had been free for years – The Verge

This is what monopolies will do for you. I have been talking about how big tech is involved in this since 2019, and it’s good to see it finally coming out of the woodwork.

Google is giving data to police based on search keywords: IPs of everyone who searched a certain thing. No warrant naming a specific suspect required.

There are few things as revealing as a person’s search history, and police typically need a warrant on a known suspect to demand that sensitive information. But a recently unsealed court document found that investigators can request such data in reverse order by asking Google to disclose everyone who searched a keyword rather than for information on a known suspect.

In August, police arrested Michael Williams, an associate of singer and accused sex offender R. Kelly, for allegedly setting fire to a witness’ car in Florida. Investigators linked Williams to the arson, as well as witness tampering, after sending a search warrant to Google that requested information on “users who had searched the address of the residence close in time to the arson.”

The July court filing was unsealed on Tuesday. Detroit News reporter Robert Snell tweeted about the filing after it was unsealed.

Court documents showed that Google provided the IP addresses of people who searched for the arson victim’s address, which investigators tied to a phone number belonging to Williams. Police then used the phone number records to pinpoint the location of Williams’ device near the arson, according to court documents.

The original warrant sent to Google is still sealed, but the report provides another example of a growing trend of data requests to the search engine giant in which investigators demand data on a large group of users rather than a specific request on a single suspect.

“This ‘keyword warrant’ evades the Fourth Amendment checks on police surveillance,” said Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project. “When a court authorizes a data dump of every person who searched for a specific term or address, it’s likely unconstitutional.”

The keyword warrants are similar to geofence warrants, in which police ask Google for data on all devices logged in at a specific area and time. Google received 15 times more geofence warrant requests in 2018 than in 2017, and five times more in 2019 than in 2018. The rise in reverse requests from police has troubled Google staffers, according to internal emails.
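For a sense of what a geofence warrant actually asks a provider to compute, here is an illustrative Python sketch: every device that pinged inside a given radius during a time window. The record format and the data are hypothetical; real location stores are vastly larger and messier.

```python
# An illustrative sketch of the query behind a geofence warrant: all
# devices with a location ping inside an area during a time window.
# Field names and data are hypothetical.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Ping:
    device_id: str
    lat: float
    lon: float
    ts: int  # unix seconds

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in metres.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6_371_000 * asin(sqrt(a))

def geofence(pings, lat, lon, radius_m, t0, t1):
    # Every device seen within radius_m of (lat, lon) between t0 and t1.
    return {p.device_id for p in pings
            if t0 <= p.ts <= t1
            and haversine_m(p.lat, p.lon, lat, lon) <= radius_m}

pings = [Ping("a", 52.370, 4.895, 1000), Ping("b", 52.372, 4.900, 1200),
         Ping("c", 51.920, 4.480, 1100)]  # "c" is in another city
print(geofence(pings, 52.371, 4.897, 500, 900, 1300))  # the set {'a', 'b'}
```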

[…]

Source: Google is giving data to police based on search keywords, court docs show – CNET