Microsoft received almost 25,000 requests for consumer data from law enforcement over the last six months

Microsoft has had a busy six months, if its latest biannual digital trust report is anything to go by, with law enforcement agencies creeping ever closer to 25,000 legal requests.

Requests for consumer data reached 24,798 during the second half of 2020, up from 24,093 during the previous six-month period, and quite a jump from the 21,781 for the same period in 2019.

“Non-content data” requests, which require a subpoena (or local equivalent), accounted for just over half of disclosures and were slightly down on the same period in 2019. Microsoft rejected 25.81 per cent of requests in the last six months of 2020, up on the 20.14 per cent of the same period in 2019.

As for where those requests came from, Microsoft highlighted a handful of countries including Brazil, France, Germany, the United Kingdom, and the United States. The US was the worst offender (going by quantity of requests), accounting for 5,682 (up from 4,315 for the same period in 2019). Germany was not far behind with 4,976 (up from 3,310) while the UK submitted 3,558 requests (a small increase from 3,312 for the same period in 2019).

As well as consumer data, Microsoft received 109 requests from law enforcement agencies for enterprise cloud customer data in the second half of 2020. It was unable to bat back 40 of those, in which the company was “compelled” to provide some information. “19 cases,” it said, “required the disclosure of some customer content, and in 21 of the cases we were compelled to disclose non-content information only.”

Still, while that 25,000 figure may seem a little worrying, it is considerably less than the first sets of figures made available by Microsoft. For the latter half of 2013 the total requests were above 35,000.

Away from the criminal side of things, Microsoft also received a comparatively small number of emergency and civil legal requests. Of the latter, it rejected just over 75 per cent in the second half of 2020.

The report makes for fascinating reading and, while the company is to be applauded for publishing it, the accompanying Privacy Report is an occasionally grim reminder of just how much information Microsoft can slurp from users. Particularly if the customer concerned decides to be helpful and check that Optional diagnostic data box.

[…]

Source: Microsoft received almost 25,000 requests for consumer data from law enforcement over the last six months • The Register

$291 Adobe Cancelation Fee Sees Twitter Users Argue it’s ‘Morally Correct’ to Pirate Software

A $291 Adobe cancelation fee has provoked fierce criticism of the creative software company.

A post from a customer has gone viral on Twitter, after he discovered that he would have to pay nearly $300 to bring his Creative Cloud subscription to an end.

It has sparked a discussion about Adobe’s practices, with many others coming forward to say that they too have faced extremely steep cancelation fees when they’ve tried to cut ties with the company.

A screenshot uploaded to the micro-blogging site by Twitter user @Mrdaddguy showed that they faced a $291.45 fee to cancel their Adobe Creative Cloud plan.

At the time of publication the tweet had attracted more than 13,000 retweets, more than 4,000 quote tweets, and more than 70,000 likes.

Twitter users have been almost universally in agreement in their criticism of the company, with some describing the cancelation fee as “absurd”, “disgusting,” and likening it to being held hostage by the company.

“Adobe has been holding me hostage for the better part of a year on a free trial that magically converted to a yearlong subscription with a wild cancellation fee,” wrote Twitter user Laura Hudson. “Blink twice if they have you too.”

Some have weighed into the conversation by suggesting alternatives to Adobe’s suite of products, such as Clip Studio Paint, Procreate, Blender, Krita, and Paint Tool SAI, many of which are either free to use or available as one-time purchases.

Others, meanwhile, are arguing that Adobe’s penalty fees are so severe that it should be considered “morally correct” to pirate the company’s software in revenge.

“Adobe on their hands and knees begging us to pirate their software,” wrote Twitter user JoshDeLearner.

“This thread is a great reminder of why it’s morally correct to pirate Adobe products,” wrote Dozing Starlight. A multitude of similar tweets can be found here.

Source: $291 Adobe Cancelation Fee Sees Twitter Users Argue it’s ‘Morally Correct’ to Pirate Software – Newsweek

FLoC, the ad-targeting tech Google plans to drop on us all, might already be using you as a test subject in Chrome

About two weeks ago, millions of Google Chrome users were signed up for an experiment they never agreed to be a part of. Google had just launched a test run for Federated Learning of Cohorts (FLoC), a new kind of ad-targeting tech meant to be less invasive than the average cookie. In a blog post announcing the trial, the company noted that it would only impact a “small percentage” of random users across ten different countries, including the US, Mexico, and Canada, with plans to expand globally as the trials run on.

These users probably won’t notice anything different when they click around in Chrome, but behind the scenes, the browser is quietly keeping a close eye on every site they visit and every ad they click on. These users will have their browsing habits profiled, packaged up, and shared with countless advertisers for profit. Sometime this month, Chrome will give users an option to opt out of this experiment, according to Google’s blog post—but as of right now, their only option is to block all third-party cookies in the browser.

That is, if they even know these tests are happening in the first place. While I’ve written my fair share about FLoC up until this point, the loudest voices I’ve seen pipe up on the topic are either marketing nerds, policy nerds, or policy nerds who work in marketing. That might be because—aside from a few blog posts here or there—the only breadcrumbs Google’s given to people looking to learn more about FLoC are inscrutable pages of code, an inscrutable GitHub repo, and inscrutable mailing lists. Even if Google had bothered asking for consent before enrolling a random sample of its Chrome user base into this trial, there’s a good chance they wouldn’t know what they were consenting to.

(For the record, you can check whether you’ve been opted into this initial test using this handy tool from the Electronic Frontier Foundation.)

[…]

The trackers that FLoC is meant to replace are known as “third-party cookies.” We have a pretty in-depth guide to the way this sort of tech works, but in a nutshell: these are snippets of code from adtech companies that websites can bake into the code underpinning their pages. Those bits of code monitor your on-site behavior—and sometimes other personal details—before the adtech org behind that cookie beams that data back to its own servers.
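The linking trick is simple enough to model in a few lines of Python. This is a toy sketch with invented names, not any real adtech code: because the same third-party cookie value rides along on requests from every page that embeds the tracker's snippet, the tracker can stitch visits to unrelated sites into one profile.

```python
import uuid

class Tracker:
    """Toy third-party adtech server: hands out one cookie per browser
    and logs every (cookie, site) pair it sees."""
    def __init__(self):
        self.visits = []

    def handle_request(self, site, cookie=None):
        # No cookie yet? Issue one -- this is the 'Set-Cookie' step.
        if cookie is None:
            cookie = str(uuid.uuid4())
        self.visits.append((cookie, site))
        return cookie  # the browser stores and resends this value

class Browser:
    """A browser that dutifully resends third-party cookies."""
    def __init__(self, tracker):
        self.tracker = tracker
        self.cookie = None

    def visit(self, site):
        # The page embeds the tracker's snippet, so a request goes out.
        self.cookie = self.tracker.handle_request(site, self.cookie)

tracker = Tracker()
browser = Browser(tracker)
for site in ["news.example", "shop.example", "health.example"]:
    browser.visit(site)

# All three visits carry the same cookie, so the tracker can
# reconstruct this user's path across unrelated first-party sites.
profile = [site for cookie, site in tracker.visits if cookie == browser.cookie]
print(profile)  # ['news.example', 'shop.example', 'health.example']
```

Blocking third-party cookies works precisely because it breaks the "resend the same value everywhere" step in this loop.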

[…]

The catch is that Google still has all that juicy user-level data because it controls Chrome. It’s also still free to keep doing what it’s always been doing with that data: sharing it with federal agencies, accidentally leaking it, and, y’know, just being Google.

[…]

“Isn’t that kind of… anti-competitive?”

It depends on who you ask. Competition authorities in the UK certainly think so, as do trade groups here in the US. It’s also been wrapped up into a Congressional probe, at least one class action, and a massive multi-state antitrust case spearheaded by Texas Attorney General Ken Paxton. Their qualms with FLoC are pretty easy to understand. Google already controls about 30% of the digital ad market in the US, just slightly more than Facebook—the other half of the so-called Duopoly—that controls 25% (for context, Microsoft controls about 4%).

While that dominance has netted Google billions upon billions of dollars per year, it’s recently netted multiple mounting antitrust investigations against the company, too. And those investigations have pretty universally painted a picture of Google as a blatant autocrat of the ad-based economy, and one that largely got away with abhorrent behavior because smaller rivals were too afraid—or unable—to speak up. This is why many of them are speaking up about FLoC now.

“But at least it’s good for privacy, right?”

Again, it depends who you ask! Google thinks so, but the EFF sure doesn’t. In March, the EFF put out a detailed piece breaking down some of the biggest gaps in FLoC’s privacy promises. If a particular website prompts you to give up some sort of first-party data—by having you sign up with your email or phone number, for example—your FLoC identifier isn’t really anonymous anymore.

Aside from that hiccup, the EFF points out that your FLoC cohort follows you everywhere you go across the web. This isn’t a big deal if my cohort is just “people who like to reupholster furniture,” but it gets really dicey if that cohort happens to inadvertently mold itself around a person’s mental health disorder or their sexuality based on the sites that person browses. While Google’s pledged to keep FLoC from creating cohorts based on these sorts of “sensitive categories,” the EFF again pointed out that Google’s approach was riddled with holes.
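Accounts of the origin trial (including the EFF's analysis) describe cohort IDs being derived with SimHash over the domains in a user's browsing history. The sketch below is a simplified, hypothetical illustration of that idea, not Chrome's actual implementation: users with overlapping histories tend to land on nearby fingerprints, and nearby fingerprints get grouped into cohorts, which is exactly why a cohort can end up "molding itself" around whatever the sites have in common.

```python
import hashlib

def simhash(domains, bits=16):
    """Simplified SimHash: each domain votes +1/-1 on every bit of the
    fingerprint, so similar browsing histories yield similar results."""
    weights = [0] * bits
    for d in domains:
        h = int.from_bytes(hashlib.sha256(d.encode()).digest()[:8], "big")
        for i in range(bits):
            weights[i] += 1 if (h >> i) & 1 else -1
    # The sign of each accumulated weight becomes one fingerprint bit.
    return sum(1 << i for i, w in enumerate(weights) if w > 0)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

alice = simhash(["chairs.example", "fabric.example", "upholstery.example"])
bob   = simhash(["chairs.example", "fabric.example", "staples.example"])
carol = simhash(["crypto.example", "forex.example", "daytrading.example"])

# Alice and Bob share most of their history; Carol shares none of it.
# SimHash is locality-sensitive, so the alice-bob distance tends to be
# smaller than alice-carol (though any single run can vary).
print(hamming(alice, bob), hamming(alice, carol))
```

The hypothetical domain names here are made up; the point is only that the fingerprint is a deterministic function of browsing history, which is what lets it act as a cross-site label.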

[…]

Source: What You Need To Know About FLoC, The Ad-Targeting Tech Google Plans To Drop On Us All

Apple Never Made iMessage for Android to Lock Users In: Epic v Apple

As part of the ongoing legal battle between Fortnite maker Epic and Apple, some new information has come to light confirming the most annoying thing about Apple’s iMessage app: that Apple could make a cross-platform version of iMessage for Android phones, but it won’t because it would be bad for business.

This info comes from testimony that appears in Epic’s brief against Apple, which was posted recently on Reddit. In the document, there are several statements from well-known Apple execs describing the reasons why Apple never made a cross-platform version of iMessage for Android devices.

In one quote dating back to 2013, Eddy Cue—who is now Apple’s senior vice president for internet software and services—said that Apple “could have made a version [of iMessage] on Android that worked with iOS,” meaning that “users of both platforms would have been able to exchange messages with one another seamlessly.”

Sadly, it seems multiple Apple execs were concerned that doing so would make it too easy for iPhone owners to leave the Apple ecosystem, with Apple’s senior vice president of software engineering, Craig Federighi, having said, “iMessage on Android would simply serve to remove [an] obstacle to iPhone families giving their kids Android phones”—a sentiment Epic’s brief says was also shared by Phil Schiller, who back then was in charge of overseeing Apple’s App Store.

It seems these sentiments have been known within Apple for quite some time. The brief describes a 2016 comment from a former Apple employee who said “the #1 most difficult [reason] to leave the Apple universe app is iMessage … iMessage amounts to serious lock-in,” with Schiller having affirmed the comment by saying, “moving iMessage to Android will hurt us more than help us, this email illustrates why.”

[…]

Source: Apple Never Made iMessage for Android to Lock Users In: Epic v Apple

Facebook Says It’s Your Fault That Hackers Got Half a Billion User Phone Numbers

A database containing the phone numbers of more than half a billion Facebook users is being freely traded online, and Facebook is trying to pin the blame on everyone but itself.

A blog post titled “The Facts on News Reports About Facebook Data,” published Tuesday evening, is designed to silence the growing criticism the company is facing for failing to protect the phone numbers and other personal information of 533 million users after a database containing that information was shared for free in low-level hacking forums over the weekend, as first reported by Business Insider.

Facebook initially dismissed the reports as irrelevant, claiming the data was leaked years ago and so the fact that it had all been collected into one uber-database covering one in every 15 people on the planet—and was now being given away for free—didn’t really matter.

[…]

But, instead of owning up to its latest failure to protect user data, Facebook is pulling from a familiar playbook: just like it did during the Cambridge Analytica scandal in 2018, it’s attempting to reframe the security failure as merely a breach of its terms of service.

So instead of apologizing for failing to keep users’ data secure, Facebook’s product management director Mike Clark began his blog post by making a semantic point about how the data was leaked.

“It is important to understand that malicious actors obtained this data not through hacking our systems but by scraping it from our platform prior to September 2019,” Clark wrote.

This is the identical excuse given in 2018, when it was revealed that Facebook had given Cambridge Analytica the data of 87 million users without their permission, for use in political ads.

Clark goes on to explain that the people who collected this data—sorry, “scraped” this data—did so by using a feature designed to help new users find their friends on the platform.

“This feature was designed to help people easily find their friends to connect with on our services using their contact lists,” Clark explains.

The contact importer feature allowed new users to upload their contact lists and match those numbers against the numbers stored on people’s profiles. But like most of Facebook’s best features, the company left it wide open to abuse by hackers.

“Effectively, the attacker created an address book with every phone number on the planet and then asked Facebook if his ‘friends’ are on Facebook,” security expert Mikko Hypponen explained in a tweet.
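Hypponen's description translates into very little code. This is a toy model with made-up numbers and a hypothetical `match_contacts` function, nothing resembling Facebook's real API, but it shows why a "find your friends" matcher doubles as a phone-number-to-profile oracle when an attacker uploads a whole number block instead of a genuine address book:

```python
# What the platform knows (phone -> profile); normally private.
profiles = {
    "+15550000017": "alice",
    "+15550000042": "bob",
    "+15550000099": "carol",
}

def match_contacts(uploaded):
    """The 'find your friends' feature: return the profiles whose
    phone number appears in the uploaded address book."""
    return {num: profiles[num] for num in uploaded if num in profiles}

# The attacker 'imports' an address book covering an entire number range.
fake_address_book = [f"+1555{n:07d}" for n in range(100)]
scraped = match_contacts(fake_address_book)
print(scraped)
# {'+15550000017': 'alice', '+15550000042': 'bob', '+15550000099': 'carol'}
```

Real platforms rate-limit lookups like this; the reported problem was that the limits in place before the August 2019 fix were not enough to stop enumeration at scale.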

Clark’s blog post doesn’t say when the “scraping” took place or how many times the vulnerability was exploited, just that Facebook fixed the issue in August 2019. Clark also failed to mention that Facebook was informed of this vulnerability way back in 2017, when Inti De Ceukelaire, an ethical hacker from Belgium, disclosed the problem to the company.

And the company hasn’t explained why a number of users who deleted their accounts long before 2018 have seen their phone numbers turn up in this database.

[…]

“While we addressed the issue identified in 2019, it’s always good for everyone to make sure that their settings align with what they want to be sharing publicly,” Clark wrote.

“In this case, updating the ‘How People Find and Contact You’ control could be helpful. We also recommend people do regular privacy checkups to make sure that their settings are in the right place, including who can see certain information on their profile and enabling two-factor authentication.”

It’s an audacious move for a company worth over $300 billion, with $61 billion cash on hand, to ask its users to secure their own information, especially considering how byzantine the company’s settings menus can be.

Thankfully for the half a billion Facebook users who’ve been impacted by the breach, there’s a more practical way to get help. Troy Hunt, a cyber security consultant and founder of Have I Been Pwned, has uploaded the entire leaked database to his website, which lets anyone check whether their phone number appears in it.

[…]


Source: Facebook Says It’s Your Fault That Hackers Got Half a Billion User Phone Numbers

Google illegally tracking Android users, according to new complaint by Max Schrems

Austrian privacy activist Max Schrems has filed a complaint against Google in France alleging that the US tech giant is illegally tracking users on Android phones without their consent.

Android phones generate unique advertising codes, similar to Apple’s Identifier for Advertisers (IDFA), that allow Google and third parties to track users’ browsing behavior in order to better target them with advertising.

In a complaint filed on Wednesday, Schrems’ campaign group Noyb argued that in creating and storing these codes without first obtaining explicit permission from users, Google was engaging in “illegal operations” that violate EU privacy laws.

Noyb urged France’s data privacy regulator to launch a probe into Google’s tracking practices and to force the company to comply with privacy rules. It argued that fines should be imposed on the tech giant if the watchdog finds evidence of wrongdoing.

“Through these hidden identifiers on your phone, Google and third parties can track users without their consent,” said Stefano Rossetti, privacy lawyer at Noyb. “It is like having powder on your hands and feet, leaving a trace of everything you do on your phone—from whether you swiped right or left to the song you downloaded.”

[…]

Last year, Schrems won a landmark case at Europe’s highest court that ruled a transatlantic agreement on transferring data between the bloc and the US used by thousands of corporations did not protect EU citizens’ privacy.

Source: Google illegally tracking Android users, according to new complaint | Ars Technica

Google Asked to Hide TorrentFreak Article Reporting that ‘The Mandalorian’ Was Widely Pirated

Google was asked to remove a TorrentFreak article from its search results this week. The article in question reported that “The Mandalorian” was the most pirated TV show of 2020.

This notice claims to identify several problematic URLs that allegedly infringe the copyrights of Disney’s hit series The Mandalorian. This is not unexpected, as The Mandalorian was the most pirated TV show of last year, as we reported in late December. However, we didn’t expect to see our article as one of the targeted links in the notice. Apparently, the news that The Mandalorian is widely pirated — which was repeated by dozens of other publications — is seen as copyright infringement?

Needless to say, we wholeheartedly disagree. This is not the way.

TorrentFreak specifies that the article in question “didn’t host or link to any infringing content.” (TorrentFreak’s article was even linked to by major sites including CNET, Forbes, Variety, and even Slashdot.)

TorrentFreak also reports that it wasn’t Disney who filed the takedown request, but GFM Films… At first, we thought that the German camera company GFM could have something to do with it, as they worked on The Mandalorian. However, earlier takedown notices from the same sender protected the film “The Last Witness,” which is linked to the UK company GFM Film Sales. Since we obviously don’t want to falsely accuse anyone, we’re not pointing fingers.

So what happens next? We will certainly put up a fight if Google decides to remove the page. At the time of writing, this has yet to happen. The search engine currently lists the takedown request as ‘pending,’ which likely means that there will be a manual review. The good news is that Google is usually pretty good at catching overbroad takedown requests. This is also true for TorrentFreak articles that were targeted previously, including our coverage on the Green Book screener leak.

Source: Google Asked to Hide TorrentFreak Article Reporting that ‘The Mandalorian’ Was Widely Pirated – Slashdot

SCO Linux FUD Returns From the Dead


wiredog shares a ZDNet report: I have literally been covering SCO’s legal attempts to prove that IBM illegally copied Unix’s source code into Linux for over 17 years. I’ve written well over 500 stories on this lawsuit and its variants. I really thought it was dead, done, and buried. I was wrong. Like a bad zombie movie, Xinuos, which bought SCO’s Unix products and intellectual property (IP) in 2011, is now suing IBM and Red Hat [for] “illegally Copying Xinuos’ software code for its server operating systems.” For those of you who haven’t been around for this epic IP lawsuit, you can get the full story with “27 eight-by-ten color glossy photographs and circles and arrows and a paragraph on the back of each one” from Groklaw. If you’d rather not spend a couple of weeks going over the cases, here’s my shortened version. Back in 2001, SCO, a Unix company, joined forces with Caldera, a Linux company, to form what should have been a major Red Hat rival. Instead, two years later, SCO sued IBM in an all-out legal attack against Linux.

The fact that most of you don’t know either company’s name gives you an idea of how well that lawsuit went. SCO’s Linux lawsuit made no sense and no one at the time gave it much of a chance of succeeding. Over time it was revealed that Microsoft had been using SCO as a sock puppet against Linux. Unfortunately for Microsoft and SCO, it soon became abundantly clear that SCO didn’t have a real case against Linux and its allies. SCO lost battle after battle. The fatal blow came in 2007 when SCO was proven to have never owned the copyrights to Unix. So, by 2011, the only thing of value left in SCO, its Unix operating systems, was sold to UnXis. This acquisition, which puzzled most, actually made some sense. SCO’s Unix products, OpenServer and Unixware, still had a small, but real market. At the time, UnXis, now operating under the name Xinuos, stated it had no interest in SCO’s worthless lawsuits. In 2016, CEO Sean Snyder said, “We are not SCO. We are investors who bought the products. We did not buy the ability to pursue litigation against IBM, and we have absolutely no interest in that.” So, what changed? The company appears to have fallen on hard times. As Snyder stated: “systems, like our FreeBSD-based OpenServer 10, have been pushed out of the market.” Officially, in his statement, Snyder now says, “While this case is about Xinuos and the theft of our intellectual property, it is also about market manipulation that has harmed consumers, competitors, the open-source community, and innovation itself.”

Source: SCO Linux FUD Returns From the Dead – Slashdot

Wi-Fi devices set to become object sensors by 2024 under planned 802.11bf standard – no, they haven’t thought of security and privacy

In three years or so, the Wi-Fi specification is scheduled to get an upgrade that will turn wireless devices into sensors capable of gathering data about the people and objects bathed in their signals.

“When 802.11bf will be finalized and introduced as an IEEE standard in September 2024, Wi-Fi will cease to be a communication-only standard and will legitimately become a full-fledged sensing paradigm,” explains Francesco Restuccia, assistant professor of electrical and computer engineering at Northeastern University, in a paper summarizing the state of the Wi-Fi Sensing project (SENS) currently being developed by the Institute of Electrical and Electronics Engineers (IEEE).

SENS is envisioned as a way for devices capable of sending and receiving wireless data to use Wi-Fi signal interference differences to measure the range, velocity, direction, motion, presence, and proximity of people and objects.

It may come as no surprise that the security and privacy considerations of Wi-Fi-based sensing have not received much attention.

As Restuccia warns in his paper, “As yet, research and development efforts have been focused on improving the classification accuracy of the phenomena being monitored, with little regard to S&P [security and privacy] issues. While this could be acceptable from a research perspective, we point out that to allow widespread adoption of 802.11bf, ordinary people need to trust its underlying technologies. Therefore, S&P guarantees must be provided to the end users.”

[…]

“Indeed, it has been shown that SENS-based classifiers can infer privacy-critical information such as keyboard typing, gesture recognition and activity tracking,” Restuccia explains. “Given the broadcast nature of the wireless channel, a malicious eavesdropper could easily ‘listen’ to CSI [Channel State Information] reports and track the user’s activity without authorization.”
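To see why CSI reports leak activity, consider a naive motion detector (synthetic numbers, not real 802.11bf data): a body moving through the signal path perturbs the channel, so the variance of received amplitude over a window rises well above the still-room baseline, and anyone who can observe the reports can compute that variance.

```python
import random

random.seed(7)  # fixed seed so the synthetic demo is repeatable

def csi_amplitudes(motion, n=100):
    """Synthetic channel-state amplitudes: a still room gives a stable
    reading plus tiny noise; a moving body adds large fluctuations."""
    base = 1.0
    spread = 0.30 if motion else 0.01
    return [base + random.gauss(0, spread) for _ in range(n)]

def detect_motion(samples, threshold=0.01):
    """Flag motion when amplitude variance exceeds a fixed threshold."""
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return var > threshold

print(detect_motion(csi_amplitudes(motion=False)))  # False
print(detect_motion(csi_amplitudes(motion=True)))   # True
```

Published SENS classifiers are far more sophisticated (inferring keystrokes and gestures, not just presence), but they all start from this same observation: the channel itself is the sensor, and it broadcasts in the clear.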

And worse still, he argues, such tracking can be done surreptitiously because Wi-Fi signals can penetrate walls, don’t require light, and don’t offer any visible indicator of their presence.

Restuccia suggests there needs to be a way to opt out of SENS-based surveillance; a more privacy-friendly stance would be opt-in, but there’s not much precedent for seeking permission in the technology industry.

[…]

Source: Wi-Fi devices set to become object sensors by 2024 under planned 802.11bf standard • The Register

Android, iOS beam telemetry to Google, Apple even when you tell them not to

In a recently released research paper, titled “Mobile Handset Privacy: Measuring The Data iOS and Android Send to Apple And Google” [PDF], Douglas Leith, chair of computer systems in the school of computer science and statistics at Trinity College Dublin, Ireland, documents how iPhones and Android devices phone home regardless of the wishes of their owners.

According to Leith, Android and iOS handsets share data about their salient characteristics with their makers every 4.5 minutes on average.

“The phone IMEI, hardware serial number, SIM serial number and IMSI, handset phone number etc are shared with Apple and Google,” the paper says. “Both iOS and Google Android transmit telemetry, despite the user explicitly opting out of this.”

These transmissions occur even when the iOS Analytics & Improvements option is turned off and the Android Usage & Diagnostics option is turned off.

Such data may be considered personal information under privacy rules, depending upon the applicable laws and whether they can be associated with an individual. It can also have legitimate uses.

Of the two mobile operating systems, Android is claimed to be the more chatty: According to Leith, “Google collects a notably larger volume of handset data than Apple.”

Within 10 minutes of starting up, a Google Pixel handset sent about 1MB of data to Google, compared to 42KB of data sent to Apple in a similar startup scenario. And when the handsets sit idle, the Pixel will send about 1MB every 12 hours, about 20x more than the 52KB sent over the same period by an idle iPhone.
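The "about 20x" figure checks out as rough arithmetic (taking 1MB ≈ 1,000KB for simplicity):

```python
pixel_idle_kb = 1000   # ~1MB from an idle Pixel every 12 hours
iphone_idle_kb = 52    # ~52KB from an idle iPhone over the same period

ratio = pixel_idle_kb / iphone_idle_kb
print(round(ratio))  # 19 -- i.e. roughly 20x

# Extrapolated per idle handset (back-of-envelope, not from the paper):
per_day_kb = pixel_idle_kb * 2          # two 12-hour periods per day
per_year_mb = per_day_kb * 365 / 1000
print(per_day_kb, round(per_year_mb))   # 2000 KB/day, ~730 MB/year
```

Small per-interval amounts, but multiplied across a fleet of handsets and a year of idle time, the totals are substantial.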

[…]

Leith’s tests excluded data related to services selected by device users, like those related to search, cloud storage, maps, and the like. Instead, they focused on the transmission of data shared when there’s no logged-in user, including IMEI number, hardware serial number, SIM serial number, phone number, device IDs (UDID, Ad ID, RDID, etc), location, telemetry, cookies, local IP address, device Wi-Fi MAC address, and nearby Wi-Fi MAC addresses.

This last category is noteworthy because it has privacy implications for other people on the same network. As the paper explains, iOS shares additional data: the handset Bluetooth UniqueChipID, the Secure Element ID (used for Apple Pay), and the Wi-Fi MAC addresses of nearby devices, specifically other devices using the same network gateway.

“When the handset location setting is enabled, these MAC addresses are also tagged with the GPS location,” the paper says. “Note that it takes only one device to tag the home gateway MAC address with its GPS location and thereafter the location of all other devices reporting that MAC address to Apple is revealed.”

[…]

Google also has a plausible fine-print justification: Leith notes that Google’s analytics options menu includes the text, “Turning off this feature doesn’t affect your device’s ability to send the information needed for essential services such as system updates and security.” However, Leith argues that this “essential” data is extensive and beyond reasonable user expectations.

As for Apple, you might think a company that proclaims “What happens on your iPhone stays on your iPhone” on billboards, and “Your data. Your choice,” on its website would want to explain its permission-defying telemetry. Yet the iPhone maker did not respond to a request for comment.

Source: Android, iOS beam telemetry to Google, Apple even when you tell them not to – study • The Register

Privacy Laws Giving Big Internet Companies a Convenient Excuse To Avoid Academic Scrutiny – or not? A balanced argument

For years we’ve talked about how the fact that no one really understands privacy leads to very bad attempts at regulating privacy in ways that do more harm than good. They often don’t do anything that actually protects privacy — and instead screw up lots of other important things, from competition to free speech. In fact, in some ways, there’s a big conflict between open internet systems and privacy. There are ways to get around that — usually by moving the data from centralized silos out towards the ends of the network — but that’s rarely happening in practice. I mean, going back over thirteen years ago, we were writing about the inherent conflict between Facebook’s (then) open social graph and privacy. Yet, at the time, Facebook was cheered on for opening up its social graph. It was creating a more “open” internet, an internet that others could build upon.

But, of course, over the years things have changed. A lot. In 2018, after the Cambridge Analytica scandal, Mark Zuckerberg more or less admitted that the world was telling Facebook to lock everything down again:

I do think early on on the platform we had this very idealistic vision around how data portability would allow all these different new experiences, and I think the feedback that we’ve gotten from our community and from the world is that privacy and having the data locked down is more important to people than maybe making it easier to bring more data and have different kinds of experiences.

As we pointed out in response — this was worrisome thinking, because it would likely take us away from a better world in which the data is more controlled by end users. Instead, so many people have now come to think that “protecting privacy” means making the big internet companies lock down our data, rather than the much better approach, which would be giving us full control over our own data. Those are two different things that only sometimes look alike.

I say all of that as preamble in suggesting people read an excellent Protocol article by Issie Lapowsky, which — in a very thoughtful and nuanced way — highlights the unfortunate conflict between academic researchers trying to study the big internet companies and the companies’ insistence that they need to keep data private. We’ve touched on this topic before ourselves, in covering the still ongoing fight between Facebook and NYU regarding NYU’s Ad Observer project.

That project involves getting individuals to install a browser extension that shares data back to NYU about what ads the user sees. Facebook insists that it violates their privacy rules — and points to how much trouble it got in (and the massive fines it paid) over the Cambridge Analytica mess. Though, as we explained then, the scenarios are quite different.

Lapowsky’s article goes further — noting how Facebook told her that the Ad Observer project was collecting data without the user’s permission, which worried the PhD student who was working on the project. It turns out that was false. The project only collects data from the user who installs it and agrees (giving permission) to collect the data in question.

But the story and others in the article highlight an unfortunate situation: the somewhat haphazard demands on the big internet companies to “protect privacy” are now providing convenient excuses to those same companies to shut down academic research on those companies and their practices. In some cases there are legitimate concerns. For example, as the article notes, there were concerns about how much Facebook is willing to share regarding ad targeting. That information could be really important for those studying disinformation or civil rights issues. But… it could also be used in nefarious ways:

Facebook released an API for its political ad archive and invited the NYU team to be early testers. Using the API, Edelson and McCoy began studying the spread of disinformation and misinformation through political ads and quickly realized that the dataset had one glaring gap: It didn’t include any data on who the ads were targeting, something they viewed as key to understanding advertisers’ malintent. For example, last year, the Trump campaign ran an ad envisioning a dystopian post-Biden presidency, where the world is burning and no one answers 911 calls due to “defunding of the police department.” That ad, Edelson found, had been targeted specifically to married women in the suburbs. “I think that’s relevant context to understanding that ad,” Edelson said.

But Facebook was unwilling to share targeting data publicly. According to Satterfield, that could make it too easy to reverse-engineer a person’s interests and other personal information. If, for instance, a person likes or comments on a given ad, it wouldn’t be too hard to check the targeting data on that ad, if it were public, and deduce that that person meets those targeting criteria. “If you combine those two data sets, you could potentially learn things about the people who engaged with the ad,” Satterfield said.

Legitimate concern… but it also allows the company to shield data that could be really useful to academics. Of course, it doesn’t help that so many people are so distrustful of these big companies that no matter what they do it will be portrayed — sometimes by the very same people — as evil. It was just a few weeks ago that we saw people screaming both when the big internet companies were willing to cave in and pay Rupert Murdoch under the Australian link tax… and when they refused to. Both options were painted as evil.

So, sharing data will inevitably be presented by some as violating people’s privacy, while not sharing data will be presented as hiding from researchers and trying to avoid transparency. And there’s probably some truth in every angle to these stories.

Of course, that all leaves out a better approach these companies could take: give more power to the end users themselves to control their own data. Let the users decide what data is shared and what is not. Let the users decide where and how that data is stored (even if it’s not on the platform itself). But, instead, we just have people yelling about how these companies must both protect everyone’s privacy and give researchers access to see what they’re doing with all this data. I don’t think the “middle ground” laid out in the article is all that tenable. Right now it amounts to creating special exceptions in which academics are “allowed” — under strict conditions — to get access to that data.

The problem with that framing is that the big internet companies still end up in control of the data, rather than the end users. The situation with NYU seems like a perfectly good example. Facebook shouldn’t have to share data from people who don’t consent, but with the Ad Observer, it’s all people who are actually consenting to handing over their own data, and Facebook shouldn’t be in the business of blocking that — even if it’s inevitable that some reporter at some future date will try to spin that into a story claiming that Facebook “violated” privacy because these researchers convinced people to turn over their own info.

Source: Privacy Laws Giving Big Internet Companies A Convenient Excuse To Avoid Academic Scrutiny | Techdirt

The argument Mike makes above is basically a plea for what Sir Tim Berners-Lee, inventor of the World Wide Web, is pleading for and already building with his Solid project and its commercial arm, Inrupt. User data is placed in personal pods / silos, and the user can determine what data is given to whom.

It’s an idealistic scenario that seems to ignore a few things:

  • Who hosts the pods? The host can usually see into them, or at any rate gather metadata (which is often more valuable than the actual data). And who pays for hosting the pods?
  • Will people understand, and be willing to take the time, to curate access to their pod? People already have trouble finding the privacy settings on their social networks, and this promises to be more complex.
  • If a site requires access to data in a pod, won’t people blindly click “accept” without understanding that they are giving away their data? Or will they be coerced into giving away data they don’t want to, because there are no alternatives to using the service?
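The pod model being gestured at here can be sketched in a few lines: the owner stores data and hands each application an explicit per-field grant, and anything outside the grant is refused. All class, field, and app names below are hypothetical; Solid’s real access-control machinery is considerably richer.

```python
# Hypothetical sketch of pod-style access control: the owner grants each
# application an explicit allow-list of fields; everything else is refused.

class Pod:
    def __init__(self, owner):
        self.owner = owner
        self._data = {}     # field name -> value, held by the user
        self._grants = {}   # app id -> set of fields it may read

    def store(self, field, value):
        self._data[field] = value

    def grant(self, app, fields):
        self._grants.setdefault(app, set()).update(fields)

    def read(self, app, field):
        # An app only sees a field the owner explicitly granted it.
        if field not in self._grants.get(app, set()):
            raise PermissionError(f"{app} has no grant for {field!r}")
        return self._data[field]

pod = Pod("alice")
pod.store("email", "alice@example.org")
pod.store("birthdate", "1990-01-01")
pod.grant("newsletter-app", {"email"})

print(pod.read("newsletter-app", "email"))   # allowed -> alice@example.org
try:
    pod.read("newsletter-app", "birthdate")  # never granted -> refused
except PermissionError as e:
    print("denied:", e)
```

Note that this only dodges the first bullet above if the pod host is honest: whoever runs the `Pod` process still sees everything in `_data`.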

The New York Times has a nice article on what he’s doing: He Created the Web. Now He’s Out to Remake the Digital World.

Data Broker Looking To Sell Global Real-Time Vehicle Location Data To Government Agencies, Including The Military

[…]

Putting a couple of middlemen between the app data and the purchase of that data helps agencies steer clear of Constitutional issues related to the Supreme Court’s Carpenter decision, which introduced a warrant mandate for engaging in proxy tracking of people via cell service providers.

But phones aren’t the only objects that generate a wealth of location data. Cars go almost as many places as phones do, providing data brokers with yet another source of possibly useful location data that government agencies might be interested in obtaining access to. Here’s Joseph Cox of Vice with more details:

A surveillance contractor that has previously sold services to the U.S. military is advertising a product that it says can locate the real-time locations of specific cars in nearly any country on Earth. It says it does this by using data collected and sent by the cars and their components themselves, according to a document obtained by Motherboard.

“Ulysses can provide our clients with the ability to remotely geolocate vehicles in nearly every country except for North Korea and Cuba on a near real time basis,” the document, written by contractor The Ulysses Group, reads. “Currently, we can access over 15 billion vehicle locations around the world every month,” the document adds.

Historical data is cool. But what’s even cooler is real-time tracking of vehicle movements. Of course the DoD would be interested in this. It has a drone strike program that’s thirsty for location data and has relied on even more questionable data in the past to make extrajudicial “death from above” decisions.

Phones are reliable snitches. So are cars — a fact that may come as a surprise to car owners who haven’t been paying attention to tech developments over the past several years. Plenty of data is constantly captured by internal “black boxes,” but it tends to be retained only when there’s a collision. The interconnectedness of cars and people’s phones, however, provides new data-gathering opportunities.

Then there are the car manufacturers themselves, which apparently feel driver data is theirs for the taking and are willing to sell it to third parties who are (also apparently) willing to sell all of this to government agencies.

“Vehicle telematics is data transmitted from the vehicle to the automaker or OEM through embedded communications systems in the car,” the Ulysses document continues. “Among the thousands of other data points, vehicle location data is transmitted on a constant and near real time basis while the vehicle is operating.”

This document wasn’t obtained from FOIA requests. It actually couldn’t be — not if Ulysses isn’t currently selling to government agencies. It was actually obtained by Senator Ron Wyden, who shared it with Vice’s tech-related offshoot, Motherboard. As Wyden noted while handing it over, very little is known about these under-the-radar suppliers of location data and their government customers. This company may have no (acknowledged) government customers at this point, but real-time access to vehicle movement is something plenty of government agencies would be willing to pay for.

[…]

Source: Data Broker Looking To Sell Real-Time Vehicle Location Data To Government Agencies, Including The Military | Techdirt

Rabble Rousing Mob who can’t Read Seek Removal of Richard Stallman and Entire FSF Board

Richard Stallman’s return to the Free Software Foundation’s board of directors has drawn condemnation from many people in the free software community. An open letter signed by hundreds of people today called for Stallman to be removed again and for the FSF’s entire board to resign. Letter signers include Neil McGovern, GNOME Foundation executive director and former Debian Project Leader; Deb Nicholson, general manager of the Open Source Initiative; Matthew Garrett, a former member of the FSF board of directors; seven of the eight members of the X.org Foundation board of directors; Elana Hashman of the Debian Technical Committee, Open Source Initiative, and Kubernetes project; Molly de Blanc of the Debian Project and GNOME Foundation; and more than 300 others. That number has been rising quickly today: the open letter contains instructions for signing it.

The letter said all members of the FSF board should be removed because they ‘have enabled and empowered RMS for years. They demonstrate this again by permitting him to rejoin the FSF Board. It is time for RMS to step back from the free software, tech ethics, digital rights, and tech communities, for he cannot provide the leadership we need.’ The letter also called for Stallman to be removed from his position leading the GNU Project. “We urge those in a position to do so to stop supporting the Free Software Foundation,” they wrote. “Refuse to contribute to projects related to the FSF and RMS. Do not speak at or attend FSF events, or events that welcome RMS and his brand of intolerance. We ask for contributors to free software projects to take a stand against bigotry and hate within their projects. While doing these things, tell these communities and the FSF why.” UPDATE: For a quick summary of the controversy, long-time Slashdot reader Jogar the Barbarian recommends this article from It’s Foss.

Source: Free Software Advocates Seek Removal of Richard Stallman and Entire FSF Board – Slashdot

From the comments:

Your misleading quoting is mendacious, wrong, and sickening from someone on Slashdot who ought to know better. Here is the RMS quote, as quoted by the MIT chancellor (I’ve bolded the parts that you tried to hide):

RMS:

The injustice is in the word “assaulting”. The term “sexual assault” is so vague and slippery that it facilitates accusation inflation: taking claims that someone did X and leading people to think of it as Y, which is much worse than X.

The accusation quoted is a clear example of inflation. The reference reports the claim that Minsky had sex with one of Epstein’s harem. … Let’s presume that was true (I see no reason to disbelieve it).

The word “assaulting” presumes that he applied force or violence, in some unspecified way, but the article itself says no such thing. Only that they had sex.

We can imagine many scenarios, but the most plausible scenario is that she presented herself to him as entirely willing. Assuming she was being coerced by Epstein, he would have had every reason to tell her to conceal that from most of his associates.

https://news.slashdot.org/comments.pl?sid=18535476&cid=61195002 / Moridineas

This really frightens me. Moridineas, you have provided the precise quote, and it is absolutely clear that you are right. Stallman did not speak in vague metaphors or with sloppy grammar. What was written is clear as crystal, and easily objectively verified by absolutely anyone who bothers to read the quote.

The objective truth here is Stallman DID NOT say that these girls were entirely willing. If he had said that, we would all be having a very different conversation here. But he did not, and that is that. He speculated that they presented as entirely willing. This is a completely different statement, and it is not the moral sin that Stallman is being accused of committing.

And yet, there is an army of angry people adamantly insisting that he said they were entirely willing. People who seem to be otherwise intelligent and capable of understanding English. Every one of these people can read the quote just like you did, and see that he did not say what they insist he said.

So what is motivating this? How can so many otherwise-normal people insist on an obvious lie to the point of insisting that so many people resign? What is wrong with these people? Don’t they care about the truth? Doesn’t that matter?

What good is speaking precisely when people will just change what you say and then crucify you for it?

https://news.slashdot.org/comments.pl?sid=18535476&cid=61195246 / Brain-Fu

California bans website ‘dark patterns’, confusing language when opting out of having your personal info sold

The rule amendments [PDF], just approved by the American state’s Office of Administrative Law, were proposed last October after a set of initial rules for enforcing the California Consumer Privacy Act (CCPA) were adopted last August, a month after CCPA enforcement began.

The CCPA amendments:

  • Clarify that businesses operating offline need to provide a way to opt out of data sales.
  • Establish a standard Opt-Out Icon for notice and consent of data sales.
  • Prohibit designs that impair or subvert a consumer’s choice to opt out.
  • Require that opting out takes no more steps or clicks than opting in.
  • Ban confusing language, like the double negative “Don’t not sell my information,” when presenting an opt-out choice.
  • Forbid asking for personal information not necessary to carry out an opt-out request.
  • Disallow forcing people to scroll through a privacy policy if they’ve opted out, or to review reasons not to opt out.
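The step-parity rule in that list lends itself to a mechanical check. As a purely illustrative sketch (the function name and flow definitions are invented here, and a real audit would count clicks, not list entries), the rule amounts to comparing step counts:

```python
# Illustrative check of the CCPA step-parity rule: opting out may take
# no more steps/clicks than opting in. Flows are modeled as step lists.

def violates_step_parity(opt_in_steps, opt_out_steps):
    """True if the opt-out flow needs more steps than the opt-in flow."""
    return len(opt_out_steps) > len(opt_in_steps)

# A hypothetical dark-pattern site: one click in, four hoops out.
opt_in = ["click 'Accept all'"]
opt_out = ["open privacy policy", "scroll to settings",
           "click 'Do not sell'", "confirm via email link"]

print(violates_step_parity(opt_in, opt_out))  # True: opt-out is harder
```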

[…]

Research published in 2019 identified 22 companies selling manipulative interface design (“dark patterns”) as a service, and found 1,841 examples of such techniques on 1,267 of the 11,000 websites surveyed.

Source: California bans website ‘dark patterns’, confusing language when opting out of having your personal info sold • The Register

Bag maker Peak Design calls out Amazon for its copycat ways

Amazon is well-known for its copycat ways, but it’s not so often that another company calls it out on it, much less in a way that’s funny. But that’s exactly what Peak Design did today when it uploaded a video to YouTube comparing its Everyday Sling to a camera bag from AmazonBasics that shares the exact same name.

“It looks suspiciously like the Peak Design Everyday Sling, but you don’t pay for all those needless bells and whistles,” the video’s narrator declares. Those extras include things like a lifetime warranty, BlueSign approved recycled materials, as well as the time and effort the company’s design team put into creating the bag.

In its most on-the-nose jab at Amazon, the video includes a “dramatization” of how the AmazonBasics design team created their take on the bag. “Keep combing that data,” a googly-eyed executive tells his subordinate, who’s played here by Peak Design founder and CEO Peter Dering. “Let’s Basic that bad boy,” they say after finding the Everyday Sling.

Source: Bag maker Peak Design calls out Amazon for its copycat ways | Engadget

ICANN Refuses to Accredit Pirate Bay Founder Peter Sunde Due to His ‘Background’

Peter Sunde was one of the key people behind The Pirate Bay in the early years, a role for which he was eventually convicted in Sweden.

While Sunde cut his ties with the notorious torrent site many years ago, he remains an active and vocal personality on the Internet.

[…]

Sunde is also involved with the domain registrar Sarek, which caters to technology enthusiasts and people interested in a fair and balanced Internet, promising low prices for domain registrations.

As a business, everything was going well for Sarek. The company made several deals with domain registries to offer cheap domains, but one element was missing: to resell the most popular domains, including .com and .org, it has to be accredited by ICANN.

ICANN is the main oversight body for the Internet’s global domain name system. Among other things, it develops policies for accredited registrars to prevent abuse and illegal use of domain names. Without this accreditation, reselling several popular domains simply isn’t an option.

ICANN Denies Accreditation

Sunde and the Sarek team hoped to overcome this hurdle and started the ICANN accreditation process in 2019. After a long period of waiting, the organization recently informed Sunde that his application was denied.

[…]

“After the background check I get a reply that I’ve checked the wrong boxes,” Sunde wrote. “Not only that, but they’re also upset I was wanted by Interpol.”

The Twitter thread didn’t go unnoticed by ICANN who contacted Sunde over the phone to offer clarification. As it turns out, the ‘wrong box’ issue isn’t the main problem, as he explains in a follow-up Twitter thread.

“I got some sort of semi-excuse regarding their claim that I lied on my application. They also said that they agreed it wasn’t fraud or similar really. So both of the points they made regarding the denial were not really the reason,” Sunde clarifies.

ICANN is Not Comfortable With Sunde

Over the phone, ICANN explained that the matter was discussed internally. This unnamed group of people concluded that the organization is ‘not comfortable’ doing business with him.

“They basically admitted that they don’t like me. They’ve banned me for nothing else than my political views. This is typical discrimination. Considering I have no one to appeal to except them, it’s concerning, since they control the actual fucking center of the internet.”

[…]

Making matters worse, ICANN will also keep the registration fee, so this whole ordeal is costing money as well.

Source: ICANN Refuses to Accredit Pirate Bay Founder Peter Sunde Due to His ‘Background’ * TorrentFreak

Yup. ICANN. It’s an autocracy answerable to no one but itself. This is clearly visible in its processes, which almost led to the whole .org TLD (.org is not-for-profit!) being sold off for massive profit to an ex-board member.

India’s New Cyber Law Goes Live: Subtracts Safe Harbor Protections, Adds Compelled Assistance Demands For Intermediaries, Massive surveillance infrastructure

New rules for social media companies and other hosts of third-party content have just gone into effect in India. The proposed changes to India’s 2018 Intermediary Guidelines are now live, allowing the government to insert itself into content moderation efforts and make demands of tech companies some simply won’t be able to comply with.

Now, under the threat of fines and jail time, platforms like Twitter (itself a recent combatant of the Indian government over its attempts to silence people protesting yet another bad law) can be held directly responsible for any “illegal” content they host, even as the government pays lip service to honoring the long-standing intermediary protections that immunized platforms from the actions of their users.

[…]

turns a whole lot of online discourse into potentially illegal content.

[…]

The new mandates demand platforms operating in India proactively scan all uploaded content to ensure it complies with India’s laws.

The Intermediary shall deploy technology based automated tools or appropriate mechanisms, with appropriate controls, for proactively identifying and removing or disabling public access to unlawful information or content.

This obligation is not only impossible to comply with (and prohibitively expensive for smaller platforms, sites, and online forums that don’t have access to AI tools), it opens platforms up to prosecution simply for being unable to do the impossible. And complying with the directive undercuts the Safe Harbour protections granted to intermediaries by the Indian government.

If you’re moderating all content prior to it going “live,” it’s no longer possible to claim you’re not acting as an editor or curator. The Indian government grants Safe Harbour to “passive” conduits of information. The new law pretty much abolishes those because complying with the law turns intermediaries from “passive” to “active.”

Broader and broader it gets, with the Indian government rewriting its “national security only” demands to cover “investigation or detection or prosecution or prevention of offence(s).” In other words, the Indian government can force platforms and services to provide information and assistance within 72 hours of notification to almost any government agency for almost any reason.

This assistance includes “tracing the origin” of illegal content — something that may be impossible to comply with since some platforms don’t collect enough personal information to make identification possible. Any information dug up by intermediaries in support of government action must be retained for 180 days whether or not the government makes use of it.

More burdens: any intermediary with more than 5 million users must establish permanent residence in India and provide on-call service 24/7. Takedown compliance has been accelerated from 36 hours of notification to 24 hours.

Very few companies will be able to comply with most of these directives. No company will be able to comply with them completely. And with the government insisting on adding more “eye of the beholder” content to the illegal list, the law encourages pre-censorship of any questionable content and invites regulators and other government agencies to get into the moderation business.

[…]

Source: India’s New Cyber Law Goes Live: Subtracts Safe Harbor Protections, Adds Compelled Assistance Demands For Intermediaries | Techdirt

Extension shows the monopoly big tech has on your browsing – you always route your traffic through them

A new extension for Google Chrome has made explicit how most popular sites on the internet load resources from one or more of Google, Facebook, Microsoft and Amazon.

The extension, Big Tech Detective, reports on the extent to which websites exchange data with these four companies, and can optionally block sites that make such requests. Every such request also effectively acts as a tracker, since the provider sees the IP address and other request data from the user’s web browser.
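At its core, a detector like this has to map each outgoing request’s hostname onto one of the four companies. A minimal sketch of that classification, using a heavily abbreviated and partly hypothetical domain list (the real extension matches against far more domains):

```python
# Suffix-match request hostnames against an (abbreviated) big-four list.
BIG_FOUR = {
    "google.com": "Google", "gstatic.com": "Google",
    "google-analytics.com": "Google", "doubleclick.net": "Google",
    "facebook.com": "Facebook", "fbcdn.net": "Facebook",
    "amazonaws.com": "Amazon", "amazon.com": "Amazon",
    "microsoft.com": "Microsoft", "azureedge.net": "Microsoft",
}

def classify(hostname):
    """Return the company a request hostname belongs to, or None."""
    parts = hostname.lower().split(".")
    # Check every suffix: a.b.example.com -> b.example.com -> example.com
    for i in range(len(parts) - 1):
        suffix = ".".join(parts[i:])
        if suffix in BIG_FOUR:
            return BIG_FOUR[suffix]
    return None

requests = ["www.google-analytics.com", "cdn.example.org",
            "chartbeat.map.fastly.net", "s3.eu-west-1.amazonaws.com"]
hits = {h: classify(h) for h in requests}
# hits["www.google-analytics.com"] == "Google"
# hits["s3.eu-west-1.amazonaws.com"] == "Amazon"
```

As the article goes on to note, a hit on `amazonaws.com` only proves the site uses AWS hosting, not that Amazon analyses the payload, which is exactly the munging-things-together criticism below.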

The extension was built by investigative data reporter Dhruv Mehrotra in association with the Anti-Monopoly Fund at the Economic Security Project, a non-profit research group financed by the US-based Hopewell Fund in Washington DC.

Cara Rose Defabio, editor at the Economic Security Project, said: “Big Tech Detective is a tool that pulls the curtain back on exactly how much control these corporations have over the internet. Our browser extension lets you ‘lock out’ Google, Amazon, Facebook and Microsoft, alerting you when a website you’re using pings any one of these companies… you can’t do much online without your data being routed through one of these giants.”

[…]

That, perhaps, is an exaggeration. Big Tech Detective will spot sites that use Google Analytics to report on web traffic, or host Google ads, or use a service hosted on Amazon Web Services such as Chartbeat analytics – which embeds a script that pings its service every 15 seconds according to this post – but that is not the same as routing your data through the services.

In terms of actual data collection and analysis, we would guess that Google and Facebook are ahead of AWS and Microsoft, and munging together infrastructure services with analytics and tracking is perhaps unhelpful.

Another point to note is that a third-party service hosted on a public cloud server at AWS, Microsoft or Google is distinct from services run directly by those companies. Public cloud is an infrastructure choice and the infrastructure provider does not get that data other than being able to see that there is traffic.

[Note: This is untrue. They also get to see where the traffic is from, where it goes to, how it is routed, how many connections there are, and the size of the traffic being sent. This metadata is often more valuable than the actual data being sent.]

Dependencies

Defabio made the point, though, that the companies behind public cloud have huge power, referencing Amazon’s decision to “refuse hosting service to the right wing social app Parler, effectively shutting it down.” While there was substantial popular approval of the action, it was Amazon’s decision, rather than one based on law and regulation.

She argued that these giant corporations should be broken up, so that Amazon the retailer is separate from AWS, for example. The release of the new extension is timed to coincide with US government hearings on digital competition, drawing on research from last year.

[…]

Source: Ever felt that a few big tech companies are following you around the internet? That’s because … they are • The Register

1Password has none, KeePass has none… So why are there seven embedded trackers in the LastPass Android app?

A security researcher has recommended against using the LastPass password manager Android app after noting seven embedded trackers. The software’s maker says users can opt out if they want.

[…]

The Exodus report on LastPass shows seven trackers in the Android app, including four from Google for the purpose of analytics and crash reporting, as well as others from AppsFlyer, MixPanel, and Segment. Segment, for instance, gathers data for marketing teams, and claims to offer a “single view of the customer”, profiling users and connecting their activity across different platforms, presumably for tailored adverts.

LastPass has many free users – is it a problem if its owner seeks to monetise them in some way? Kuketz said it is. Typically, the way trackers like this work is that the developer compiles code from the tracking provider into their application. The gathered information can be used to build up a profile of the user’s interests from their activities, and target them with ads.

Even the app developers do not know what data is collected and transmitted to the third-party providers, said Kuketz, and the integration of proprietary code could introduce security risks and unexpected behaviour, as well as being a privacy risk. These things do not belong in password managers, which are security-critical, he said.

Kuketz also investigated what data is transmitted by inspecting the network traffic. He found that this included details about the device being used, the mobile operator, the type of LastPass account, and the Google Advertising ID (which can connect data about the user across different apps). During use, the data also shows when new passwords are created and what type they are. Kuketz did not suggest that actual passwords or usernames are transmitted, but did note the absence of any opt-out dialogs or information for the user about the data being sent to third parties. In his view, the presence of the trackers demonstrates a suboptimal attitude to security. Kuketz recommended changing to a different password manager, such as the open-source KeePass.

Do all password apps contain such trackers? Not according to Exodus. 1Password has none. KeePass has none. The open-source Bitwarden has two for Google Firebase analytics and Microsoft Visual Studio crash reporting. Dashlane has four. LastPass does appear to have more than its rivals. And yes, lots of smartphone apps have trackers: today, we’re talking about LastPass.

[…]

“All LastPass users, regardless of browser or device, are given the option to opt-out of these analytics in their LastPass Privacy Settings, located in their account here: Account Settings > Show Advanced Settings > Privacy.

Source: 1Password has none, KeePass has none… So why are there seven embedded trackers in the LastPass Android app? • The Register

This option was definitely not easy to find.

I just bought a year’s subscription as I thought the $2.11 / month price point was OK. They added on a few cents and then told me this price was excl. VAT. Not doing very well on the trustworthiness scale here.

Use AdNauseum to Block Ads and Confuse Google’s Advertising

In an online world in which countless systems are trying to figure out what exactly you enjoy so they can serve you up advertising about it, it really fucks up their profiling mechanisms when they think you like everything. And to help you out with this approach, I recommend checking out the Chrome/Firefox extension AdNauseum. You won’t find it on the Chrome Web Store, however, as Google frowns at extensions that screw up Google’s efforts to show you advertising for some totally inexplicable reason. You’ll have to install it manually, but it’s worth it.

[…]

AdNauseum works on a different principle. As Lee McGuigan writes over at the MIT Technology Review:

“AdNauseam is like conventional ad-blocking software, but with an extra layer. Instead of just removing ads when the user browses a website, it also automatically clicks on them. By making it appear as if the user is interested in everything, AdNauseam makes it hard for observers to construct a profile of that person. It’s like jamming radar by flooding it with false signals. And it’s adjustable. Users can choose to trust privacy-respecting advertisers while jamming others. They can also choose whether to automatically click on all the ads on a given website or only some percentage of them.”

McGuigan goes on to describe the various experiments he worked on with AdNauseum founder Helen Nissenbaum, allegedly proving that the extension can make it past Google’s various checks for fraudulent or otherwise illegitimate clicks on advertising. Google, as you might expect, denies the experiments actually prove anything, and maintains that a “vast majority” of these kinds of clicks are detected and ignored.
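The adjustable jamming the quote describes boils down to a probabilistic filter over detected ads. A toy sketch of that logic (not AdNauseam’s actual code; all names are illustrative):

```python
# Toy model of click-jamming: auto-click some configurable fraction of
# detected ads, skipping advertisers on a user-supplied trust list.
import random

def pick_ads_to_click(ads, click_rate=1.0, trusted=(), rng=None):
    """Return the subset of ads to auto-click.

    ads        -- iterable of (ad_id, advertiser_domain) pairs
    click_rate -- fraction of non-trusted ads to click (0.0 .. 1.0)
    trusted    -- advertiser domains the user chose not to jam
    """
    rng = rng or random.Random()
    clicks = []
    for ad_id, advertiser in ads:
        if advertiser in trusted:
            continue                      # respect trusted advertisers
        if rng.random() < click_rate:
            clicks.append(ad_id)          # feed a false interest signal
    return clicks

ads = [("a1", "tracker.example"), ("a2", "nice-ads.example"),
       ("a3", "tracker.example")]
# click_rate=1.0 clicks every non-trusted ad
jammed = pick_ads_to_click(ads, click_rate=1.0,
                           trusted={"nice-ads.example"})
# jammed == ["a1", "a3"]
```

Clicking everything is what floods the profile with noise; the `click_rate` knob corresponds to the "only some percentage" option mentioned in the quote.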

[…]

Once you’ve installed AdNauseum, you’ll be presented with three simple options:

[Screenshot: David Murphy]

Feel free to enable all three, but heed AdNauseum’s warning: You probably don’t want to use the extension alongside another adblocker, as the two will conflict and you probably won’t see any added benefit.

As with most adblockers, there are plenty of options you can play with if you dig deeper into AdNauseum’s settings.

[…]

note that AdNauseum still (theoretically) generates revenue for the sites tracking you. That in itself might cause you to adopt a nuclear approach vs. an obfuscation-by-noise approach. Your call.

Source: Use AdNauseum to Block Ads and Confuse Google’s Advertising

CNAME DNS-based tracking defies your browser privacy defenses

Boffins based in Belgium have found that a DNS-based technique for bypassing defenses against online tracking has become increasingly common and represents a growing threat to both privacy and security.

In a research paper to be presented in July at the 21st Privacy Enhancing Technologies Symposium (PETS 2021), KU Leuven-affiliated researchers Yana Dimova, Gunes Acar, Lukasz Olejnik, Wouter Joosen, and Tom Van Goethem delve into the increasing adoption of CNAME-based tracking, which abuses DNS records to erase the distinction between first-party and third-party contexts.

“This tracking scheme takes advantage of a CNAME record on a subdomain such that it is same-site to the including web site,” the paper explains. “As such, defenses that block third-party cookies are rendered ineffective.”

[…]

A technique known as DNS delegation or DNS aliasing has been known since at least 2007 and showed up in privacy-focused research papers in 2010 [PDF] and 2014 [PDF]. Based on the use of CNAME DNS records, this technique for evading anti-tracking defenses drew attention two years ago when open source developer Raymond Hill implemented a defense in the Firefox version of his uBlock Origin content-blocking extension.

CNAME cloaking involves a web publisher putting a subdomain – e.g. trackyou.example.com – under the control of a third party through the use of a CNAME DNS record. This makes a third-party tracker associated with the subdomain look like it belongs to the first-party domain, example.com.
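The corresponding defense is conceptually simple: resolve the subdomain’s canonical name and flag it when the CNAME points at a different registrable domain. A rough sketch of that check, using a naive eTLD+1 approximation where real blockers consult the Public Suffix List (the domain names are the hypothetical ones from the example above):

```python
# Flag a subdomain as CNAME-cloaked when its canonical name resolves to
# a different registrable domain than the site the user is visiting.

def registrable_domain(host):
    """Naive eTLD+1: last two labels. Real code uses the Public Suffix List."""
    parts = host.lower().rstrip(".").split(".")
    return ".".join(parts[-2:])

def is_cname_cloaked(subdomain, canonical_name):
    """True if the subdomain is an alias for a different registrable domain."""
    return registrable_domain(subdomain) != registrable_domain(canonical_name)

# e.g. a DNS lookup (not performed here) might reveal:
#   trackyou.example.com.  CNAME  example.trackercompany.net.
print(is_cname_cloaked("trackyou.example.com",
                       "example.trackercompany.net"))   # True: third party
print(is_cname_cloaked("www.example.com",
                       "cdn.example.com"))              # False: same site
```

This is also why the article notes Chrome falls short: without a DNS-resolving API available to extensions, a blocker never sees the canonical name and the check cannot run.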

The boffins from Belgium studied the CNAME-based tracking ecosystem and found 13 different companies using the technique. They claim that the usage of such trackers is growing, up 21 per cent over the past 22 months, and that CNAME trackers can be found on almost 10 per cent of the top 10,000 websites.

What’s more, sites with CNAME trackers have an average of about 28 other tracking scripts. They also leak data because of how cookies are scoped: cookies set for a site are sent to all of its subdomains, including ones CNAME-mapped to a tracker. The researchers found cookie data leaks on 7,377 sites (95 per cent) of the 7,797 sites that used CNAME tracking. Most of these were the result of third-party analytics scripts setting cookies on the first-party domain.
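The leak mechanism is just ordinary cookie domain-matching. A simplified sketch of the RFC 6265 rule (omitting details like the IP-address exception) shows why a site-wide analytics cookie automatically reaches the cloaked subdomain:

```python
def cookie_sent_to(cookie_domain: str, request_host: str) -> bool:
    """Simplified RFC 6265 domain-match: the request host equals the
    cookie's domain or is a subdomain of it."""
    cookie_domain = cookie_domain.lstrip(".").lower()
    request_host = request_host.lower()
    return request_host == cookie_domain or request_host.endswith("." + cookie_domain)

# An analytics cookie scoped to the whole site...
print(cookie_sent_to("example.com", "www.example.com"))       # True
# ...is also sent to the CNAME-cloaked tracker subdomain:
print(cookie_sent_to("example.com", "trackyou.example.com"))  # True
```

Because the tracker sits behind trackyou.example.com, the browser treats it as first-party and attaches every site-scoped cookie to requests sent there.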

Not all of these leaks exposed sensitive data but some did. Out of 103 websites with login functionality tested, the researchers found 13 that leaked sensitive info, including the user’s full name, location, email address, and authentication cookie.

“This suggests that this scheme is actively dangerous,” wrote Dr Lukasz Olejnik, one of the paper’s co-authors and an independent privacy researcher and consultant, in a blog post. “It is harmful to web security and privacy.”

[…]

In addition, the researchers report that ad-tech biz Criteo switches specifically to CNAME tracking – putting its cookies into a first-party context – when its trackers encounter users of Safari, which has strong third-party cookie defenses.

According to Olejnik, CNAME tracking can defeat most anti-tracking techniques and there are few defenses against it.

Firefox running the add-on uBlock Origin 1.25+ can see through CNAME deception. So too can Brave, which recently had to repair its CNAME defenses due to problems it created with Tor.

Chrome falls short because it does not have a suitable DNS-resolving API for uBlock Origin to hook into. Safari will limit the lifespan of cookies set via CNAME cloaking but doesn’t provide a way to undo the domain disguise to determine whether the subdomain should be blocked outright.

[…]

Source: What’s CNAME of your game? This DNS-based tracking defies your browser privacy defenses • The Register

WhatsApp: Users Who Don’t Accept Privacy Terms Can’t Read or Send Texts

After causing a huge virtual meltdown with the announcement of its new privacy policy, and then postponing the implementation of said policy due to online fury, WhatsApp has spent the last few weeks trying not to stir up trouble. However, it has just revealed what will happen to users who do not accept its new privacy policy by the May 15 deadline.

WhatsApp has apparently been emailing some of its merchant partners to inform them that it will “slowly ask” users to accept the new privacy policy “in order to have full functionality” of the app, according to TechCrunch, which saw an email and confirmed its veracity with WhatsApp. The email also pointed to a public WhatsApp FAQ page titled, “What happens on the effective date?”

The FAQ page states that WhatsApp will not delete the accounts of users who do not accept the new terms, but that they won’t be able to use it like they normally do.

“If you haven’t accepted by then, WhatsApp will not delete your account. However, you won’t have full functionality of WhatsApp until you accept. For a short time, you’ll be able to receive calls and notifications, but won’t be able to read or send messages from the app,” WhatsApp wrote.

If the “for a short time” part has you scratching your head, WhatsApp did elaborate, sort of. Users who do not accept the new privacy policy by May 15 will be considered inactive users and subject to WhatsApp’s existing policy on that front, as detailed below.

“To maintain security, limit data retention, and protect the privacy of our users, WhatsApp accounts are generally deleted after 120 days of inactivity,” WhatsApp states. “Content stored locally on a user’s device prior to account deletion will remain until WhatsApp is deleted from the device. When a user reregisters for WhatsApp on the same device, their locally stored content will reappear.”

Source: WhatsApp: Users Who Don’t Accept Privacy Terms Can’t Read Texts

Aussie shakedown: Facebook ‘Endangered Public Safety’ by Blocking News During Pandemic, According to Australia – after forcing FB to pay for news on the site

Facebook has endangered public safety by blocking news on the platform in Australia during the covid-19 pandemic, according to Australia’s Treasurer Josh Frydenberg, a high-ranking official in the country’s ruling Liberal Party.

Frydenberg appeared on the local TV program “Today,” on Friday morning, Australia time, and insisted the government was not going to tolerate Facebook’s “unnecessary” and “wrong” attempts to bully Australia into submission.

“He endangered public safety,” Frydenberg said of Facebook CEO Mark Zuckerberg. “In the middle of a pandemic, people weren’t able to get access to information about the vaccines.”

Facebook started blocking all news content for Australian users on Thursday in retaliation for the government’s plan to implement a new law that would force large tech companies to pay news publishers for linking to their content. Google previously threatened to block all searches in Australia over the law but has since signed agreements with several large Australian publishers.

[…]

Source: Facebook ‘Endangered Public Safety’ by Blocking News During Pandemic According to Australia

Australia facepalms as Facebook blocks bookstores, sport, health services instead of just news

Facebook is being flayed in Australia after its ban on sharing of links to news publications caught plenty of websites that have nothing to do with news.

The Social Network™ announced its ban with a blog post and the sudden erasure of all posts on certain Facebook pages.

Links to news outlets big and small (including The Register) are currently impossible to post to Facebook from within Australia. Australian Facebook users don’t see news links posted from outside the nation.

That is just as Facebook intended: a demonstration of its displeasure with Australia’s News Media Bargaining Code, a newly legislated scheme that forces Facebook to negotiate payments with local news publishers for the privilege of linking to their content.

But when Facebook implemented its ban, an online bookstore, charities, and even a domestic violence support service saw their Facebook presences erased. Australia’s national basketball and rugby bodies also saw their pages sent to the sin bin.


Facebook said that the breadth of its blocks is regrettable, but as Australia’s law “does not provide clear guidance on the definition of news content, we have taken a broad definition in order to respect the law as drafted.”

This leaves Facebook in the interesting position of telling advertisers it offers superior micro-targeting services, while telling the world it is unable to tell the difference between a newspaper and a bookshop.

Australia’s Prime Minister Scott Morrison used Facebook to say “Facebook’s actions to unfriend Australia today, cutting off essential information services on health and emergency services, were as arrogant as they were disappointing.”

While Australia facepalms at Facebook’s clumsiness, publishers and politicians around the world have expressed dismay that Facebook has banned news and, by doing so, again demonstrated its ability to shape public discourse.

That Facebook’s contribution to public conversations has so often been to infuse them with misinformation, then promise to do better by ensuring that higher-quality content such as public interest journalism becomes more prominent, has not gone unnoticed.

[…]

Source: Australia facepalms as Facebook blocks bookstores, sport, health services instead of just news • The Register

So a country tells FB to pay for news or not show it, and is then surprised when stuff starts disappearing from FB?

And to complete the shakedown by the Aussie government, read: Facebook ‘Endangered Public Safety’ by Blocking News During Pandemic According to Australia

FortressIQ just comes out and says it: To really understand business processes, feed your staff’s screen activity to an AI

In a sign that interest in process mining is heating up, vendor FortressIQ is launching an analytics platform with a novel approach to understanding how users really work – it “videos” their on-screen activity for later analysis.

According to the San Francisco-based biz, its Process Intelligence platform will allow organisations to be better prepared for business transformation, the rollout of new applications, and digital projects by helping customers understand how people actually do their jobs, as opposed to how the business thinks they work.

The goal of process mining itself is not new. German vendor Celonis has already marked out the territory and raised approximately $290m in a funding round in November 2019, when it was valued at $2.5bn.

Celonis works by recording users’ application logs and, by applying machine learning to data across a number of applications, purports to figure out how processes work in real life. FortressIQ, which raised $30m in May 2020, uses a different approach – recording all of a user’s screen activity and using AI and computer vision to try to understand their behaviour.

Pankaj Chowdhry, CEO at FortressIQ, told The Register that what the company had built was a “virtual process analyst”: a software agent which taps into a user’s video card on the desktop or laptop. It streams a low-bandwidth version of what is occurring on the screen to provide the raw data for the machine-learning models.

“We built machine learning and computer vision AI that will, in essence, watch that movie, and convert it into a structured activity,” he said.

In an effort to reassure those who could be forgiven for being a little freaked out by the recording of users’ every on-screen move, the company said it anonymises the data it analyses to show which processes are better than others, rather than which user is better. Similarly, it said it guarantees the privacy of on-screen data.

Nonetheless, organisations should be aware of potential pushback from staff when deploying the technology, said Tom Seal, senior research director with IDC.

“Businesses will be somewhat wary about provoking that negative reaction, particularly with the remote working that’s been triggered by COVID,” he said.

At the same time, remote working may be where the approach to process mining can show its worth, helping to understand how people adapt their working patterns in the current conditions.

FortressIQ may have an advantage over rivals in that it captures all data from the users’ screen, rather than the applications the organisation thinks should be involved in a process, said Seal. “It’s seeing activity that the application logs won’t pick up, so there is an advantage there.”

Of course, there is still the possibility that users get around prescribed processes using Post-It notes, whiteboards and phone apps, which nobody should put beyond them.

Celonis and FortressIQ come from very different places. The German firm has a background in engineering and manufacturing, with an early use case at Siemens led by Lars Reinkemeyer who has since joined the software vendor as veep for customer transformation. He literally wrote the book on process mining while at the University of California, Santa Barbara. FortressIQ, on the other hand, was founded by Chowdhry who worked as AI leader at global business process outsourcer Genpact before going it alone.

And it’s not just these two players. Software giant SAP has bought Signavio, a specialist in business process analysis and management, in a deal said to be worth $1.2bn to help understand users’ processes as it readies them for the cloud and application upgrades.

Source: FortressIQ just comes out and says it: To really understand business processes, feed your staff’s screen activity to an AI • The Register