The Linkielist

Linking ideas with the world

Android, iOS beam telemetry to Google, Apple even when you tell them not to

In a recently released research paper, titled “Mobile Handset Privacy: Measuring The Data iOS and Android Send to Apple And Google” [PDF], Douglas Leith, chair of computer systems in the school of computer science and statistics at Trinity College Dublin, Ireland, documents how iPhones and Android devices phone home regardless of the wishes of their owners.

According to Leith, Android and iOS handsets share data about their salient characteristics with their makers every 4.5 minutes on average.

“The phone IMEI, hardware serial number, SIM serial number and IMSI, handset phone number etc are shared with Apple and Google,” the paper says. “Both iOS and Google Android transmit telemetry, despite the user explicitly opting out of this.”

These transmissions occur even when the iOS Analytics & Improvements option is turned off and the Android Usage & Diagnostics option is turned off.

Such data may be considered personal information under privacy rules, depending upon the applicable laws and whether it can be associated with an individual. It can also have legitimate uses.

Of the two mobile operating systems, Android is claimed to be the more chatty: According to Leith, “Google collects a notably larger volume of handset data than Apple.”

Within 10 minutes of starting up, a Google Pixel handset sent about 1MB of data to Google, compared to 42KB of data sent to Apple in a similar startup scenario. And when the handsets sit idle, the Pixel will send about 1MB every 12 hours, about 20x more than the 52KB sent over the same period by an idle iPhone.

[…]

Leith’s tests excluded data related to services selected by device users, like those related to search, cloud storage, maps, and the like. Instead, they focused on the transmission of data shared when there’s no logged-in user, including the IMEI number, hardware serial number, SIM serial number, phone number, device IDs (UDID, Ad ID, RDID, etc), location, telemetry, cookies, local IP address, device Wi-Fi MAC address, and nearby Wi-Fi MAC addresses.

This last category is noteworthy because it has privacy implications for other people on the same network. As the paper explains, iOS shares additional data: the handset Bluetooth UniqueChipID, the Secure Element ID (used for Apple Pay), and the Wi-Fi MAC addresses of nearby devices, specifically other devices using the same network gateway.

“When the handset location setting is enabled, these MAC addresses are also tagged with the GPS location,” the paper says. “Note that it takes only one device to tag the home gateway MAC address with its GPS location and thereafter the location of all other devices reporting that MAC address to Apple is revealed.”
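The mechanism the paper warns about can be sketched in a few lines. This is a hypothetical illustration of the principle, not Apple’s actual pipeline; the MAC address and coordinates are invented:

```python
# Illustrative sketch (NOT Apple's actual system): once any single
# handset reports its gateway's MAC address together with a GPS fix,
# every later handset seen behind that same MAC can be located.

gateway_locations = {}  # gateway MAC address -> (lat, lon)

def ingest_report(gateway_mac, gps_fix=None):
    """Record a handset report; gps_fix is present only when the
    reporting device has location services enabled."""
    if gps_fix is not None:
        gateway_locations[gateway_mac] = gps_fix

def infer_location(gateway_mac):
    """Locate ANY device behind this gateway, even one that never
    shared GPS itself."""
    return gateway_locations.get(gateway_mac)

# Device A has location enabled and tags the home router's MAC:
ingest_report("a4:2b:b0:aa:bb:cc", gps_fix=(53.3438, -6.2546))
# Device B on the same network shares no GPS, yet is now locatable:
ingest_report("a4:2b:b0:aa:bb:cc")
print(infer_location("a4:2b:b0:aa:bb:cc"))
```

One opted-in neighbour is enough to de-anonymise the whole household, which is exactly the point the paper makes.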

[…]

Google also has a plausible fine-print justification: Leith notes that Google’s analytics options menu includes the text, “Turning off this feature doesn’t affect your device’s ability to send the information needed for essential services such as system updates and security.” However, Leith argues that this “essential” data is extensive and beyond reasonable user expectations.

As for Apple, you might think a company that proclaims “What happens on your iPhone stays on your iPhone” on billboards, and “Your data. Your choice,” on its website would want to explain its permission-defying telemetry. Yet the iPhone maker did not respond to a request for comment.

Source: Android, iOS beam telemetry to Google, Apple even when you tell them not to – study • The Register

Privacy Laws Giving Big Internet Companies A Convenient Excuse To Avoid Academic Scrutiny – or not? A Balanced argument

For years we’ve talked about how the fact that no one really understands privacy leads to very bad attempts at regulating privacy in ways that do more harm than good. They often don’t do anything that actually protects privacy — and instead screw up lots of other important things, from competition to free speech. In fact, in some ways, there’s a big conflict between open internet systems and privacy. There are ways to get around that — usually by moving the data from centralized silos out towards the ends of the network — but that’s rarely happening in practice. I mean, going back over thirteen years ago, we were writing about the inherent conflict between Facebook’s (then) open social graph and privacy. Yet, at the time, Facebook was cheered on for opening up its social graph. It was creating a more “open” internet, an internet that others could build upon.

But, of course, over the years things have changed. A lot. In 2018, after the Cambridge Analytica scandal, Mark Zuckerberg more or less admitted that the world was telling Facebook to lock everything down again:

I do think early on on the platform we had this very idealistic vision around how data portability would allow all these different new experiences, and I think the feedback that we’ve gotten from our community and from the world is that privacy and having the data locked down is more important to people than maybe making it easier to bring more data and have different kinds of experiences.

As we pointed out in response — this was worrisome thinking, because it would likely take us away from a better world in which the data is more controlled by end users. Instead, so many people have now come to think that “protecting privacy” means making the big internet companies lock down our data rather than the much better approach which would be giving us full control over our own data. Those are two different things, that only sometimes look alike.

I say all of that as preamble in suggesting people read an excellent Protocol article by Issie Lapowsky, which — in a very thoughtful and nuanced way — highlights the unfortunate conflict between academic researchers trying to study the big internet companies and the companies’ insistence that they need to keep data private. We’ve touched on this topic before ourselves, in covering the still ongoing fight between Facebook and NYU regarding NYU’s Ad Observer project.

That project involves getting individuals to install a browser extension that shares data back to NYU about what ads the user sees. Facebook insists that it violates their privacy rules — and points to how much trouble it got in (and the massive fines it paid) over the Cambridge Analytica mess. Though, as we explained then, the scenarios are quite different.

Lapowsky’s article goes further — noting how Facebook told her that the Ad Observer project was collecting data without the user’s permission, which worried the PhD student who was working on the project. It turns out that was false. The project only collects data from the user who installs it and agrees (giving permission) to collect the data in question.

But the story and others in the article highlight an unfortunate situation: the somewhat haphazard demands on the big internet companies to “protect privacy” are now providing convenient excuses to those same companies to shut down academic research on those companies and their practices. In some cases there are legitimate concerns. For example, as the article notes, there were concerns about how much Facebook is willing to share regarding ad targeting. That information could be really important for those studying disinformation or civil rights issues. But… it could also be used in nefarious ways:

Facebook released an API for its political ad archive and invited the NYU team to be early testers. Using the API, Edelson and McCoy began studying the spread of disinformation and misinformation through political ads and quickly realized that the dataset had one glaring gap: It didn’t include any data on who the ads were targeting, something they viewed as key to understanding advertisers’ malintent. For example, last year, the Trump campaign ran an ad envisioning a dystopian post-Biden presidency, where the world is burning and no one answers 911 calls due to “defunding of the police department.” That ad, Edelson found, had been targeted specifically to married women in the suburbs. “I think that’s relevant context to understanding that ad,” Edelson said.

But Facebook was unwilling to share targeting data publicly. According to Satterfield, that could make it too easy to reverse-engineer a person’s interests and other personal information. If, for instance, a person likes or comments on a given ad, it wouldn’t be too hard to check the targeting data on that ad, if it were public, and deduce that that person meets those targeting criteria. “If you combine those two data sets, you could potentially learn things about the people who engaged with the ad,” Satterfield said.
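The re-identification worry Satterfield describes can be illustrated with a toy join of the two datasets. All names, ads, and targeting attributes here are invented for illustration:

```python
# Toy illustration of the stated concern: if per-ad targeting criteria
# were public, anyone could join them against visible engagement
# (likes/comments) and infer attributes of individual users.

ad_targeting = {  # the would-be public dataset
    "ad_123": {"gender": "female", "marital_status": "married",
               "region": "suburban"},
}

engagements = [  # already-visible dataset: who liked/commented on what
    {"user": "alice", "ad": "ad_123"},
]

def inferred_attributes(user):
    """Union of the targeting criteria of every ad the user engaged with."""
    attrs = {}
    for e in engagements:
        if e["user"] == user:
            attrs.update(ad_targeting.get(e["ad"], {}))
    return attrs

print(inferred_attributes("alice"))
```

Each engagement narrows the user down to the intersection of the ad’s targeting criteria, which is why publishing targeting data wholesale is genuinely risky even though withholding it also blocks researchers.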

Legitimate concern… but it also allows the company to shield data that could be really useful to academics. Of course, it doesn’t help that so many people are so distrustful of these big companies that no matter what they do it will be portrayed — sometimes by the very same people — as evil. It was just a few weeks ago that we saw people screaming both when the big internet companies were willing to cave in and pay Rupert Murdoch the Australian link tax… and when they refused to. Both options were painted as evil.

So, sharing data will inevitably be presented by some as violating people’s privacy, while not sharing data will be presented as hiding from researchers and trying to avoid transparency. And there’s probably some truth in every angle to these stories.

Of course, that all leaves out a better approach these companies could take: give more power to the end users themselves to control their own data. Let the users decide what data is shared and what is not. Let the users decide where and how that data is stored (even if it’s not on the platform itself). But, instead, we just have people yelling about how these companies both have to protect everyone’s privacy and give access to researchers to see what they’re doing with all this data. I don’t think the “middle ground” laid out in the article is all that tenable. Right now it basically amounts to creating special exceptions in which academics are “allowed” — under strict conditions — to get access to that data.

The problem with that framing is that the big internet companies still end up in control of the data, rather than the end users. The situation with NYU seems like a perfectly good example. Facebook shouldn’t have to share data from people who don’t consent, but with the Ad Observer, it’s all people who are actually consenting to handing over their own data, and Facebook shouldn’t be in the business of blocking that — even if it’s inevitable that some reporter at some future date will try to spin that into a story claiming that Facebook “violated” privacy because these researchers convinced people to turn over their own info.

Source: Privacy Laws Giving Big Internet Companies A Convenient Excuse To Avoid Academic Scrutiny | Techdirt

The argument Mike makes above is basically a plea for what Sir Tim Berners-Lee, inventor of the World Wide Web, is pleading for and already building with his Solid project and his company Inrupt. User data is placed in personal pods / silos and the user can determine what data is given to whom.

It’s an idealistic scenario that seems to ignore a few things:

  • Who hosts the pods? The host can usually see into them, or at any rate gather metadata (which is often more valuable than the actual data). And who pays for hosting the pods?
  • Will people understand, and be willing to take the time, to curate their pod access? People have trouble finding the privacy settings on their social networks, and this promises to be more complex.
  • If a site requires access to data in a pod, won’t people blindly click accept without understanding that they are giving away their data? Or will they be coerced into giving away data they don’t want to because there are no alternatives to using the service?

The New York Times has a nice article on what he’s doing: He Created the Web. Now He’s Out to Remake the Digital World.

Data Broker Looking To Sell Global Real-Time Vehicle Location Data To Government Agencies, Including The Military

[…]

Putting a couple of middlemen between the app data and the purchase of data helps agencies steer clear of Constitutional issues related to the Supreme Court’s Carpenter decision, which introduced a warrant mandate for engaging in proxy tracking of people via cell service providers.

But phones aren’t the only objects that generate a wealth of location data. Cars go almost as many places as phones do, providing data brokers with yet another source of possibly useful location data that government agencies might be interested in obtaining access to. Here’s Joseph Cox of Vice with more details:

A surveillance contractor that has previously sold services to the U.S. military is advertising a product that it says can locate the real-time locations of specific cars in nearly any country on Earth. It says it does this by using data collected and sent by the cars and their components themselves, according to a document obtained by Motherboard.

“Ulysses can provide our clients with the ability to remotely geolocate vehicles in nearly every country except for North Korea and Cuba on a near real time basis,” the document, written by contractor The Ulysses Group, reads. “Currently, we can access over 15 billion vehicle locations around the world every month,” the document adds.

Historical data is cool. But what’s even cooler is real-time tracking of vehicle movements. Of course the DoD would be interested in this. It has a drone strike program that’s thirsty for location data and has relied on even more questionable data in the past to make extrajudicial “death from above” decisions.

Phones are reliable snitches. So are cars — a fact that may come as a surprise to car owners who haven’t been paying attention to tech developments over the past several years. Plenty of data is constantly captured by internal “black boxes,” but it tends to be retained only when there’s a collision. But the interconnectedness of cars and people’s phones provides new data-gathering opportunities.

Then there are the car manufacturers themselves, which apparently feel driver data is theirs for the taking and are willing to sell it to third parties who are (also apparently) willing to sell all of this to government agencies.

“Vehicle telematics is data transmitted from the vehicle to the automaker or OEM through embedded communications systems in the car,” the Ulysses document continues. “Among the thousands of other data points, vehicle location data is transmitted on a constant and near real time basis while the vehicle is operating.”

This document wasn’t obtained from FOIA requests. It actually couldn’t be — not if Ulysses isn’t currently selling to government agencies. It was actually obtained by Senator Ron Wyden, who shared it with Vice’s tech-related offshoot, Motherboard. As Wyden noted while handing it over, very little is known about these under-the-radar suppliers of location data and their government customers. This company may have no (acknowledged) government customers at this point, but real-time access to vehicle movement is something plenty of government agencies would be willing to pay for.

[…]

Source: Data Broker Looking To Sell Real-Time Vehicle Location Data To Government Agencies, Including The Military | Techdirt

India’s New Cyber Law Goes Live: Subtracts Safe Harbor Protections, Adds Compelled Assistance Demands For Intermediaries, Massive surveillance infrastructure

New rules for social media companies and other hosts of third-party content have just gone into effect in India. The proposed changes to India’s 2018 Intermediary Guidelines are now live, allowing the government to insert itself into content moderation efforts and make demands that some tech companies simply won’t be able to comply with.

Now, under the threat of fines and jail time, platforms like Twitter (itself a recent combatant of the Indian government over its attempts to silence people protesting yet another bad law) can be held directly responsible for any “illegal” content they host, even as the government attempts to pay lip service to the long-standing intermediary protections that immunized platforms from the actions of their users.

[…]

turns a whole lot of online discourse into potentially illegal content.

[…]

The new mandates demand platforms operating in India proactively scan all uploaded content to ensure it complies with India’s laws.

The Intermediary shall deploy technology based automated tools or appropriate mechanisms, with appropriate controls, for proactively identifying and removing or disabling public access to unlawful information or content.

This obligation is not only impossible to comply with (and is prohibitively expensive for smaller platforms and sites/online forums that don’t have access to AI tools), it opens up platforms to prosecution simply for being unable to do the impossible. And complying with this directive to implement this demand undercuts the Safe Harbour protections granted to intermediaries by the Indian government.

If you’re moderating all content prior to it going “live,” it’s no longer possible to claim you’re not acting as an editor or curator. The Indian government grants Safe Harbour to “passive” conduits of information. The new law pretty much abolishes those because complying with the law turns intermediaries from “passive” to “active.”

Broader and broader it gets, with the Indian government rewriting its “national security only” demands to cover “investigation or detection or prosecution or prevention of offence(s).” In other words, the Indian government can force platforms and services to provide information and assistance within 72 hours of notification to almost any government agency for almost any reason.

This assistance includes “tracing the origin” of illegal content — something that may be impossible to comply with since some platforms don’t collect enough personal information to make identification possible. Any information dug up by intermediaries in support of government action must be retained for 180 days whether or not the government makes use of it.

More burdens: any intermediary with more than 5 million users must establish permanent residence in India and provide on-call service 24/7. Takedown compliance has been accelerated from 36 hours of notification to 24 hours.

Very few companies will be able to comply with most of these directives. No company will be able to comply with them completely. And with the government insisting on adding more “eye of the beholder” content to the illegal list, the law encourages pre-censorship of any questionable content and invites regulators and other government agencies to get into the moderation business.

[…]

Source: India’s New Cyber Law Goes Live: Subtracts Safe Harbor Protections, Adds Compelled Assistance Demands For Intermediaries | Techdirt

Extension shows the monopoly big tech has on your browsing – you always route your traffic through them

A new extension for Google Chrome has made explicit how most popular sites on the internet load resources from one or more of Google, Facebook, Microsoft and Amazon.

The extension, Big Tech Detective, shows the extent to which websites exchange data with these four companies by reporting on requests to their servers. It can also optionally block sites that make such requests. Any such request is also effectively a tracker, since the provider sees the IP address and other request data from the user’s web browser.
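The core check such an extension performs can be sketched roughly like this. The domain list below is a small invented sample; Big Tech Detective’s real lists and matching rules are far more extensive:

```python
# Rough sketch of the classification a tool like Big Tech Detective
# performs: map a request's hostname onto one of the four companies
# by domain suffix. The suffix list here is a small illustrative sample.
from urllib.parse import urlparse

BIG_TECH_SUFFIXES = {
    "google.com": "Google", "googleapis.com": "Google",
    "gstatic.com": "Google", "doubleclick.net": "Google",
    "facebook.com": "Facebook", "fbcdn.net": "Facebook",
    "amazonaws.com": "Amazon", "amazon.com": "Amazon",
    "microsoft.com": "Microsoft", "azureedge.net": "Microsoft",
}

def classify_request(url):
    """Return the company a request URL points at, or None."""
    host = urlparse(url).hostname or ""
    for suffix, company in BIG_TECH_SUFFIXES.items():
        if host == suffix or host.endswith("." + suffix):
            return company
    return None

print(classify_request("https://fonts.gstatic.com/s/font.woff2"))
print(classify_request("https://cdn.example.org/app.js"))
```

Suffix matching like this is also why the tool cannot distinguish a tracker from a third-party service merely hosted on a big-tech cloud — the caveat The Register raises below.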

The extension was built by investigative data reporter Dhruv Mehrotra in association with the Anti-Monopoly Fund at the Economic Security Project, a non-profit research group financed by the US-based Hopewell Fund in Washington DC.

Cara Rose Defabio, editor at the Economic Security Project, said: “Big Tech Detective is a tool that pulls the curtain back on exactly how much control these corporations have over the internet. Our browser extension lets you ‘lock out’ Google, Amazon, Facebook and Microsoft, alerting you when a website you’re using pings any one of these companies… you can’t do much online without your data being routed through one of these giants.”

[…]

That, perhaps, is an exaggeration. Big Tech Detective will spot sites that use Google Analytics to report on web traffic, or host Google ads, or use a service hosted on Amazon Web Services such as Chartbeat analytics – which embeds a script that pings its service every 15 seconds according to this post – but that is not the same as routing your data through the services.

In terms of actual data collection and analysis, we would guess that Google and Facebook are ahead of AWS and Microsoft, and munging together infrastructure services with analytics and tracking is perhaps unhelpful.

Another point to note is that a third-party service hosted on a public cloud server at AWS, Microsoft or Google is distinct from services run directly by those companies. Public cloud is an infrastructure choice and the infrastructure provider does not get that data other than being able to see that there is traffic.

[Note: This is untrue. They also get to see where the traffic is from, where it goes to, how it is routed, how many connections there are, and the size of the traffic being sent. This metadata is often more valuable than the actual data being sent]

Dependencies

Defabio made the point, though, that the companies behind public cloud have huge power, referencing Amazon’s decision to “refuse hosting service to the right wing social app Parler, effectively shutting it down.” While there was substantial popular approval of the action, it was Amazon’s decision, rather than one based on law and regulation.

She argued that these giant corporations should be broken up, so that Amazon the retailer is separate from AWS, for example. The release of the new extension is timed to coincide with US government hearings on digital competition, drawing on research from last year.

[…]

Source: Ever felt that a few big tech companies are following you around the internet? That’s because … they are • The Register

1Password has none, KeePass has none… So why are there seven embedded trackers in the LastPass Android app?

A security researcher has recommended against using the LastPass password manager Android app after noting seven embedded trackers. The software’s maker says users can opt out if they want.

[…]

The Exodus report on LastPass shows seven trackers in the Android app, including four from Google for the purpose of analytics and crash reporting, as well as others from AppsFlyer, MixPanel, and Segment. Segment, for instance, gathers data for marketing teams, and claims to offer a “single view of the customer”, profiling users and connecting their activity across different platforms, presumably for tailored adverts.

LastPass has many free users – is it a problem if its owner seeks to monetise them in some way? Kuketz said it is. Typically, the way trackers like this work is that the developer compiles code from the tracking provider into their application. The gathered information can be used to build up a profile of the user’s interests from their activities, and target them with ads.

Even the app developers do not know what data is collected and transmitted to the third-party providers, said Kuketz, and the integration of proprietary code could introduce security risks and unexpected behaviour, as well as being a privacy risk. These things do not belong in password managers, which are security-critical, he said.

Kuketz also investigated what data is transmitted by inspecting the network traffic. He found that this included details about the device being used, the mobile operator, the type of LastPass account, and the Google Advertising ID (which can connect data about the user across different apps). During use, the data also shows when new passwords are created and what type they are. Kuketz did not suggest that actual passwords or usernames are transmitted, but did note the absence of any opt-out dialogs, or information for the user about the data being sent to third parties. In his view, the presence of the trackers demonstrates a suboptimal attitude to security. Kuketz recommended changing to a different password manager, such as the open-source KeePass.

Do all password apps contain such trackers? Not according to Exodus. 1Password has none. KeePass has none. The open-source Bitwarden has two for Google Firebase analytics and Microsoft Visual Studio crash reporting. Dashlane has four. LastPass does appear to have more than its rivals. And yes, lots of smartphone apps have trackers: today, we’re talking about LastPass.

[…]

“All LastPass users, regardless of browser or device, are given the option to opt-out of these analytics in their LastPass Privacy Settings, located in their account here: Account Settings > Show Advanced Settings > Privacy.”

Source: 1Password has none, KeePass has none… So why are there seven embedded trackers in the LastPass Android app? • The Register

This option was definitely not easy to find.

I just bought a year’s subscription as I thought the $2.11 / month price point was OK. They added on a few cents and then told me this price was excl. VAT. Not doing very well on the trustworthiness scale here.

Use AdNauseam to Block Ads and Confuse Google’s Advertising

In an online world in which countless systems are trying to figure out what exactly you enjoy so they can serve you up advertising about it, it really fucks up their profiling mechanisms when they think you like everything. And to help you out with this approach, I recommend checking out the Chrome/Firefox extension AdNauseam. You won’t find it on the Chrome Web Store, however, as Google frowns on extensions that screw up Google’s efforts to show you advertising, for some totally inexplicable reason. You’ll have to install it manually, but it’s worth it.

[…]

AdNauseam works on a different principle. As Lee McGuigan writes over at the MIT Technology Review:

“AdNauseam is like conventional ad-blocking software, but with an extra layer. Instead of just removing ads when the user browses a website, it also automatically clicks on them. By making it appear as if the user is interested in everything, AdNauseam makes it hard for observers to construct a profile of that person. It’s like jamming radar by flooding it with false signals. And it’s adjustable. Users can choose to trust privacy-respecting advertisers while jamming others. They can also choose whether to automatically click on all the ads on a given website or only some percentage of them.”
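The principle McGuigan describes can be sketched in a few lines. This is a toy illustration of the idea, not AdNauseam’s actual code:

```python
# Toy sketch of obfuscation-by-noise: hide every ad as a blocker would,
# then "click" a configurable fraction of the untrusted ones so the
# resulting interest profile is noise rather than signal.
import random

def process_ads(ads, click_fraction=1.0, trusted=frozenset(), rng=random):
    """Return (hidden, clicked) lists. Ads from 'trusted' advertisers
    are hidden but never auto-clicked."""
    hidden, clicked = [], []
    for ad in ads:
        hidden.append(ad)                 # block: never shown to the user
        if ad["advertiser"] in trusted:
            continue                      # user chose to spare this one
        if rng.random() < click_fraction:
            clicked.append(ad)            # simulated click = false signal
    return hidden, clicked

ads = [{"advertiser": "a"}, {"advertiser": "b"}, {"advertiser": "c"}]
hidden, clicked = process_ads(ads, click_fraction=1.0, trusted={"b"})
# all three ads hidden; "a" and "c" auto-clicked, trusted "b" spared
```

The `click_fraction` knob corresponds to the adjustable behaviour the quote mentions: click every ad, or only some percentage of them.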

McGuigan goes on to describe the various experiments he worked on with AdNauseam founder Helen Nissenbaum, allegedly proving that the extension can make it past Google’s various checks for fraudulent or otherwise illegitimate clicks on advertising. Google, as you might expect, denies the experiments actually prove anything, and maintains that a “vast majority” of these kinds of clicks are detected and ignored.

[…]

Once you’ve installed AdNauseam, you’ll be presented with three simple options:

[Screenshot: David Murphy]

Feel free to enable all three, but heed AdNauseam’s warning: You probably don’t want to use the extension alongside another adblocker, as the two will conflict and you probably won’t see any added benefit.

As with most adblockers, there are plenty of options you can play with if you dig deeper into AdNauseam’s settings.

[…]

note that AdNauseam still (theoretically) generates revenue for the sites tracking you. That in itself might cause you to adopt a nuclear approach vs. an obfuscation-by-noise approach. Your call.

Source: Use AdNauseam to Block Ads and Confuse Google’s Advertising

CNAME DNS-based tracking defies your browser privacy defenses

Boffins based in Belgium have found that a DNS-based technique for bypassing defenses against online tracking has become increasingly common and represents a growing threat to both privacy and security.

In a research paper to be presented in July at the 21st Privacy Enhancing Technologies Symposium (PETS 2021), KU Leuven-affiliated researchers Yana Dimova, Gunes Acar, Lukasz Olejnik, Wouter Joosen, and Tom Van Goethem delve into the increasing adoption of CNAME-based tracking, which abuses DNS records to erase the distinction between first-party and third-party contexts.

“This tracking scheme takes advantage of a CNAME record on a subdomain such that it is same-site to the including web site,” the paper explains. “As such, defenses that block third-party cookies are rendered ineffective.”

[…]

A technique known as DNS delegation or DNS aliasing has been known since at least 2007 and showed up in privacy-focused research papers in 2010 [PDF] and 2014 [PDF]. Based on the use of CNAME DNS records, this mechanism for evading anti-tracking defenses drew attention two years ago when open source developer Raymond Hill implemented a defense in the Firefox version of his uBlock Origin content blocking extension.

CNAME cloaking involves having a web publisher put a subdomain – e.g. trackyou.example.com – under the control of a third-party through the use of a CNAME DNS record. This makes a third-party tracker associated with the subdomain look like it belongs to the first-party domain, example.com.
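The uncloaking idea behind defenses like uBlock Origin’s can be sketched as follows. This is greatly simplified: real DNS lookups are replaced here by a plain dict, and the tracker list is a small sample; the actual extension relies on Firefox’s DNS-resolution API:

```python
# Simplified sketch of CNAME uncloaking: follow a first-party-looking
# subdomain's CNAME chain and check whether it lands on a known tracker
# domain. The tracker list is a small illustrative sample.

KNOWN_TRACKER_DOMAINS = {"eulerian.net", "at-o.net", "criteo.com"}

def uncloak(hostname, cname_records):
    """Follow CNAMEs (supplied as a dict standing in for real DNS
    lookups) and return the final canonical name, guarding loops."""
    seen = set()
    while hostname in cname_records and hostname not in seen:
        seen.add(hostname)
        hostname = cname_records[hostname]
    return hostname

def is_cloaked_tracker(hostname, cname_records):
    final = uncloak(hostname, cname_records)
    return any(final == d or final.endswith("." + d)
               for d in KNOWN_TRACKER_DOMAINS)

# The publisher's "first-party" subdomain secretly aliases a tracker:
records = {"trackyou.example.com": "example.eulerian.net"}
print(is_cloaked_tracker("trackyou.example.com", records))
```

Because third-party cookie blockers key off the hostname the browser sees (`trackyou.example.com`), they treat the request as first-party unless they perform exactly this kind of resolution first, which is why the defense only works in browsers that expose DNS results to extensions.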

The boffins from Belgium studied the CNAME-based tracking ecosystem and found 13 different companies using the technique. They claim that the usage of such trackers is growing, up 21 per cent over the past 22 months, and that CNAME trackers can be found on almost 10 per cent of the top 10,000 websites.

What’s more, sites with CNAME trackers have an average of about 28 other tracking scripts. They also leak data due to the way web architecture works. The researchers found cookie data leaks on 7,377 sites (95%) out of the 7,797 sites that used CNAME tracking. Most of these were the result of third-party analytics scripts setting cookies on the first-party domain.

Not all of these leaks exposed sensitive data but some did. Out of 103 websites with login functionality tested, the researchers found 13 that leaked sensitive info, including the user’s full name, location, email address, and authentication cookie.

“This suggests that this scheme is actively dangerous,” wrote Dr Lukasz Olejnik, one of the paper’s co-authors, an independent privacy researcher, and consultant, in a blog post. “It is harmful to web security and privacy.”

[…]

In addition, the researchers report that ad tech biz Criteo switches specifically to CNAME tracking – putting its cookies into a first-party context – when its trackers encounter users of Safari, which has strong third-party cookie defenses.

According to Olejnik, CNAME tracking can defeat most anti-tracking techniques and there are few defenses against it.

Firefox running the add-on uBlock Origin 1.25+ can see through CNAME deception. So too can Brave, which recently had to repair its CNAME defenses due to problems it created with Tor.

Chrome falls short because it does not have a suitable DNS-resolving API for uBlock Origin to hook into. Safari will limit the lifespan of cookies set via CNAME cloaking but doesn’t provide a way to undo the domain disguise to determine whether the subdomain should be blocked outright.

[…]

Source: What’s CNAME of your game? This DNS-based tracking defies your browser privacy defenses • The Register

FortressIQ just comes out and says it: To really understand business processes, feed your staff’s screen activity to an AI

In a sign that interest in process mining is heating up, vendor FortressIQ is launching an analytics platform with a novel approach to understanding how users really work – it “videos” their on-screen activity for later analysis.

According to the San Francisco-based biz, its Process Intelligence platform will allow organisations to be better prepared for business transformation, the rollout of new applications, and digital projects by helping customers understand how people actually do their jobs, as opposed to how the business thinks they work.

The goal of process mining itself is not new. German vendor Celonis has already marked out the territory and raised approximately $290m in a funding round in November 2019, when it was valued at $2.5bn.

Celonis works by recording users’ application logs and, by applying machine learning to data across a number of applications, purports to figure out how processes work in real life. FortressIQ, which raised $30m in May 2020, uses a different approach – recording all the user’s screen activity and using AI and computer vision to try to understand all their behaviour.

Pankaj Chowdhry, CEO at FortressIQ, told The Register that what the company had built was a “virtual process analyst”: a software agent that taps into a user’s video card on the desktop or laptop. It streams a low-bandwidth version of what is occurring on the screen to provide the raw data for the machine-learning models.

“We built machine learning and computer vision AI that will, in essence, watch that movie, and convert it into a structured activity,” he said.

In an effort to reassure those who could be forgiven for being a little freaked out by the recording of users’ every on-screen move, the company said it anonymises the data it analyses to show which processes are better than others, rather than which user is better. Similarly, it said it guarantees the privacy of on-screen data.

Nonetheless, businesses should expect potential pushback from staff when deploying the technology, said Tom Seal, senior research director with IDC.

“Businesses will be somewhat wary about provoking that negative reaction, particularly with the remote working that’s been triggered by COVID,” he said.

At the same time, remote working may be where the approach to process mining can show its worth, helping to understand how people adapt their working patterns in the current conditions.

FortressIQ may have an advantage over rivals in that it captures all data from the users’ screen, rather than the applications the organisation thinks should be involved in a process, said Seal. “It’s seeing activity that the application logs won’t pick up, so there is an advantage there.”

Of course, there is still the possibility that users get around prescribed processes using Post-It notes, whiteboards and phone apps, which nobody should put beyond them.

Celonis and FortressIQ come from very different places. The German firm has a background in engineering and manufacturing, with an early use case at Siemens led by Lars Reinkemeyer who has since joined the software vendor as veep for customer transformation. He literally wrote the book on process mining while at the University of California, Santa Barbara. FortressIQ, on the other hand, was founded by Chowdhry who worked as AI leader at global business process outsourcer Genpact before going it alone.

And it’s not just these two players. Software giant SAP has bought Signavio, a specialist in business process analysis and management, in a deal said to be worth $1.2bn to help understand users’ processes as it readies them for the cloud and application upgrades. ®

Source: FortressIQ just comes out and says it: To really understand business processes, feed your staff’s screen activity to an AI • The Register

Cell Phone Location Privacy could be done easily

We all know that our cell phones constantly give our location away to our mobile network operators; that’s how they work. A group of researchers has figured out a way to fix that. “Pretty Good Phone Privacy” (PGPP) protects both user identity and user location using the existing cellular networks. It protects users from fake cell phone towers (IMSI-catchers) and surveillance by cell providers.

It’s a clever system. The players are the user, a traditional mobile network operator (MNO) like AT&T or Verizon, and a new mobile virtual network operator (MVNO). MVNOs aren’t new. They’re intermediaries like Cricket and Boost.

Here’s how it works:

  1. One-time setup: The user’s phone gets a new SIM from the MVNO. All MVNO SIMs are identical.
  2. Monthly: The user pays their bill to the MVNO (credit card or otherwise) and the phone gets anonymous authentication tokens (using Chaum blind signatures) for each time slice (e.g., hour) in the coming month.
  3. Ongoing: When the phone talks to a tower (run by the MNO), it sends a token for the current time slice. This is relayed to an MVNO backend server, which checks the Chaum blind signature of the token. If it’s valid, the MVNO tells the MNO that the user is authenticated, and the user receives a temporary random ID and an IP address. (Again, this is how MVNOs like Boost already work.)
  4. On demand: The user uses the phone normally.
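
Step 2’s anonymous tokens rely on Chaum’s blind signatures: the MVNO signs a token without ever seeing it, so when the token is later presented to a tower it cannot be linked back to the billing account. A toy RSA version (tiny, insecure parameters purely for illustration; the paper’s actual construction and parameters may differ) looks like this:

```python
# Toy Chaum blind signature over RSA, illustrating the anonymous tokens.
import secrets
from math import gcd

# MVNO's RSA key (in reality thousands of bits, not two small primes)
p, q = 61, 53
n = p * q                       # modulus, 3233
e = 17                          # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def blind(token: int):
    """User blinds the token with a random factor r before sending it."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    return (token * pow(r, e, n)) % n, r   # (blinded token, unblinding factor)

def sign(blinded: int) -> int:
    """MVNO signs the blinded value -- it never sees the real token."""
    return pow(blinded, d, n)

def unblind(blind_sig: int, r: int) -> int:
    """User strips the blinding factor, leaving a valid signature."""
    return (blind_sig * pow(r, -1, n)) % n

def verify(token: int, sig: int) -> bool:
    """Anyone holding the public key (n, e) can check the token."""
    return pow(sig, e, n) == token % n

token = 42                      # e.g. derived from a time slice
blinded, r = blind(token)
sig = unblind(sign(blinded), r)
assert verify(token, sig)
```

Because the MVNO only ever saw the blinded value, it can confirm a presented token is genuine without knowing which subscriber it issued it to.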

The MNO doesn’t have to modify its system in any way. The PGPP MVNO implementation is in software. The user’s traffic is sent to the MVNO gateway and then out onto the Internet, potentially even using a VPN.

All connectivity is data connectivity in cell networks today. The user can choose to be data-only (e.g., use Signal for voice), or use the MVNO or a third party for VoIP service that will look just like normal telephony.

The group prototyped and tested everything with real phones in the lab. Their approach adds essentially zero latency, and doesn’t introduce any new bottlenecks, so it doesn’t have performance/scalability problems like most anonymity networks. The service could handle tens of millions of users on a single server, because it only has to do infrequent authentication, though for resilience you’d probably run more.

The paper is here.

Source: Cell Phone Location Privacy | OSINT

I checked Apple’s new privacy ‘nutrition labels.’ Many were false.

[…]

Apple only lets you access iPhone apps through its own App Store, which it says keeps everything safe. It appeared to bolster that idea when it announced in 2020 that it would ask app makers to fill out what are essentially privacy nutrition labels. Just like packaged food has to disclose how much sugar it contains, apps would have to disclose in clear terms how they gobble your data. The labels appear in boxes toward the bottom of app listings. (Click here for my guide on how to read privacy nutrition labels.)

But after studying the labels, I trust the App Store less to protect us. In some ways, Apple uses a narrow definition of privacy that benefits Apple — which has its own profit motivations — more than it benefits us.

Apple’s big privacy product is built on a shaky foundation: the honor system. In tiny print on the detail page of each app label, Apple says, “This information has not been verified by Apple.”

The first time I read that, I did a double take. Apple, which says caring for our privacy is a “core responsibility,” surely knows devil-may-care data harvesters can’t be counted on to act honorably. Apple, which made an estimated $64 billion off its App Store last year, shares in the responsibility for what it publishes.

It’s true that just by asking apps to highlight data practices, Apple goes beyond Google’s rival Play Store for Android phones. It has also promised to soon make apps seek permission to track us, which Facebook has called an abuse of Apple’s monopoly over the App Store.

In an email, Apple spokeswoman Katie Clark-AlSadder said: “Apple conducts routine and ongoing audits of the information provided and we work with developers to correct any inaccuracies. Apps that fail to disclose privacy information accurately may have future app updates rejected, or in some cases, be removed from the App Store entirely if they don’t come into compliance.”

My spot checks suggest Apple isn’t being very effective.

And even when they are filled out correctly, what are Apple’s privacy labels allowing apps to get away with not telling us?

Trust but verify

A tip from a tech-savvy Washington Post reader helped me realize something smelled fishy. He was using a journaling app that claimed not to collect any data but, using some technical tools, he spotted it talking an awful lot to Google.

[…]

To be clear, I don’t know exactly how widespread the falsehoods are on Apple’s privacy labels. My sample wasn’t necessarily representative: There are about 2 million apps, and some big companies, like Google, have yet to even post labels. (They’re only required to do so with new updates.) About 1 in 3 of the apps I checked that claimed they took no data appeared to be inaccurate. “Apple is the only one in a position to do this on all the apps,” says Jackson.

But if a journalist and a talented geek could find so many problems just by kicking over a few stones, why isn’t Apple?

Even after I sent it a list of dubious apps, Apple wouldn’t answer my specific questions, including: How many bad apps has it caught? If being inaccurate means you get the boot, why are some of the ones I flagged still available?

[…]

We need help to fend off the surveillance economy. Apple’s App Store isn’t doing enough, but we also have no alternative. Apple insists on having a monopoly in running app stores for iPhones and iPads. In testimony to Congress about antitrust concerns last summer, Apple CEO Tim Cook argued that Apple alone can protect our security.

Other industries that make products that could harm consumers don’t necessarily get to write the rules for themselves. The Food and Drug Administration sets the standards for nutrition labels. We can debate whether it’s good at enforcement, but at least when everyone has to work with the same labels, consumers can get smart about reading them — and companies face the penalty of law if they don’t tell the truth.

Apple’s privacy labels are not only an unsatisfying product. They should also send a message to lawmakers weighing whether the tech industry can be trusted to protect our privacy on its own.

Source: I checked Apple’s new privacy ‘nutrition labels.’ Many were false.

How to Restore Recently Deleted Instagram Posts – because deleted means: stored somewhere you can’t get at them

Instagram is adding a new “Recently deleted” folder to the app’s menu that temporarily stores posts after you remove them from your profile or archive, giving you the ability to restore deleted posts if you change your mind.

The folder includes sections for photos, IGTV, Reels, and Stories posts. No one else can see your recently deleted posts, but as long as a photo or video is still in the folder, it can be restored. Regular photos, IGTV videos, and Reels remain in the folder for up to 30 days, after which they’re gone forever. Stories stick around for up to 24 hours before they’re permanently removed, but you can still access them in your Stories archive.

[…]

Source: How to Restore Recently Deleted Instagram Posts

It’s nice how they’re framing the fact that they don’t delete your data as a “feature”

Amazon Plans to Install Creepy Always-On Surveillance Cameras in Delivery Vans

Not content to only wield its creepy surveillance infrastructure against warehouse workers and employees considering unionization, Amazon is reportedly gearing up to install perpetually-on cameras inside its fleet of delivery vehicles as well.

A new report from The Information claims that Amazon recently shared the plans in an instructional video sent out to the contractor workers who drive the Amazon-branded delivery vans.

In the video, the company reportedly explains to drivers that the high-tech video cameras will use artificial intelligence to determine when drivers are engaging in risky behavior, and will give out verbal warnings including “Distracted driving,” “No stop detected” and “Please slow down.”

According to a video posted to Vimeo a week ago, the hardware and software for the cameras will be provided through a partnership with California-based company Netradyne, which is also responsible for a platform called Driveri that similarly uses artificial intelligence to analyze a driver’s behavior as they operate a vehicle.

While the camera’s automated feedback will be immediate, other data will also reportedly be stored for later analysis that will help the company to evaluate its fleet of drivers.

Although it’s not clear when Amazon plans to install the cameras or how many of the vehicles in the company’s massive fleet will be outfitted with them, the company told The Information in a statement that the software will be implemented in the spirit of increasing safety precautions and not, you know, bolstering an insidious and growing surveillance apparatus.

Source: Amazon Plans to Install Always-On Surveillance Cameras in Delivery Vans

ProtonMail, Tutanota among authors of letter urging EU to reconsider encryption rules

Encrypted service providers are urging lawmakers to back away from a controversial plan that critics say would undercut effective data protection measures.

ProtonMail, Threema, Tresorit and Tutanota — all European companies that offer some form of encrypted services — issued a joint statement this week declaring that a resolution the European Council adopted on Dec. 14 is ill-advised. That measure calls for “security through encryption and security despite encryption,” which technologists have interpreted as a threat to end-to-end encryption. In recent months governments around the world, including the U.S., U.K., Australia, New Zealand, Canada, India and Japan, have been reigniting conversations about law enforcement officials’ interest in bypassing encryption, as they have sporadically done for years.

In a letter that will be sent to council members on Thursday, the authors write that the council’s stated goal of endorsing encryption, and the council’s argument that law enforcement authorities must rely on accessing electronic evidence “despite encryption,” contradict one another. The advancement of legislation that forces technology companies to guarantee police investigators a way to intercept user messages, for instance, repeatedly has been scrutinized by technology leaders who argue there is no way to stop such a tool from being abused.

The resolution “will threaten the basic rights of millions of Europeans and undermine a global shift towards adopting end-to-end encryption,” say the companies, which offer users either encrypted email, file-sharing or messaging.

“[E]ncryption is an absolute, data is either encrypted or it isn’t, users have privacy or they don’t,” the letter, which was shared with CyberScoop in advance, states. “The desire to give law enforcement more tools to fight crime is obviously understandable. But the proposals are the digital equivalent of giving law enforcement a key to every citizens’ home and might begin a slippery slope towards greater violations of personal privacy.”

[…]

Source: ProtonMail, Tutanota among authors of letter urging EU to reconsider encryption rules

Firefox 85 removes support for Flash and adds protection against supercookies

Mozilla has released Firefox 85, ending support for the Adobe Flash Player plugin and adding ways to block supercookies to enhance user privacy. Mozilla noted in a blog post that supercookies store user identifiers and are much more difficult to delete and block than ordinary cookies. It further noted that the changes it is making through network partitioning in Firefox 85 will “reduce the effectiveness of cache-based supercookies by eliminating a tracker’s ability to use them across websites.”

“Trackers can abuse caches to create supercookies and can use connection identifiers to track users. But by isolating caches and network connections to the website they were created on, we make them useless for cross-site tracking,” Mozilla noted.

It explained that network partitioning works by splitting the Firefox browser cache on a per-website basis, a technical solution that prevents websites from tracking users as they move across the web. Mozilla also noted that removing support for Flash had little impact on page load time. The development was first reported by ZDNet.
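
The partitioning idea itself is simple: the cache key gains the top-level site, so a tracker resource cached while visiting one site is a cache miss on every other site. A minimal sketch (a hypothetical `PartitionedCache`, not Firefox's implementation):

```python
# Sketch of cache partitioning: key the HTTP cache by the top-level
# site as well as the resource URL, so a "supercookie" stored via
# tracker.example cannot be re-read when embedded on another site.

class PartitionedCache:
    def __init__(self):
        self._store = {}

    def get(self, top_level_site: str, resource_url: str):
        return self._store.get((top_level_site, resource_url))

    def put(self, top_level_site: str, resource_url: str, body: bytes):
        self._store[(top_level_site, resource_url)] = body

cache = PartitionedCache()
cache.put("news.example", "https://tracker.example/id.js", b"uid=123")

# Same tracker resource embedded on another site: cache miss, so the
# stored identifier does not follow the user across sites.
assert cache.get("shop.example", "https://tracker.example/id.js") is None
assert cache.get("news.example", "https://tracker.example/id.js") == b"uid=123"
```

An unpartitioned cache, keyed on the URL alone, would return `b"uid=123"` from both sites, which is exactly the cross-site signal trackers exploit.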

[…]

Source: Firefox 85 removes support for Flash and adds protection against supercookies – Technology News

Indian government slams Facebook over WhatsApp ‘privacy’ update, wants its own Europe-style opt-out switch

The Indian government has sent a fierce letter to Facebook over its decision to update the privacy rules around its WhatsApp chat service, and asked the antisocial media giant to put a halt to the plans.

In an email from the IT ministry to WhatsApp head Will Cathcart, provided to media outlets, the Indian government notes that the proposed changes “raise grave concerns regarding the implications for the choice and autonomy of Indian citizens.”

In particular, the ministry is incensed that European users will be given a choice to opt out of sharing WhatsApp data with the larger Facebook empire, as well as with businesses using the platform to communicate with customers, while Indian users will not.

“This differential and discriminatory treatment of Indian and European users is attracting serious criticism and betrays a lack of respect for the rights and interest of Indian citizens who form a substantial portion of WhatsApp’s user base,” the letter says. It concludes by asking WhatsApp to “withdraw the proposed changes.”

The reason that Europe is being treated as a special case by Facebook is, of course, the existence of the GDPR privacy rules that Facebook has repeatedly flouted and as a result faces pan-European legal action.

Source: Indian government slams Facebook over WhatsApp ‘privacy’ update, wants its own Europe-style opt-out switch • The Register

AI upstart stealing facial data told to delete data and algorithms

Everalbum, a consumer photo app maker that shut down on August 31, 2020, and has since relaunched as a facial recognition provider under the name Paravision, on Monday reached a settlement with the FTC over the 2017 introduction of a feature called “Friends” in its discontinued Ever app. The watchdog agency claims the app deployed facial recognition code to organize users’ photos by default, without permission.

According to the FTC, between July 2018 and April 2019, Everalbum told people that it would not employ facial recognition on users’ content without consent. The company allegedly let users in certain regions – Illinois, Texas, Washington, and the EU – make that choice, but automatically activated the feature for those located elsewhere.

The agency further claims that Everalbum’s use of facial recognition went beyond supporting the Friends feature. The company is alleged to have combined users’ faces with facial images from other sources to create four datasets that informed its facial recognition technology, which became the basis of a face detection service for enterprise customers.

The company also is said to have told consumers using its app that it would delete their data if they deactivated their accounts, but didn’t do so until at least October 2019.

The FTC, in announcing the case and its settlement, said Everalbum/Paravision will be required to delete: photos and videos belonging to Ever app users who deactivated their accounts; all face embeddings – vector representations of facial features – from users who did not grant consent; and “any facial recognition models or algorithms developed with Ever users’ photos or videos.”

The FTC has not done this in past privacy cases with technology companies. According to FTC Commissioner Rohit Chopra, when Google and YouTube agreed to pay $170m over allegations the companies had collected data from children without parental consent, the FTC settlement “allowed Google and YouTube to profit from its conduct, even after paying a civil penalty.”

Likewise, when the FTC voted to approve a settlement with Facebook over claims it had violated its 2012 privacy settlement agreement, he said, Facebook did not have to give up any of its facial recognition technology or data.

“Commissioners have previously voted to allow data protection law violators to retain algorithms and technologies that derive much of their value from ill-gotten data,” said Chopra in a statement [PDF]. “This is an important course correction.”

[…]

Source: Privacy pilfering project punished by FTC purge penalty: AI upstart told to delete data and algorithms • The Register

NYPD posts surveillance technology impact and use policies and requests comments

Beginning January 11, 2020, draft surveillance technology impact and use policies will be posted on the Department’s website. Members of the public are invited to review the impact and use policies and provide feedback on their contents. The impact and use policies provide details of:

  1. the capabilities of the Department’s surveillance technologies,
  2. the rules regulating the use of the technologies,
  3. protections against unauthorized access of the technologies or related data,
  4. surveillance technologies data retention policies,
  5. public access to surveillance technologies data,
  6. external entity access to surveillance technologies data,
  7. Department trainings in the use of surveillance technologies,
  8. internal audit and oversight mechanisms of surveillance technologies,
  9. health and safety reporting on the surveillance technologies, and
  10. potential disparate impacts of the impact and use policies for surveillance technologies.

Source: Draft Policies for Public Comment

WhatsApp delays enforcement of privacy terms by 3 months, following backlash

WhatsApp said on Friday that it won’t enforce the planned update to its data-sharing policy until May 15, weeks after news about the new terms created confusion among its users, exposed the Facebook app to a potential lawsuit, triggered a nationwide investigation and drove tens of millions of its loyal fans to explore alternative messaging apps.

“We’re now moving back the date on which people will be asked to review and accept the terms. No one will have their account suspended or deleted on February 8. We’re also going to do a lot more to clear up the misinformation around how privacy and security works on WhatsApp. We’ll then go to people gradually to review the policy at their own pace before new business options are available on May 15,” the firm said in a blog post.

Source: WhatsApp delays enforcement of privacy terms by 3 months, following backlash | TechCrunch

I’m pretty sure there is no confusion. People just don’t want all their data shared with Facebook when they were promised it wouldn’t be. So they are leaving for Signal and Telegram.

Turkey launches antitrust probe into WhatsApp and Facebook over the new privacy policy. Still can’t export Whatsapp chats.

WhatsApp updated its privacy policy at the turn of the new year. Users were notified via a popup message upon opening the app that their data would now be shared with Facebook and other companies come February 8. Due to Facebook’s notorious history with user data and privacy, the update has since garnered criticism, with many people migrating to alternative messaging apps like Signal and Telegram. Microsoft entered the playing field too, recommending that users use Skype in place of the Facebook-owned WhatsApp.

Turkey has now launched an antitrust probe into Facebook and WhatsApp over the updated privacy policy. Bloomberg reports that:

Turkey’s antitrust board launched an investigation into Facebook Inc. and its messaging service WhatsApp Inc. over new usage terms that have sparked privacy concerns.

[…]

The regulator also said on Monday that it was halting implementation of such terms. The new terms would result in “more data being collected, processed and used by Facebook,” according to the statement.

Source: Turkey launches antitrust probe into WhatsApp and Facebook over the new privacy policy – Neowin

Every Deleted Parler Post, Many With Users’ Location Data, Has Been Archived. Parler goes down. Still can’t export your Whatsapp history.

In the wake of the violent insurrection at the U.S. Capitol by scores of President Trump’s supporters, a lone researcher began an effort to catalogue the posts of social media users across Parler, a platform founded to provide conservative users a safe haven for uninhibited “free speech” — but which ultimately devolved into a hotbed of far-right conspiracy theories, unchecked racism, and death threats aimed at prominent politicians.

The researcher, who asked to be referred to by her Twitter handle, @donk_enby, began with the goal of archiving every post from January 6, the day of the Capitol riot; what she called a bevy of “very incriminating” evidence. According to the Atlantic Council’s Digital Forensic Research Lab, among other sources, Parler is one of several apps used by the insurrectionists to coordinate their breach of the Capitol, in a plan to overturn the 2020 election results and keep Donald Trump in power.

Five people died in the attempt.

Hoping to create a lasting public record for future researchers to sift through, @donk_enby began by archiving the posts from that day. The scope of the project quickly broadened, however, as it became increasingly clear that Parler was on borrowed time. Apple and Google announced that Parler would be removed from their app stores because it had failed to properly moderate posts that encouraged violence and crime. The final nail in the coffin came Saturday when Amazon announced it was pulling Parler’s plug.

In an email first obtained by BuzzFeed News, Amazon officials told the company they planned to boot it from its cloud hosting service, Amazon Web Services, saying it had witnessed a “steady increase” in violent content across the platform. “It’s clear that Parler does not have an effective process to comply with the AWS terms of service,” the email read.

Operating on little sleep, @donk_enby began the work of archiving all of Parler’s posts, ultimately capturing around 99.9 percent of its content. In a tweet early Sunday, @donk_enby said she was crawling some 1.1 million Parler video URLs. “These are the original, unprocessed, raw files as uploaded to Parler with all associated metadata,” she said. Included in this data tranche, now more than 56 terabytes in size, @donk_enby confirmed that the raw video files include GPS metadata pointing to exact locations of where the videos were taken.

@donk_enby later shared a screenshot showing the GPS position of a particular video, with coordinates in latitude and longitude.
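
As an illustration of why raw video files leak location: many cameras write GPS coordinates into MP4 metadata as an ISO 6709 string such as `+38.8897-077.0089/`, and once an archiving tool has extracted that raw string, parsing it is trivial. A sketch (the atom-extraction step is omitted; `parse_iso6709` is a hypothetical helper):

```python
import re

# ISO 6709 point strings pack signed latitude then longitude,
# e.g. "+38.8897-077.0089/" -> (38.8897, -77.0089).
ISO6709 = re.compile(r"([+-]\d+(?:\.\d+)?)([+-]\d+(?:\.\d+)?)")

def parse_iso6709(value: str):
    """Return (latitude, longitude) from an ISO 6709 string, or None."""
    m = ISO6709.match(value)
    if not m:
        return None
    return float(m.group(1)), float(m.group(2))

assert parse_iso6709("+38.8897-077.0089/") == (38.8897, -77.0089)
```

Stripping such metadata server-side before publishing uploads is exactly the precaution Parler reportedly did not take.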

The privacy implications are obvious, but the copious data may also serve as a fertile hunting ground for law enforcement. Federal and local authorities have arrested dozens of suspects in recent days accused of taking part in the Capitol riot, where a Capitol police officer, Brian Sicknick, was fatally wounded after being struck in the head with a fire extinguisher.

[…]

Kirtaner, creator of 420chan — a.k.a. Aubrey Cottle — reported obtaining 6.3 GB of Parler user data from an unsecured AWS server in November. The leak reportedly contained passwords, photos and email addresses from several other companies as well. Parler CEO John Matze later claimed to Business Insider that the data contained only “public information” about users, which had been improperly stored by an email vendor whose contract was subsequently terminated over the leak. (This leak is separate from the debunked claim that Parler was “hacked” in late November, proof of which was determined to be fake.)

In December, Twitter suspended Kirtaner for tweeting, “I’m killing Parler and its fucking glorious,” citing its rules against threatening “violence against an individual or group of people.” Kirtaner’s account remains suspended despite an online campaign urging Twitter’s safety team to reverse its decision. Gregg Housh, an internet activist involved in many early Anonymous campaigns, noted online that the tweet was “not aimed at a person and [was] not actually violent.”

Source: Every Deleted Parler Post, Many With Users’ Location Data, Has Been Archived

ODoH: Cloudflare and Apple design a new privacy-friendly internet protocol for DNS

Engineers at Cloudflare and Apple say they’ve developed a new internet protocol that will shore up one of the biggest holes in internet privacy that many don’t even know exists. Dubbed Oblivious DNS-over-HTTPS, or ODoH for short, the new protocol makes it far more difficult for internet providers to know which websites you visit.

But first, a little bit about how the internet works.

Every time you visit a website, your browser uses a DNS resolver to convert the web address into a machine-readable IP address, which tells it where on the internet the page is hosted. But this process is not encrypted, meaning that every time you load a website the DNS query is sent in the clear. That means the DNS resolver — which might be your internet provider unless you’ve changed it — knows which websites you visit. That’s not great for your privacy, especially since your internet provider can also sell your browsing history to advertisers.

Recent developments like DNS-over-HTTPS (or DoH) have added encryption to DNS queries, making it harder for attackers to hijack DNS queries and point victims to malicious websites instead of the real website you wanted to visit. But that still doesn’t stop the DNS resolvers from seeing which website you’re trying to visit.

Enter ODoH, which builds on previous work by Princeton academics. In simple terms, ODoH decouples DNS queries from the internet user, preventing the DNS resolver from knowing which sites you visit.

Here’s how it works: ODoH wraps a layer of encryption around the DNS query and passes it through a proxy server, which acts as a go-between for the internet user and the website they want to visit. Because the DNS query is encrypted, the proxy can’t see what’s inside, but it acts as a shield to prevent the DNS resolver from seeing who sent the query to begin with.

“What ODoH is meant to do is separate the information about who is making the query and what the query is,” said Nick Sullivan, Cloudflare’s head of research.

In other words, ODoH ensures that only the proxy knows the identity of the internet user and that the DNS resolver only knows the website being requested. Sullivan said that page loading times on ODoH are “practically indistinguishable” from DoH and shouldn’t cause any significant changes to browsing speed.
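
That split of knowledge can be simulated in a few lines. In the sketch below, a keyed XOR stream stands in for the HPKE public-key encryption real ODoH uses, and all names are illustrative; the point is only that the proxy handles an opaque blob plus the client’s address, while the resolver sees the query with no client identity attached.

```python
# Toy simulation of the ODoH split: the proxy learns *who* asked but
# not *what*; the resolver learns the query but not who asked.
import hashlib
import secrets

resolver_key = secrets.token_bytes(32)  # stands in for the resolver's keypair

def seal(key: bytes, nonce: bytes, msg: bytes) -> bytes:
    """Toy stream cipher (XOR against a hash-derived keystream).
    Real ODoH uses HPKE; this is only a stand-in."""
    stream = hashlib.sha256(key + nonce).digest()
    return bytes(m ^ s for m, s in zip(msg, stream))

query = b"example.com A?"
nonce = secrets.token_bytes(16)
blob = seal(resolver_key, nonce, query)

# Proxy's view: (client address, nonce, opaque blob) -- no query content.
proxy_view = ("203.0.113.7", nonce, blob)
assert blob != query

# Resolver's view: (nonce, blob) forwarded by the proxy -- no client address.
recovered = seal(resolver_key, nonce, blob)  # XOR stream is its own inverse
assert recovered == query
```

Only if the proxy and resolver collude can the two views be joined, which is why the protocol requires them to be run by different entities.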

A key component of ODoH working properly is ensuring that the proxy and the DNS resolver never “collude,” in that the two are never controlled by the same entity, otherwise the “separation of knowledge is broken,” Sullivan said. That means having to rely on companies offering to run proxies.

Sullivan said a few partner organizations are already running proxies, allowing for early adopters to begin using the technology through Cloudflare’s existing 1.1.1.1 DNS resolver. But most will have to wait until ODoH is baked into browsers and operating systems before it can be used. That could take months or years, depending on how long it takes for ODoH to be certified as a standard by the Internet Engineering Task Force.

Source: Cloudflare and Apple design a new privacy-friendly internet protocol | TechCrunch

WhatsApp Has Shared Your Data With Facebook since 2016, actually.

Since Facebook acquired WhatsApp in 2014, users have wondered and worried about how much data would flow between the two platforms. Many of them experienced a rude awakening this week, as a new in-app notification raises awareness about a step WhatsApp actually took to share more with Facebook back in 2016.

On Monday, WhatsApp updated its terms of use and privacy policy, primarily to expand on its practices around how WhatsApp business users can store their communications. A pop-up has been notifying users that as of February 8, the app’s privacy policy will change and they must accept the terms to keep using the app. As part of that privacy policy refresh, WhatsApp also removed a passage about opting out of sharing certain data with Facebook: “If you are an existing user, you can choose not to have your WhatsApp account information shared with Facebook to improve your Facebook ads and products experiences.”

Some media outlets and confused WhatsApp users understandably assumed that this meant WhatsApp had finally crossed a line, requiring data-sharing with no alternative. But in fact the company says that the privacy policy deletion simply reflects how WhatsApp has shared data with Facebook since 2016 for the vast majority of its now 2 billion-plus users.

When WhatsApp launched a major update to its privacy policy in August 2016, it started sharing user information and metadata with Facebook. At that time, the messaging service offered its billion existing users 30 days to opt out of at least some of the sharing. If you chose to opt out at the time, WhatsApp will continue to honor that choice. The feature is long gone from the app settings, but you can check whether you’re opted out through the “Request account info” function in Settings.

Meanwhile, the billion-plus users WhatsApp has added since 2016, along with anyone who missed that opt-out window, have had their data shared with Facebook all this time. WhatsApp emphasized to WIRED that this week’s privacy policy changes do not actually impact WhatsApp’s existing practices or behavior around sharing data with Facebook.

[…]

None of this has at any point impacted WhatsApp’s marquee feature: end-to-end encryption. Messages, photos, and other content you send and receive on WhatsApp can only be viewed on your smartphone and the devices of the people you choose to message with. Neither WhatsApp nor Facebook can access your communications.

[…]

In practice, this means that WhatsApp shares a lot of intel with Facebook, including account information like your phone number, logs of how long and how often you use WhatsApp, information about how you interact with other users, device identifiers, and other device details like IP address, operating system, browser details, battery health information, app version, mobile network, language and time zone. Transaction and payment data, cookies, and location information are also all fair game to share with Facebook depending on the permissions you grant WhatsApp in the first place.

[…]

Source: WhatsApp Has Shared Your Data With Facebook for Years, Actually | WIRED

If you’re a WhatsApp user, you’ll have to share your personal data with Facebook from next month – and no, you can’t opt out this time

WhatsApp users must agree to share their personal information with Facebook if they want to continue using the messaging service from next month, according to new terms and conditions.

“As part of the Facebook Companies, WhatsApp receives information from, and shares information with, the other Facebook Companies,” its privacy policy, updated this week, stated.

“We may use the information we receive from them, and they may use the information we share with them, to help operate, provide, improve, understand, customize, support, and market our Services and their offerings, including the Facebook Company Products.”

Yes, said information includes your personal information. In other words, WhatsApp users must allow their personal info to be shared with Facebook and its subsidiaries as and when the tech giant decides, presumably to serve personalized advertising.

If you’re a user today, you have two choices: accept this new arrangement, or stop using the end-to-end encrypted chat app (and use something else, like Signal). The changes are expected to take effect on February 8.

When WhatsApp was acquired by Facebook in 2014, it promised netizens that its instant-messaging app would not collect names, addresses, internet searches, or location data. CEO Jan Koum wrote in a blog post: “Above all else, I want to make sure you understand how deeply I value the principle of private communication. For me, this is very personal. I was born in Ukraine, and grew up in the USSR during the 1980s.

“One of my strongest memories from that time is a phrase I’d frequently hear when my mother was talking on the phone: ‘This is not a phone conversation; I’ll tell you in person.’ The fact that we couldn’t speak freely without the fear that our communications would be monitored by KGB is in part why we moved to the United States when I was a teenager.”

Two years later, however, that vow was eroded by, well, capitalism, and WhatsApp decided it would share its users’ information with Facebook, though only if they consented. That ability to opt out, however, will no longer be available from next month. Koum left in 2018.

That means users who wish to keep using WhatsApp must be prepared to give up personal info such as their names, profile pictures, status updates, phone numbers, contacts lists, and IP addresses, as well as data about their mobile devices, such as model numbers, operating system versions, and network carrier details, to the mothership. If users engage with businesses via the app, order details such as shipping addresses and the amount of money spent can be passed to Facebook, too.

Source: If you’re a WhatsApp user, you’ll have to share your personal data with Facebook from next month – and no, you can’t opt out this time • The Register

Singapore police can now access data from the country’s contact tracing app

With a nearly 80 percent uptake among the country’s population, Singapore’s TraceTogether app is one of the best examples of what a successful centralized contact tracing effort can look like as countries across the world struggle to contain the coronavirus pandemic. To date, more than 4.2 million people in Singapore have downloaded the app or obtained the wearable the government has offered to people.

In contrast to Apple’s and Google’s Exposure Notifications System — which powers the majority of COVID-19 apps out there, including ones put out by states and countries like California and Germany — Singapore’s TraceTogether app and wearable use the country’s own internally developed BlueTrace protocol. The protocol relies on a centralized reporting structure wherein a user’s entire contact log is uploaded to a server administered by a government health authority. Outside of Singapore, only Australia has so far adopted the protocol.
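The privacy trade-off stems from that centralized design. The toy Python sketch below (the function names and the HMAC-based TempID scheme are simplified illustrations, not the published BlueTrace specification) shows why the central authority, unlike in Apple and Google’s decentralized model, can map uploaded contact logs back to identities:

```python
import hmac
import hashlib

# Only the health authority holds this secret; phones receive
# pre-computed TempIDs from it. (Illustrative value, not real.)
AUTHORITY_SECRET = b"health-authority-secret"

def temp_id(user_id: str, epoch: int) -> str:
    # Authority-issued rotating pseudonym. Other phones that overhear it
    # over Bluetooth cannot reverse it -- but the authority can.
    return hmac.new(AUTHORITY_SECRET, f"{user_id}:{epoch}".encode(),
                    hashlib.sha256).hexdigest()[:16]

# Phones exchange TempIDs over Bluetooth and keep a local contact log.
alice_log = [{"temp_id": temp_id("bob", 42), "rssi": -60, "epoch": 42}]

def resolve(log, users=("alice", "bob"), epochs=100):
    # On a positive test, the ENTIRE log is uploaded to the central
    # server, which can regenerate every TempID and re-identify each
    # contact -- the same property that makes the data usable by
    # authorities with legal powers to demand it.
    table = {temp_id(u, e): u for u in users for e in range(epochs)}
    return [table.get(entry["temp_id"], "unknown") for entry in log]

print(resolve(alice_log))  # ['bob']
```

In a decentralized scheme like the Exposure Notifications System, no server-side secret links ephemeral IDs to people, so there is no equivalent central database for police to request.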

In an update the government made to the platform’s privacy policy on Monday, it added a paragraph about how police can use data collected through the platform. “TraceTogether data may be used in circumstances where citizen safety and security is or has been affected,” the new paragraph states. “Authorized Police officers may invoke Criminal Procedure Code (CPC) powers to request users to upload their TraceTogether data for criminal investigations.”

Previous versions of the privacy policy made no mention of the fact police could access any data collected by the app; in fact, the website used to say, “data will only be used for COVID-19 contact tracing.” The government added the paragraph after Singapore’s opposition party asked the Minister of State for Home Affairs if police could use the data for criminal investigations. “We do not preclude the use of TraceTogether data in circumstances where citizens’ safety and security is or has been affected, and this applies to all other data as well,” said Minister Desmond Tan.

What’s happening in Singapore is an example of the exact type of potential privacy nightmare that experts warned might happen with centralized digital contact tracing efforts. Worse, a loss of trust in the privacy of data could push people further away from contact tracing efforts altogether, putting everyone at more risk.

Source: Singapore police can access data from the country’s contact tracing app | Engadget