The Linkielist

Linking ideas with the world

FortressIQ just comes out and says it: To really understand business processes, feed your staff’s screen activity to an AI

In a sign that interest in process mining is heating up, vendor FortressIQ is launching an analytics platform with a novel approach to understanding how users really work – it “videos” their on-screen activity for later analysis.

According to the San Francisco-based biz, its Process Intelligence platform will allow organisations to be better prepared for business transformation, the rollout of new applications, and digital projects by helping customers understand how people actually do their jobs, as opposed to how the business thinks they work.

The goal of process mining itself is not new. German vendor Celonis has already marked out the territory and raised approximately $290m in a funding round in November 2019, when it was valued at $2.5bn.

Celonis works by recording a user’s application logs and, by applying machine learning to data across a number of applications, purports to figure out how processes work in real life. FortressIQ, which raised $30m in May 2020, uses a different approach – recording all the user’s screen activity and using AI and computer vision to try to understand all their behaviour.

Pankaj Chowdhry, CEO at FortressIQ, told The Register that the company had built a “virtual process analyst”, a software agent which taps into a user’s video card on the desktop or laptop. It streams a low-bandwidth version of what is occurring on the screen to provide the raw data for the machine-learning models.

“We built machine learning and computer vision AI that will, in essence, watch that movie, and convert it into a structured activity,” he said.
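
FortressIQ has not published what its “structured activity” looks like, but conceptually the output of such a pipeline is a timestamped event stream. A purely hypothetical sketch of the kind of record a watch-the-movie step might emit (every name and field here is invented for illustration):

    # Hypothetical sketch of a "structured activity" record that a
    # screen-recording-to-events pipeline might emit. FortressIQ has not
    # published its schema; all names and fields are invented.
    from dataclasses import dataclass

    @dataclass
    class ActivityEvent:
        timestamp: float    # when the frame was captured
        application: str    # e.g. "SAP GUI", as inferred by computer vision
        action: str         # e.g. "clicked Submit", "typed into 'PO number'"
        user_id: str        # opaque anonymised ID, per the vendor's claims

    def to_process_trace(events: list[ActivityEvent]) -> list[tuple[str, str]]:
        """Order events by time and keep the (application, action) steps a
        process-mining algorithm would compare across users."""
        ordered = sorted(events, key=lambda e: e.timestamp)
        return [(e.application, e.action) for e in ordered]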

In an effort to reassure those who might be forgiven for being a little freaked out by the recording of users’ every on-screen move, the company said it anonymises the data it analyses to show which processes are better than others, rather than which user is better. Similarly, it said it guarantees the privacy of on-screen data.

Nonetheless, users should be aware of potential pushback when deploying the technology, said Tom Seal, senior research director with IDC.

“Businesses will be somewhat wary about provoking that negative reaction, particularly with the remote working that’s been triggered by COVID,” he said.

At the same time, remote working may be where the approach to process mining can show its worth, helping to understand how people adapt their working patterns in the current conditions.

FortressIQ may have an advantage over rivals in that it captures all data from the users’ screen, rather than the applications the organisation thinks should be involved in a process, said Seal. “It’s seeing activity that the application logs won’t pick up, so there is an advantage there.”

Of course, there is still the possibility that users get around prescribed processes using Post-It notes, whiteboards and phone apps, which nobody should put beyond them.

Celonis and FortressIQ come from very different places. The German firm has a background in engineering and manufacturing, with an early use case at Siemens led by Lars Reinkemeyer who has since joined the software vendor as veep for customer transformation. He literally wrote the book on process mining while at the University of California, Santa Barbara. FortressIQ, on the other hand, was founded by Chowdhry who worked as AI leader at global business process outsourcer Genpact before going it alone.

And it’s not just these two players. Software giant SAP has bought Signavio, a specialist in business process analysis and management, in a deal said to be worth $1.2bn to help understand users’ processes as it readies them for the cloud and application upgrades. ®

Source: FortressIQ just comes out and says it: To really understand business processes, feed your staff’s screen activity to an AI • The Register

Cell Phone Location Privacy could be done easily

We all know that our cell phones constantly give our location away to our mobile network operators; that’s how they work. A group of researchers has figured out a way to fix that. “Pretty Good Phone Privacy” (PGPP) protects both user identity and user location using the existing cellular networks. It protects users from fake cell phone towers (IMSI-catchers) and surveillance by cell providers.

It’s a clever system. The players are the user, a traditional mobile network operator (MNO) like AT&T or Verizon, and a new mobile virtual network operator (MVNO). MVNOs aren’t new. They’re intermediaries like Cricket and Boost.

Here’s how it works:

  1. One-time setup: The user’s phone gets a new SIM from the MVNO. All MVNO SIMs are identical.
  2. Monthly: The user pays their bill to the MVNO (credit card or otherwise) and the phone gets anonymous authentication tokens (using Chaum blind signatures) for each time slice (e.g., hour) in the coming month.
  3. Ongoing: When the phone talks to a tower (run by the MNO), it sends a token for the current time slice. This is relayed to an MVNO backend server, which checks the Chaum blind signature of the token (sketched just after this list). If it’s valid, the MVNO tells the MNO that the user is authenticated, and the user receives a temporary random ID and an IP address. (Again, this is how MVNOs like Boost already work.)
  4. On demand: The user uses the phone normally.
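
Here is a minimal sketch of the Chaum RSA blind-signature mechanics behind those tokens, using toy key sizes and an invented time-slice label; the real PGPP parameters and message encoding are specified in the paper:

    # Minimal sketch of Chaum RSA blind signatures for anonymous tokens.
    # Toy, insecure key sizes; the time-slice label is illustrative.
    import hashlib
    import math
    import secrets

    p, q = 61, 53                        # real deployments use 2048-bit+ primes
    n = p * q
    e = 17
    d = pow(e, -1, (p - 1) * (q - 1))    # signer's private exponent

    def h(msg: bytes) -> int:
        return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

    # User: blind the token for one time slice before sending it to the MVNO
    token = b"time-slice:2021-02-01T10"
    while True:
        r = secrets.randbelow(n - 2) + 2          # blinding factor
        if math.gcd(r, n) == 1:
            break
    blinded = (h(token) * pow(r, e, n)) % n

    # MVNO: signs the blinded value without learning which token it is
    blind_sig = pow(blinded, d, n)

    # User: unblinds to obtain a valid signature on the original token
    sig = (blind_sig * pow(r, -1, n)) % n

    # Later, at the tower: the MVNO backend can verify the token is genuine,
    # but cannot link it to the signing session -- the user stays anonymous
    assert pow(sig, e, n) == h(token)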

The MNO doesn’t have to modify its system in any way. The PGPP MVNO implementation is in software. The user’s traffic is sent to the MVNO gateway and then out onto the Internet, potentially even using a VPN.

All connectivity is data connectivity in cell networks today. The user can choose to be data-only (e.g., use Signal for voice), or use the MVNO or a third party for VoIP service that will look just like normal telephony.

The group prototyped and tested everything with real phones in the lab. Their approach adds essentially zero latency, and doesn’t introduce any new bottlenecks, so it doesn’t have performance/scalability problems like most anonymity networks. The service could handle tens of millions of users on a single server, because it only has to do infrequent authentication, though for resilience you’d probably run more.

The paper is here.

Source: Cell Phone Location Privacy | OSINT

I checked Apple’s new privacy ‘nutrition labels.’ Many were false.

[…]

Apple only lets you access iPhone apps through its own App Store, which it says keeps everything safe. It appeared to bolster that idea when it announced in 2020 that it would ask app makers to fill out what are essentially privacy nutrition labels. Just like packaged food has to disclose how much sugar it contains, apps would have to disclose in clear terms how they gobble your data. The labels appear in boxes toward the bottom of app listings. (Click here for my guide on how to read privacy nutrition labels.)

But after I studied the labels, the App Store is now a product I trust less to protect us. In some ways, Apple uses a narrow definition of privacy that benefits Apple — which has its own profit motivations — more than it benefits us.

Apple’s big privacy product is built on a shaky foundation: the honor system. In tiny print on the detail page of each app label, Apple says, “This information has not been verified by Apple.”

The first time I read that, I did a double take. Apple, which says caring for our privacy is a “core responsibility,” surely knows devil-may-care data harvesters can’t be counted on to act honorably. Apple, which made an estimated $64 billion off its App Store last year, shares in the responsibility for what it publishes.

It’s true that just by asking apps to highlight data practices, Apple goes beyond Google’s rival Play Store for Android phones. It has also promised to soon make apps seek permission to track us, which Facebook has called an abuse of Apple’s monopoly over the App Store.

In an email, Apple spokeswoman Katie Clark-AlSadder said: “Apple conducts routine and ongoing audits of the information provided and we work with developers to correct any inaccuracies. Apps that fail to disclose privacy information accurately may have future app updates rejected, or in some cases, be removed from the App Store entirely if they don’t come into compliance.”

My spot checks suggest Apple isn’t being very effective.

And even when they are filled out correctly, what are Apple’s privacy labels allowing apps to get away with not telling us?

Trust but verify

A tip from a tech-savvy Washington Post reader helped me realize something smelled fishy. He was using a journaling app that claimed not to collect any data but, using some technical tools, he spotted it talking an awful lot to Google.

[…]

To be clear, I don’t know exactly how widespread the falsehoods are on Apple’s privacy labels. My sample wasn’t necessarily representative: There are about 2 million apps, and some big companies, like Google, have yet to even post labels. (They’re only required to do so with new updates.) About 1 in 3 of the apps I checked that claimed they took no data appeared to be inaccurate. “Apple is the only one in a position to do this on all the apps,” says Jackson.

But if a journalist and a talented geek could find so many problems just by kicking over a few stones, why isn’t Apple?

Even after I sent it a list of dubious apps, Apple wouldn’t answer my specific questions, including: How many bad apps has it caught? If being inaccurate means you get the boot, why are some of the ones I flagged still available?

[…]

We need help to fend off the surveillance economy. Apple’s App Store isn’t doing enough, but we also have no alternative. Apple insists on having a monopoly in running app stores for iPhones and iPads. In testimony to Congress about antitrust concerns last summer, Apple CEO Tim Cook argued that Apple alone can protect our security.

Other industries that make products that could harm consumers don’t necessarily get to write the rules for themselves. The Food and Drug Administration sets the standards for nutrition labels. We can debate whether it’s good at enforcement, but at least when everyone has to work with the same labels, consumers can get smart about reading them — and companies face the penalty of law if they don’t tell the truth.

Apple’s privacy labels are not only an unsatisfying product. They should also send a message to lawmakers weighing whether the tech industry can be trusted to protect our privacy on its own.

Source: I checked Apple’s new privacy ‘nutrition labels.’ Many were false.

How to Restore Recently Deleted Instagram Posts – because deleted means: stored somewhere you can’t get at them

Instagram is adding a new “Recently deleted” folder to the app’s menu that temporarily stores posts after you remove them from your profile or archive, giving you the ability to restore deleted posts if you change your mind.

The folder includes sections for photos, IGTV, Reels, and Stories posts. No one else can see your recently deleted posts, but as long as a photo or video is still in the folder, it can be restored. Regular photos, IGTV videos, and Reels remain in the folder for up to 30 days, after which they’re gone forever. Stories stick around for up to 24 hours before they’re permanently removed, but you can still access them in your Stories archive.

[…]

Source: How to Restore Recently Deleted Instagram Posts

It’s nice how they’re framing the fact that they don’t delete your data as a “feature”

Amazon Plans to Install Creepy Always-On Surveillance Cameras in Delivery Vans

Not content to only wield its creepy surveillance infrastructure against warehouse workers and employees considering unionization, Amazon is reportedly gearing up to install perpetually-on cameras inside its fleet of delivery vehicles as well.

A new report from The Information claims that Amazon recently shared the plans in an instructional video sent out to the contractor workers who drive the Amazon-branded delivery vans.

In the video, the company reportedly explains to drivers that the high-tech video cameras will use artificial intelligence to determine when drivers are engaging in risky behavior, and will give out verbal warnings including “Distracted driving,” “No stop detected” and “Please slow down.”

According to a video posted to Vimeo a week ago, the hardware and software for the cameras will be provided through a partnership with California-based company Netradyne, which is also responsible for a platform called Driveri that similarly uses artificial intelligence to analyze a driver’s behavior as they operate a vehicle.

While the camera’s automated feedback will be immediate, other data will also reportedly be stored for later analysis that will help the company to evaluate its fleet of drivers.

Although it’s not clear when Amazon plans to install the cameras or how many of the vehicles in the company’s massive fleet will be outfitted with them, the company told The Information in a statement that the software will be implemented in the spirit of increasing safety precautions and not, you know, bolstering an insidious and growing surveillance apparatus.

Source: Amazon Plans to Install Always-On Surveillance Cameras in Delivery Vans

ProtonMail, Tutanota among authors of letter urging EU to reconsider encryption rules

Encrypted service providers are urging lawmakers to back away from a controversial plan that critics say would undercut effective data protection measures.

ProtonMail, Threema, Tresorit and Tutanota — all European companies that offer some form of encrypted services — issued a joint statement this week declaring that a resolution the European Council adopted on Dec. 14 is ill-advised. That measure calls for “security through encryption and security despite encryption,” which technologists have interpreted as a threat to end-to-end encryption. In recent months governments around the world, including the U.S., U.K., Australia, New Zealand, Canada, India and Japan, have been reigniting conversations about law enforcement officials’ interest in bypassing encryption, as they have sporadically done for years.

In a letter that will be sent to council members on Thursday, the authors write that the council’s stated goal of endorsing encryption, and the council’s argument that law enforcement authorities must rely on accessing electronic evidence “despite encryption,” contradict one another. The advancement of legislation that forces technology companies to guarantee police investigators a way to intercept user messages, for instance, repeatedly has been scrutinized by technology leaders who argue there is no way to stop such a tool from being abused.

The resolution “will threaten the basic rights of millions of Europeans and undermine a global shift towards adopting end-to-end encryption,” say the companies, which offer users either encrypted email, file-sharing or messaging.

“[E]ncryption is an absolute, data is either encrypted or it isn’t, users have privacy or they don’t,” the letter, which was shared with CyberScoop in advance, states. “The desire to give law enforcement more tools to fight crime is obviously understandable. But the proposals are the digital equivalent of giving law enforcement a key to every citizens’ home and might begin a slippery slope towards greater violations of personal privacy.”

[…]

Source: ProtonMail, Tutanota among authors of letter urging EU to reconsider encryption rules

Firefox 85 removes support for Flash and adds protection against supercookies

Mozilla has released Firefox 85, ending support for the Adobe Flash Player plugin and bringing in ways to block supercookies to enhance user privacy. Mozilla, in a blog post, noted that supercookies store user identifiers and are much more difficult to delete and block. It further noted that the changes it is making through network partitioning in Firefox 85 will “reduce the effectiveness of cache-based supercookies by eliminating a tracker’s ability to use them across websites.”

“Trackers can abuse caches to create supercookies and can use connection identifiers to track users. But by isolating caches and network connections to the website they were created on, we make them useless for cross-site tracking,” Mozilla noted.
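
One well-known cache-based trick is the ETag supercookie. A sketch of the server-side logic (a generic illustration of the technique, not something from Mozilla’s post); per-site partitioning defeats it because each top-level site gets its own cached copy of the resource:

    # Sketch of a cache-based "supercookie" built on HTTP ETags -- a known
    # tracking technique, with hypothetical server logic. The server mints
    # a unique ETag per browser; a *shared* cache echoes it back on every
    # later request from any site embedding the resource, so the ETag acts
    # as a cross-site identifier.
    import secrets

    seen_browsers: dict[str, int] = {}

    def serve_tracking_pixel(if_none_match):
        if if_none_match in seen_browsers:
            seen_browsers[if_none_match] += 1    # recognised a returning browser
            return 304, if_none_match, b""       # Not Modified keeps the tag alive
        etag = secrets.token_hex(8)              # fresh identifier for a new browser
        seen_browsers[etag] = 1
        return 200, etag, b"<1x1 pixel bytes>"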

It explained that network partitioning works by splitting the Firefox browser cache on a per-website basis, a technical solution that prevents websites from tracking users as they move across the web. Mozilla also noted that removing support for Flash had little impact on page load times. The development was first reported by ZDNet.

[…]

Source: Firefox 85 removes support for Flash and adds protection against supercookies – Technology News

Indian government slams Facebook over WhatsApp ‘privacy’ update, wants its own Europe-style opt-out switch

The Indian government has sent a fierce letter to Facebook over its decision to update the privacy rules around its WhatsApp chat service, and asked the antisocial media giant to put a halt to the plans.

In an email from the IT ministry to WhatsApp head Will Cathcart, provided to media outlets, the Indian government notes that the proposed changes “raise grave concerns regarding the implications for the choice and autonomy of Indian citizens.”

In particular, the ministry is incensed that European users will be given a choice to opt out of sharing WhatsApp data with the larger Facebook empire, as well as businesses using the platform to communicate with customers, while Indian users will not. “This differential and discriminatory treatment of Indian and European users is attracting serious criticism and betrays a lack of respect for the rights and interest of Indian citizens who form a substantial portion of WhatsApp’s user base,” the letter says. It concludes by asking WhatsApp to “withdraw the proposed changes.”

The reason that Europe is being treated as a special case by Facebook is, of course, the existence of the GDPR privacy rules that Facebook has repeatedly flouted and as a result faces pan-European legal action.

Source: Indian government slams Facebook over WhatsApp ‘privacy’ update, wants its own Europe-style opt-out switch • The Register

AI upstart stealing facial data told to delete data and algorithms

Everalbum, a consumer photo app maker that shut down on August 31, 2020, and has since relaunched as a facial recognition provider under the name Paravision, on Monday reached a settlement with the FTC over the 2017 introduction of a feature called “Friends” in its discontinued Ever app. The watchdog agency claims the app deployed facial recognition code to organize users’ photos by default, without permission.

According to the FTC, between July 2018 and April 2019, Everalbum told people that it would not employ facial recognition on users’ content without consent. The company allegedly let users in certain regions – Illinois, Texas, Washington, and the EU – make that choice, but automatically activated the feature for those located elsewhere.

The agency further claims that Everalbum’s use of facial recognition went beyond supporting the Friends feature. The company is alleged to have combined images of users’ faces with facial images from other sources to create four datasets that informed its facial recognition technology, which became the basis of a face detection service for enterprise customers.

The company also is said to have told consumers using its app that it would delete their data if they deactivated their accounts, but didn’t do so until at least October 2019.

The FTC, in announcing the case and its settlement, said Everalbum/Paravision will be required to delete: photos and videos belonging to Ever app users who deactivated their accounts; all face embeddings – vector representations of facial features – from users who did not grant consent; and “any facial recognition models or algorithms developed with Ever users’ photos or videos.”
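
For context on the terminology: a face embedding is a fixed-length vector output by a model, and recognition systems typically compare two embeddings with a distance or similarity measure. A minimal illustration (random vectors stand in for model outputs; the threshold is arbitrary, and this is not Paravision’s pipeline):

    # What a "face embedding" is: a fixed-length vector, compared here with
    # cosine similarity. Random vectors stand in for real model outputs.
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    embedding_a = np.random.rand(128)   # stand-in for model(photo_a)
    embedding_b = np.random.rand(128)   # stand-in for model(photo_b)

    same_person = cosine_similarity(embedding_a, embedding_b) > 0.8  # arbitrary cutoff
    print(same_person)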

The FTC has not done this in past privacy cases with technology companies. According to FTC Commissioner Rohit Chopra, when Google and YouTube agreed to pay $170m over allegations the companies had collected data from children without parental consent, the FTC settlement “allowed Google and YouTube to profit from its conduct, even after paying a civil penalty.”

Likewise, when the FTC voted to approve a settlement with Facebook over claims it had violated its 2012 privacy settlement agreement, he said, Facebook did not have to give up any of its facial recognition technology or data.

“Commissioners have previously voted to allow data protection law violators to retain algorithms and technologies that derive much of their value from ill-gotten data,” said Chopra in a statement [PDF]. “This is an important course correction.”

[…]

Source: Privacy pilfering project punished by FTC purge penalty: AI upstart told to delete data and algorithms • The Register

NYPD posts surveillance systems and use and requests comments

Beginning January 11, 2021, draft surveillance technology impact and use policies will be posted on the Department’s website. Members of the public are invited to review the impact and use policies and provide feedback on their contents. The impact and use policies provide details of:

  1. the capabilities of the Department’s surveillance technologies,
  2. the rules regulating the use of the technologies,
  3. protections against unauthorized access of the technologies or related data,
  4. surveillance technologies data retention policies,
  5. public access to surveillance technologies data,
  6. external entity access to surveillance technologies data,
  7. Department trainings in the use of surveillance technologies,
  8. internal audit and oversight mechanisms of surveillance technologies,
  9. health and safety reporting on the surveillance technologies, and
  10. potential disparate impacts of the impact and use policies for surveillance technologies.

Source: Draft Policies for Public Comment

WhatsApp delays enforcement of privacy terms by 3 months, following backlash

WhatsApp said on Friday that it won’t enforce the planned update to its data-sharing policy until May 15, weeks after news about the new terms created confusion among its users, exposed the Facebook app to a potential lawsuit, triggered a nationwide investigation and drove tens of millions of its loyal fans to explore alternative messaging apps.

“We’re now moving back the date on which people will be asked to review and accept the terms. No one will have their account suspended or deleted on February 8. We’re also going to do a lot more to clear up the misinformation around how privacy and security works on WhatsApp. We’ll then go to people gradually to review the policy at their own pace before new business options are available on May 15,” the firm said in a blog post.

Source: WhatsApp delays enforcement of privacy terms by 3 months, following backlash | TechCrunch

I’m pretty sure there is no confusion. People just don’t want all their data shared with Facebook when they were promised it wouldn’t be. So they are leaving for Signal and Telegram.

Turkey launches antitrust probe into WhatsApp and Facebook over the new privacy policy. Still can’t export Whatsapp chats.

WhatsApp updated its privacy policy at the turn of the new year. Users were notified via a popup message upon opening the app that their data would now be shared with Facebook and other companies come February 8. Due to Facebook’s notorious history with user data and privacy, the new update has since garnered criticism, with many people migrating to alternative messaging apps like Signal and Telegram. Microsoft entered the playing field too, recommending users switch to Skype in place of the Facebook-owned WhatsApp.

In the latest development, Turkey has launched an antitrust probe into Facebook and WhatsApp over the updated privacy policy. Bloomberg reports that:

Turkey’s antitrust board launched an investigation into Facebook Inc. and its messaging service WhatsApp Inc. over new usage terms that have sparked privacy concerns.

[…]

The regulator also said on Monday that it was halting implementation of the terms. The new terms would result in “more data being collected, processed and used by Facebook,” according to the statement.

Source: Turkey launches antitrust probe into WhatsApp and Facebook over the new privacy policy – Neowin

Every Deleted Parler Post, Many With Users’ Location Data, Has Been Archived. Parler goes down. Still can’t export your Whatsapp history.

In the wake of the violent insurrection at the U.S. Capitol by scores of President Trump’s supporters, a lone researcher began an effort to catalogue the posts of social media users across Parler, a platform founded to provide conservative users a safe haven for uninhibited “free speech” — but which ultimately devolved into a hotbed of far-right conspiracy theories, unchecked racism, and death threats aimed at prominent politicians.

The researcher, who asked to be referred to by her Twitter handle, @donk_enby, began with the goal of archiving every post from January 6, the day of the Capitol riot – what she called a bevy of “very incriminating” evidence. According to the Atlantic Council’s Digital Forensic Research Lab, among other sources, Parler is one of several apps used by the insurrectionists to coordinate their breach of the Capitol, in a plan to overturn the 2020 election results and keep Donald Trump in power.

Five people died in the attempt.

Hoping to create a lasting public record for future researchers to sift through, @donk_enby began by archiving the posts from that day. The scope of the project quickly broadened, however, as it became increasingly clear that Parler was on borrowed time. Apple and Google announced that Parler would be removed from their app stores because it had failed to properly moderate posts that encouraged violence and crime. The final nail in the coffin came Saturday when Amazon announced it was pulling Parler’s plug.

In an email first obtained by BuzzFeed News, Amazon officials told the company they planned to boot it from its cloud hosting service, Amazon Web Services, saying it had witnessed a “steady increase” in violent content across the platform. “It’s clear that Parler does not have an effective process to comply with the AWS terms of service,” the email read.

Operating on little sleep, @donk_enby began the work of archiving all of Parler’s posts, ultimately capturing around 99.9 percent of its content. In a tweet early Sunday, @donk_enby said she was crawling some 1.1 million Parler video URLs. “These are the original, unprocessed, raw files as uploaded to Parler with all associated metadata,” she said. Included in this data tranche, now more than 56 terabytes in size, @donk_enby confirmed that the raw video files include GPS metadata pointing to exact locations of where the videos were taken.
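
Raw video files can carry GPS coordinates in their metadata, which is what makes this tranche so sensitive. A generic way to read them out is the exiftool command-line tool; a sketch assuming exiftool is installed, with a hypothetical filename (this is not the archivist’s actual tooling):

    # Read GPS coordinates out of a video file's metadata with exiftool.
    # Generic illustration; assumes the exiftool CLI is installed.
    import json
    import subprocess

    def gps_from_video(path: str):
        out = subprocess.run(
            ["exiftool", "-json", "-n", "-GPSLatitude", "-GPSLongitude", path],
            capture_output=True, text=True, check=True,
        ).stdout
        meta = json.loads(out)[0]
        return meta.get("GPSLatitude"), meta.get("GPSLongitude")

    print(gps_from_video("parler_video.mp4"))   # hypothetical file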

@donk_enby later shared a screenshot showing the GPS position of a particular video, with coordinates in latitude and longitude.

The privacy implications are obvious, but the copious data may also serve as a fertile hunting ground for law enforcement. Federal and local authorities have arrested dozens of suspects in recent days accused of taking part in the Capitol riot, where a Capitol police officer, Brian Sicknick, was fatally wounded after being struck in the head with a fire extinguisher.

[…]

Kirtaner, creator of 420chan — a.k.a. Aubrey Cottle — reported obtaining 6.3 GB of Parler user data from an unsecured AWS server in November. The leak reportedly contained passwords, photos and email addresses from several other companies as well. Parler CEO John Matze later claimed to Business Insider that the data contained only “public information” about users, which had been improperly stored by an email vendor whose contract was subsequently terminated over the leak. (This leak is separate from the debunked claim that Parler was “hacked” in late November, proof of which was determined to be fake.)

In December, Twitter suspended Kirtaner for tweeting, “I’m killing Parler and its fucking glorious,” citing its rules against threatening “violence against an individual or group of people.” Kirtaner’s account remains suspended despite an online campaign urging Twitter’s safety team to reverse its decision. Gregg Housh, an internet activist involved in many early Anonymous campaigns, noted online that the tweet was “not aimed at a person and [was] not actually violent.”

Source: Every Deleted Parler Post, Many With Users’ Location Data, Has Been Archived

ODoH: Cloudflare and Apple design a new privacy-friendly internet protocol for DNS

Engineers at Cloudflare and Apple say they’ve developed a new internet protocol that will shore up one of the biggest holes in internet privacy that many don’t know even exists. Dubbed Oblivious DNS-over-HTTPS, or ODoH for short, the new protocol makes it far more difficult for internet providers to know which websites you visit.

But first, a little bit about how the internet works.

Every time you go to visit a website, your browser uses a DNS resolver to convert web addresses to machine-readable IP addresses in order to locate where a web page is hosted on the internet. But this process is not encrypted, meaning that every time you load a website the DNS query is sent in the clear. That means the DNS resolver — which might be your internet provider unless you’ve changed it — knows which websites you visit. That’s not great for your privacy, especially since your internet provider can also sell your browsing history to advertisers.

Recent developments like DNS-over-HTTPS (or DoH) have added encryption to DNS queries, making it harder for attackers to hijack DNS queries and point victims to malicious websites instead of the real website you wanted to visit. But that still doesn’t stop the DNS resolvers from seeing which website you’re trying to visit.

Enter ODoH, which builds on previous work by Princeton academics. In simple terms, ODoH decouples DNS queries from the internet user, preventing the DNS resolver from knowing which sites you visit.

Here’s how it works: ODoH wraps a layer of encryption around the DNS query and passes it through a proxy server, which acts as a go-between for the internet user and the website they want to visit. Because the DNS query is encrypted, the proxy can’t see what’s inside, but acts as a shield to prevent the DNS resolver from seeing who sent the query to begin with.
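
A conceptual sketch of that separation of knowledge, using illustrative framing and library choices rather than the real ODoH wire format (the actual protocol is built on HPKE):

    # Conceptual sketch of ODoH's separation of knowledge. NOT the real
    # wire format; framing, names and crypto choices are illustrative.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # The resolver publishes a public key; only it holds the private half.
    resolver_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    resolver_pub = resolver_priv.public_key()

    def resolver(sealed_query: bytes) -> bytes:
        # Sees the question, but only ever talks to the proxy, not the client.
        question, response_key = resolver_priv.decrypt(sealed_query, OAEP).split(b"|", 1)
        nonce = os.urandom(12)
        answer = b"93.184.216.34"        # stand-in for a real DNS lookup
        return nonce + AESGCM(response_key).encrypt(nonce, answer, None)

    def proxy(sealed_query: bytes) -> bytes:
        # Sees who is asking, but only ciphertext; it just relays both ways.
        return resolver(sealed_query)

    # Client: bundle the question with a fresh response key, sealed to the
    # resolver. (The question contains no b"|", so the split above is safe.)
    response_key = AESGCM.generate_key(bit_length=128)
    sealed = resolver_pub.encrypt(b"example.com A?|" + response_key, OAEP)

    reply = proxy(sealed)
    nonce, ciphertext = reply[:12], reply[12:]
    print(AESGCM(response_key).decrypt(nonce, ciphertext, None))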

“What ODoH is meant to do is separate the information about who is making the query and what the query is,” said Nick Sullivan, Cloudflare’s head of research.

In other words, ODoH ensures that only the proxy knows the identity of the internet user and that the DNS resolver only knows the website being requested. Sullivan said that page loading times on ODoH are “practically indistinguishable” from DoH and shouldn’t cause any significant changes to browsing speed.

A key component of ODoH working properly is ensuring that the proxy and the DNS resolver never “collude,” in that the two are never controlled by the same entity, otherwise the “separation of knowledge is broken,” Sullivan said. That means having to rely on companies offering to run proxies.

Sullivan said a few partner organizations are already running proxies, allowing for early adopters to begin using the technology through Cloudflare’s existing 1.1.1.1 DNS resolver. But most will have to wait until ODoH is baked into browsers and operating systems before it can be used. That could take months or years, depending on how long it takes for ODoH to be certified as a standard by the Internet Engineering Task Force.

Source: Cloudflare and Apple design a new privacy-friendly internet protocol | TechCrunch

WhatsApp Has Shared Your Data With Facebook since 2016, actually.

Since Facebook acquired WhatsApp in 2014, users have wondered and worried about how much data would flow between the two platforms. Many of them experienced a rude awakening this week, as a new in-app notification raises awareness about a step WhatsApp actually took to share more with Facebook back in 2016.

On Monday, WhatsApp updated its terms of use and privacy policy, primarily to expand on its practices around how WhatsApp business users can store their communications. A pop-up has been notifying users that as of February 8, the app’s privacy policy will change and they must accept the terms to keep using the app. As part of that privacy policy refresh, WhatsApp also removed a passage about opting out of sharing certain data with Facebook: “If you are an existing user, you can choose not to have your WhatsApp account information shared with Facebook to improve your Facebook ads and products experiences.”

Some media outlets and confused WhatsApp users understandably assumed that this meant WhatsApp had finally crossed a line, requiring data-sharing with no alternative. But in fact the company says that the privacy policy deletion simply reflects how WhatsApp has shared data with Facebook since 2016 for the vast majority of its now 2 billion-plus users.

When WhatsApp launched a major update to its privacy policy in August 2016, it started sharing user information and metadata with Facebook. At that time, the messaging service offered its billion existing users 30 days to opt out of at least some of the sharing. If you chose to opt out at the time, WhatsApp will continue to honor that choice. The feature is long gone from the app settings, but you can check whether you’re opted out through the “Request account info” function in Settings.

Meanwhile, the billion-plus users WhatsApp has added since 2016, along with anyone who missed that opt-out window, have had their data shared with Facebook all this time. WhatsApp emphasized to WIRED that this week’s privacy policy changes do not actually impact WhatsApp’s existing practices or behavior around sharing data with Facebook.

[…]

None of this has at any point impacted WhatsApp’s marquee feature: end-to-end encryption. Messages, photos, and other content you send and receive on WhatsApp can only be viewed on your smartphone and the devices of the people you choose to message with. WhatsApp and Facebook itself can’t access your communications.

[…]

In practice, this means that WhatsApp shares a lot of intel with Facebook, including account information like your phone number, logs of how long and how often you use WhatsApp, information about how you interact with other users, device identifiers, and other device details like IP address, operating system, browser details, battery health information, app version, mobile network, language and time zone. Transaction and payment data, cookies, and location information are also all fair game to share with Facebook depending on the permissions you grant WhatsApp in the first place.

[…]

Source: WhatsApp Has Shared Your Data With Facebook for Years, Actually | WIRED

If you’re a WhatsApp user, you’ll have to share your personal data with Facebook from next month – and no, you can’t opt out this time

WhatsApp users must agree to share their personal information with Facebook if they want to continue using the messaging service from next month, according to new terms and conditions.

“As part of the Facebook Companies, WhatsApp receives information from, and shares information with, the other Facebook Companies,” its privacy policy, updated this week, stated.

“We may use the information we receive from them, and they may use the information we share with them, to help operate, provide, improve, understand, customize, support, and market our Services and their offerings, including the Facebook Company Products.”

Yes, said information includes your personal information. Thus, in other words, WhatsApp users must allow their personal info to be shared with Facebook and its subsidiaries as and when decided by the tech giant. Presumably, this is to serve personalized advertising.

If you’re a user today, you have two choices: accept this new arrangement, or stop using the end-to-end encrypted chat app (and use something else, like Signal.) The changes are expected to take effect on February 8.

When WhatsApp was acquired by Facebook in 2014, it promised netizens that its instant-messaging app would not collect names, addresses, internet searches, or location data. CEO Jan Koum wrote in a blog post: “Above all else, I want to make sure you understand how deeply I value the principle of private communication. For me, this is very personal. I was born in Ukraine, and grew up in the USSR during the 1980s.

“One of my strongest memories from that time is a phrase I’d frequently hear when my mother was talking on the phone: ‘This is not a phone conversation; I’ll tell you in person.’ The fact that we couldn’t speak freely without the fear that our communications would be monitored by KGB is in part why we moved to the United States when I was a teenager.”

Two years later, however, that vow was eroded by, well, capitalism, and WhatsApp decided it would share its users’ information with Facebook though only if they consented. That ability to opt-out, however, will no longer be an option from next month. Koum left in 2018.

That means users who wish to keep using WhatsApp must be prepared to give up personal info such as their names, profile pictures, status updates, phone numbers, contacts lists, and IP addresses, as well as data about their mobile devices, such as model numbers, operating system versions, and network carrier details, to the mothership. If users engage with businesses via the app, order details such as shipping addresses and the amount of money spent can be passed to Facebook, too.

Source: If you’re a WhatsApp user, you’ll have to share your personal data with Facebook from next month – and no, you can’t opt out this time • The Register

Singapore police can now access data from the country’s contact tracing app

With a nearly 80 percent uptake among the country’s population, Singapore’s TraceTogether app is one of the best examples of what a successful centralized contact tracing effort can look like as countries across the world struggle to contain the coronavirus pandemic. To date, more than 4.2 million people in Singapore have downloaded the app or obtained the wearable the government has offered to people.

In contrast to Apple’s and Google’s Exposure Notifications System — which powers the majority of COVID-19 apps out there, including ones put out by states and countries like California and Germany — Singapore’s TraceTogether app and wearable uses the country’s own internally developed BlueTrace protocol. The protocol relies on a centralized reporting structure wherein a user’s entire contact log is uploaded to a server administered by a government health authority. Outside of Singapore, only Australia has so far adopted the protocol.
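
The architectural difference matters for what a government can later demand. A toy contrast, with made-up function shapes rather than the real BlueTrace or Exposure Notifications protocols:

    # Toy contrast between centralized (BlueTrace-style) and decentralized
    # (Exposure Notifications-style) contact tracing. Function shapes are
    # invented for illustration; neither protocol is reproduced here.

    # Centralized: on diagnosis, the user's whole encounter log is uploaded,
    # and the health authority can map temp IDs back to identities -- so the
    # server ends up holding a social graph that later lawful-access demands
    # (like Singapore's CPC powers) can reach.
    def centralized_report(encounter_log: list[str], authority_db: list[str]) -> None:
        authority_db.extend(encounter_log)     # the server learns who met whom

    # Decentralized: only the diagnosed user's own daily keys are published;
    # each phone re-derives rolling IDs locally and checks them against what
    # it overheard, so no server ever sees anyone's contact graph.
    def decentralized_exposed(published_keys, heard_ids, derive_ids) -> bool:
        return any(rolling_id in heard_ids
                   for key in published_keys
                   for rolling_id in derive_ids(key))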

In an update the government made to the platform’s privacy policy on Monday, it added a paragraph about how police can use data collected through the platform. “TraceTogether data may be used in circumstances where citizen safety and security is or has been affected,” the new paragraph states. “Authorized Police officers may invoke Criminal Procedure Code (CPC) powers to request users to upload their TraceTogether data for criminal investigations.”

Previous versions of the privacy policy made no mention of the fact police could access any data collected by the app; in fact, the website used to say, “data will only be used for COVID-19 contact tracing.” The government added the paragraph after Singapore’s opposition party asked the Minister of State for Home Affairs if police could use the data for criminal investigations. “We do not preclude the use of TraceTogether data in circumstances where citizens’ safety and security is or has been affected, and this applies to all other data as well,” said Minister Desmond Tan.

What’s happening in Singapore is an example of the exact type of potential privacy nightmare that experts warned might happen with centralized digital contact tracing efforts. Worse, a loss of trust in the privacy of data could push people further away from contact tracing efforts altogether, putting everyone at more risk.

Source: Singapore police can access data from the country’s contact tracing app | Engadget

Access To Big Data Turns Farm Machine Makers Into Tech Firms

The combine harvester, a staple of farmers’ fields since the late 1800s, does much more these days than just vacuum up corn, soybeans and other crops. It also beams back reams of data to its manufacturer.

GPS records the combine’s precise path through the field. Sensors tally the number of crops gathered per acre and the spacing between them. On a sister machine called a planter, algorithms adjust the distribution of seeds based on which parts of the soil have in past years performed best. Another machine, a sprayer, uses algorithms to scan for weeds and zap them with pesticides. All the while sensors record the wear and tear on the machines, so that when the farmer who operates them heads to the local distributor to look for a replacement part, it has already been ordered and is waiting for them.

Farming may be an earthy industry, but much of it now takes place in the cloud. Leading farm machine makers like Illinois-based John Deere & Co. or Duluth’s AGCO collect data from all around the world thanks to the ability of their bulky machines to extract a huge variety of metrics from farmers’ fields and store them online. The farmers who sit in the driver’s seats of these machines have access to the data that they themselves accumulate, but legal murk obfuscates the question of whether they actually own that data, and only the machine manufacturer can see all the data from all the machines it has leased or sold.

[…]

Still, farmers have yet to be fully won over. Many worry that by allowing the transfer of their data to manufacturers, it will inadvertently wind up in the hands of neighboring farmers with whom they compete for scarce land, who could then mine their closely guarded information about the number of acres they plow or the types of fertilizers and pesticides they use, thus gaining a competitive edge. Others fear that information about the type of seeds or fertilizer they use will wind up in the hands of the chemicals companies they buy from, allowing those companies to anticipate their product needs and charge them more, said Jonathan Coppess, a professor at the University of Illinois.

Sensitive to the suggestion that they are infringing on privacy, the largest equipment makers say they don’t share farmers’ data with third parties unless farmers give permission. (Farmers frequently agree to share data with, for example, their local distributors and dealers.)

It’s common to hear that farmers are, by nature, highly protective of their land and business, and that this predisposes them to worry about sharing data even when there are more potential benefits than drawbacks. Still, the concerns are at least partly the result of a lack of legal and regulatory standards around the collection of data from smart farming technologies, observers say. Contracts to buy or rent big machines are many pages long and the language unclear, especially since some of the underlying legal concepts regarding the sharing and collecting of agricultural data are still evolving.

As one 2019 paper puts it, “the lack of transparency and clarity around issues such as data ownership, portability, privacy, trust and liability in the commercial relationships governing smart farming are contributing to farmers’ reluctance to engage in the widespread sharing of their farm data that smart farming facilitates. At the heart of the concerns is the lack of trust between the farmers as data contributors, and those third parties who collect, aggregate and share their data.”

[…]

Some farmers may still find themselves surprised to discover the amount of access Deere and others have to their data. Jacob Maurer is an agronomist with RDO Equipment Co., a Deere dealer, who helps farmers understand how to use their data to work their fields more efficiently. He explained that some farmers would be shocked to learn how much information about their fields he can access by simply tapping into Deere’s vast online stores of data and pulling up their details.

[…]

Based on the mountains of data flowing into their databases, equipment makers with sufficient sales of machines around the country may in theory be able to predict, at least to some small but meaningful extent, the prices of various crops by analyzing the data their machines send in — such as “yields” of crops per acre, the amount of fertilizer used, or the average number of seeds of a given crop planted in various regions, all of which would help to anticipate the supply of crops come harvest season.

Were the company then to sell that data to a commodities trader, say, it could likely reap a windfall. Normally, the markets must wait for highly-anticipated government surveys to run their course before having an indication of the future supply of crops. The agronomic data that machine makers collect could offer similar insights but far sooner.

Machine makers don’t deny the obvious value of the data they collect. As AGCO’s Crawford put it: “Anybody that trades grains would love to have their hands on this data.”

Experts occasionally wonder about what companies could do with the data. Mary Kay Thatcher, a former official with the American Farm Bureau, raised just such a concern in an interview with National Public Radio in 2014, when questions about data ownership were swirling after Monsanto began deploying a new “precision planting” tool that required it to have gobs of data.

“They could actually manipulate the market with it. You know, they only have to know the information about what’s actually happening with harvest minutes before somebody else knows it,” Thatcher said in the interview.

“Not saying they will. Just a concern.”

Source: Access To Big Data Turns Farm Machine Makers Into Tech Firms

Firefox to ship ‘network partitioning’ as a new anti-tracking defense

Firefox 85, scheduled to be released next month, in January 2021, will ship with a feature named Network Partitioning as a new form of anti-tracking protection.

The feature is based on “Client-Side Storage Partitioning,” a new standard currently being developed by the World Wide Web Consortium’s Privacy Community Group.

“Network Partitioning is highly technical, but to simplify it somewhat; your browser has many ways it can save data from websites, not just via cookies,” privacy researcher Zach Edwards told ZDNet in an interview this week.

“These other storage mechanisms include the HTTP cache, image cache, favicon cache, font cache, CORS-preflight cache, and a variety of other caches and storage mechanisms that can be used to track people across websites.”

Edwards says all these data storage systems are shared among websites.

The difference is that Network Partitioning will allow Firefox to save resources like the cache, favicons, CSS files, images, and more, on a per-website basis, rather than together, in the same pool.

This makes it harder for websites and third-parties like ad and web analytics companies to track users since they can’t probe for the presence of other sites’ data in this shared pool.
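
In miniature, the change is just a different cache key. A toy illustration (not Firefox’s implementation) of why a shared pool can be probed across sites while a partitioned one cannot:

    # Toy model of cache partitioning -- illustrative, not Firefox's code.

    def download(url: str) -> str:
        return f"<contents of {url}>"    # stand-in for a real network fetch

    # Shared cache: keyed by resource URL alone. A tracker on site B can
    # time a fetch to learn whether site A already cached the resource, or
    # use the cached object itself as a cross-site identifier.
    shared_cache: dict[str, str] = {}

    def fetch_shared(resource_url: str) -> str:
        if resource_url not in shared_cache:
            shared_cache[resource_url] = download(resource_url)
        return shared_cache[resource_url]

    # Partitioned cache: keyed by (top-level site, resource URL). Each site
    # gets its own copy, so a probe from site B always misses regardless of
    # what the user did on site A.
    partitioned_cache: dict[tuple[str, str], str] = {}

    def fetch_partitioned(top_level_site: str, resource_url: str) -> str:
        key = (top_level_site, resource_url)
        if key not in partitioned_cache:
            partitioned_cache[key] = download(resource_url)
        return partitioned_cache[key]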

According to Mozilla, the following network resources will be partitioned starting with Firefox 85:

  • HTTP cache
  • Image cache
  • Favicon cache
  • Connection pooling
  • StyleSheet cache
  • DNS
  • HTTP authentication
  • Alt-Svc
  • Speculative connections
  • Font cache
  • HSTS
  • OCSP
  • Intermediate CA cache
  • TLS client certificates
  • TLS session identifiers
  • Prefetch
  • Preconnect
  • CORS-preflight cache

But while Mozilla will be deploying the broadest user data “partitioning system” to date, the Firefox creator isn’t the first.

Edwards said the first browser maker to do so was Apple, in 2013, when it began partitioning the HTTP cache, and then followed through by partitioning even more user data storage systems years later, as part of its Tracking Prevention feature.

Google also partitioned the HTTP cache last month, with the release of Chrome 86, and the results were felt right away: Google Fonts lost some of its performance benefit because fonts could no longer be served from a cross-site shared HTTP cache.

The Mozilla team expects similar performance issues for sites loaded in Firefox, but it’s willing to take the hit just to improve the privacy of its users.

“Most policy makers and digital strategists are focused on the death of the 3rd party cookie, but there are a wide variety of other fingerprinting techniques and user tracking strategies that need to be broken by browsers,” Edwards also told ZDNet, lauding Mozilla’s move.

PS: Mozilla also said that a side-effect of deploying Network Partitioning is that Firefox 85 will finally be able to block “supercookies” better, a type of browser cookie file that abuses various shared storage mediums to persist in browsers and allow advertisers to track user movements across the web.

Source: Firefox to ship ‘network partitioning’ as a new anti-tracking defense | ZDNet

Should We Use Search History for Credit Scores? IMF Says Yes

With more services than ever collecting your data, it’s easy to start asking why anyone should care about most of it. This is why. Because people start having ideas like this.

In a new blog post for the International Monetary Fund, four researchers presented their findings from a working paper that examines the current relationship between finance and tech as well as its potential future. Gazing into their crystal ball, the researchers see the possibility of using the data from your browsing, search, and purchase history to create a more accurate mechanism for determining the credit rating of an individual or business. They believe that this approach could result in greater lending to borrowers who would potentially be denied by traditional financial institutions.

At its heart, the paper is trying to wrestle with the dawning notion that the institutional banking system is facing a serious threat from tech companies like Google, Facebook, and Apple. The researchers identify two key areas in which this is true: Tech companies have greater access to soft-information, and messaging platforms can take the place of the physical locations that banks rely on for meeting with customers.

[…]

But how would all this data be incorporated into credit ratings? Machine learning, of course. It’s black boxes all the way down.
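
Mechanically, “incorporating soft data” would just mean fitting a model on behavioral features. A deliberately crude sketch with invented features and toy labels, making no claim to resemble any real scoring model:

    # Crude sketch of "soft data" credit scoring: hypothetical browsing and
    # purchase features, invented repayment labels, plain logistic regression.
    # The point is only that behavioral data feeds a classifier like any other.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # columns: [price_comparison_visits, late_night_sessions, avg_cart_value]
    X = np.array([
        [12,  2,  54.0],
        [ 1,  9, 310.0],
        [ 7,  4,  88.0],
        [ 0, 11, 420.0],
    ])
    y = np.array([1, 0, 1, 0])    # 1 = repaid, 0 = defaulted (invented labels)

    model = LogisticRegression().fit(X, y)
    print(model.predict_proba([[5, 3, 120.0]])[:, 1])   # P(repaid), new applicant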

The researchers acknowledge that there will be privacy and policy concerns related to incorporating this kind of soft-data into credit analysis. And they do little to explain how this might work in practice. The paper isn’t long, and it’s worth a read just to wrap your mind around some of the notions of fintech’s future and why everyone seems to want in on the payments game.

As it is, getting the really fine soft-data points would probably require companies like Facebook and Apple to loosen up their standards on linking unencrypted information with individual accounts. How they might share information with other institutions would be its own can of worms.

[…]

Yes, the idea of every move you make online feeding into your credit score is creepy. It may not even be possible in the near future. The IMF researchers stress that “governments should follow and carefully support the technological transition in finance. It is important to adjust policies accordingly and stay ahead of the curve.” When’s the last time a government did any of that?

Source: Should We Use Search History for Credit Scores? IMF Says Yes

France fines Google $120M and Amazon $42M for dropping tracking cookies without consent

France’s data protection agency, the CNIL, has slapped Google and Amazon with fines for dropping tracking cookies without consent.

Google has been hit with a total of €100 million ($120 million) for dropping cookies on Google.fr, and Amazon with €35 million (~$42 million) for doing so on the Amazon.fr domain, under the penalty notices issued today.

The regulator carried out investigations of the websites over the past year and found tracking cookies were automatically dropped when a user visited the domains in breach of the country’s Data Protection Act.

In Google’s case the CNIL has found three consent violations related to dropping non-essential cookies.

“As this type of cookies cannot be deposited without the user having expressed his consent, the restricted committee considered that the companies had not complied with the requirement provided for by article 82 of the Data Protection Act and the prior collection of the consent before the deposit of non-essential cookies,” it writes in the penalty notice [which we’ve translated from French].

Amazon was found to have made two violations, per the CNIL penalty notice.

CNIL also found that the information about the cookies provided to site visitors was inadequate — noting that a banner displayed by Google did not provide specific information about the tracking cookies the Google.fr site had already dropped.

Under local French (and European) law, site users should have been clearly informed before the cookies were dropped and asked for their consent.

In Amazon’s case its French site displayed a banner informing arriving visitors that they agreed to its use of cookies. CNIL said this did not comply with transparency or consent requirements — since it was not clear to users that the tech giant was using cookies for ad tracking. Nor were users given the opportunity to consent.

The law on tracking cookie consent has been clear in Europe for years. But in October 2019 a CJEU ruling further clarified that consent must be obtained prior to storing or accessing non-essential cookies. As we reported at the time, sites that failed to ask for consent to track were risking a big fine under EU privacy laws.

Source: France fines Google $120M and Amazon $42M for dropping tracking cookies without consent | TechCrunch

As if Productivity Score wasn’t creepy enough, Microsoft has patented tech for ‘meeting quality monitoring devices’ – PS is being defanged though

The slightly creepy “Productivity Score” may not be all that’s in store for Microsoft 365 users, judging by a trawl of Redmond’s patents.

One that has popped up recently concerns a “Meeting Insight Computing System”, spotted first by GeekWire, created to give meetings a quality score with a view to improving upcoming get-togethers.

It all sounds innocent enough until you read about the requirement for “quality parameters” to be collected from “meeting quality monitoring devices”, which might give some pause for thought.

Productivity Score relies on metrics captured within Microsoft 365 to assess how productive a company and its workers are. Metrics include the take-up of messaging platforms versus email. And though Microsoft has been quick to insist the motives behind the tech are pure, others have cast more of a jaundiced eye over the technology.

[…]

Meeting Insights would take things further by plugging data from a variety of devices into an algorithm in order to score the meeting. Sampling of environmental data such as air quality and the like is all well and good, but proposed sensors such as “a microphone that may, for instance, detect speech patterns consistent with boredom, fatigue, etc” as well as measuring other metrics, such as how long a person spends speaking, could also provide data to be stirred into the mix.

And if that doesn’t worry attendees, how about some more metrics to measure how focused a person is? Are they taking care of emails, messaging or enjoying a surf of the internet when they should be paying attention to the speaker? Heck, if one is taking data from a user’s computer, one could even consider the physical location of the device.

[…]

Talking to The Reg, one privacy campaigner who asked to remain anonymous said of tools such as Productivity Score and the Meeting Insight Computing System patent: “There is a simple dictum in privacy: you cannot lose data you don’t have. In other words, if you collect it you have to protect it, and that sort of data is risky to start with.

“Who do you trust? The correct answer is ‘no one’.”

Source: As if Productivity Score wasn’t creepy enough, Microsoft has patented tech for ‘meeting quality monitoring devices’ • The Register

Since then, Microsoft has said it will remove user names from the ‘Productivity Score’ feature after a privacy backlash (GeekWire)

Microsoft says it will make changes in its new Productivity Score feature, including removing the ability for companies to see data about individual users, to address concerns from privacy experts that the tech giant had effectively rolled out a new tool for snooping on workers.

“Going forward, the communications, meetings, content collaboration, teamwork, and mobility measures in Productivity Score will only aggregate data at the organization level—providing a clear measure of organization-level adoption of key features,” wrote Jared Spataro, Microsoft 365 corporate vice president, in a post this morning. “No one in the organization will be able to use Productivity Score to access data about how an individual user is using apps and services in Microsoft 365.”

The company rolled out its new “Productivity Score” feature as part of Microsoft 365 in late October. It gives companies data to understand how workers are using and adopting different forms of technology. It made headlines over the past week as reports surfaced that the tool lets managers see individual user data by default.

As originally rolled out, Productivity Score turned Microsoft 365 into a “full-fledged workplace surveillance tool,” wrote Wolfie Christl of the independent Cracked Labs digital research institute in Vienna, Austria. “Employers/managers can analyze employee activities at the individual level (!), for example, the number of days an employee has been sending emails, using the chat, using ‘mentions’ in emails etc.”

The initial version of the Productivity Score tool allowed companies to see individual user data. (Screenshot via YouTube)

Spataro wrote this morning, “We appreciate the feedback we’ve heard over the last few days and are moving quickly to respond by removing user names entirely from the product. This change will ensure that Productivity Score can’t be used to monitor individual employees.”

Privacy campaigner flags concerns about Microsoft’s creepy Productivity Score now in 365

Microsoft’s Productivity Score has put in a public appearance in Microsoft 365 and attracted the ire of privacy campaigners and activists.

The Register had already noted the vaguely creepy-sounding technology back in May. The goal of it is to use telemetry captured by the Windows behemoth to track the productivity of an organisation through metrics such as a corporate obsession with interminable meetings or just how collaborative employees are being.

The whole thing sounds vaguely disturbing in spite of Microsoft’s insistence that it was for users’ own good.

As more details have emerged, so have concerns over just how granular the level of data capture is.

Vienna-based researcher (and co-creator of Data Dealer) Wolfie Christl suggested that the new feature “turns Microsoft 365 into a full-fledged workplace surveillance tool.”

Christl went on to claim that the software allows employers to dig into employee activities, checking the usage of email versus Teams and looking into email threads with @mentions. “This is so problematic at many levels,” he noted, adding: “Managers evaluating individual-level employee data is a no go,” and that there was a danger that evaluating “productivity” data could shift power from employees to organisations.

Earlier this year we put it to Microsoft corporate vice president Brad Anderson that employees might find themselves under the gimlet gaze of HR thanks to this data.

He told us: “There is no PII [personally identifiable information] data in there… it’s a valid concern, and so we’ve been very careful that as we bring that telemetry back, you know, we bring back what we need, but we stay out of the PII world.”

Microsoft did concede that there could be granularity down to the individual level, although exceptions could be configured. Melissa Grant, director of product marketing for Microsoft 365, told us that Microsoft had been asked if it was possible to use the tool to check, for example, that everyone was online and working by 8, but added: “We’re not in the business of monitoring employees.”

Christl’s concerns are not limited to the Productivity Score dashboard itself; he is also worried about what is going on behind the scenes in the form of the Microsoft Graph. The People API, for example, is a handy jumping-off point into all manner of employee data.
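The People API is public and documented; here is a minimal sketch in TypeScript of what a query against it looks like. Token acquisition (OAuth2/MSAL) is elided, and the `relevantPeople` wrapper is our own hypothetical helper:

```typescript
// Minimal sketch of a Microsoft Graph People API call. Assume
// `accessToken` holds a valid bearer token obtained elsewhere.

async function relevantPeople(accessToken: string): Promise<void> {
  // /me/people returns the people most relevant to the signed-in user,
  // ranked by Graph's own analysis of communication and collaboration.
  const res = await fetch("https://graph.microsoft.com/v1.0/me/people", {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) {
    throw new Error(`Graph request failed: ${res.status}`);
  }
  const body = await res.json();
  for (const person of body.value) {
    console.log(person.displayName, person.scoredEmailAddresses?.[0]?.address);
  }
}
```

Notably, with an admin-consented People.Read.All permission the same query can be pointed at any user in the tenant via /users/{id}/people, which is exactly the kind of reach behind Christl’s concern.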

For its part, Microsoft has continued to insist that Productivity Score is not a stick with which to bash employees. In a recent blog on the matter, the company stated:

To be clear, Productivity Score is not designed as a tool for monitoring employee work output and activities. In fact, we safeguard against this type of use by not providing specific information on individualized actions, and instead only analyze user-level data aggregated over a 28-day period, so you can’t see what a specific employee is working on at a given time. Productivity Score was built to help you understand how people are using productivity tools and how well the underlying technology supports them in this.

In an email to The Register, Christl retorted: “The system *does* clearly monitor employee activities. And they call it ‘Productivity Score’, which is perhaps misleading, but will make managers use it in a way managers usually use tools that claim to measure ‘productivity’.”

He added that Microsoft’s own promotional video for the technology showed a list of clearly identifiable users, which corporate veep Jared Spataro said enabled companies to “find your top communicators across activities for the last four weeks.”

We put Christl’s concerns to Microsoft and asked the company if its good intentions extended to the APIs exposed by the Microsoft Graph.

While it has yet to respond to worries about the APIs, it reiterated that the tool was compliant with privacy laws and regulations, telling us: “Productivity Score is an opt-in experience that gives IT administrators insights about technology and infrastructure usage.”

It added: “Insights are intended to help organizations make the most of their technology investments by addressing common pain points like long boot times, inefficient document collaboration, or poor network connectivity. Insights are shown in aggregate over a 28-day period and are provided at the user level so that an IT admin can provide technical support and guidance.”

Source: Privacy campaigner flags concerns about Microsoft’s creepy Productivity Score • The Register

IRS Contracted to Search Warrantless Location Database Over 10,000 Times

The IRS was able to query a database of location data quietly harvested from ordinary smartphone apps over 10,000 times, according to a copy of the contract between IRS and the data provider obtained by Motherboard.

The document provides more insight into what exactly the IRS wanted to do with a tool purchased from Venntel, a government contractor that sells clients access to a database of smartphone movements. The Inspector General is currently investigating the IRS for using the data without a warrant to try to track the location of Americans.

“This contract makes clear that the IRS intended to use Venntel’s spying tool to identify specific smartphone users using data collected by apps and sold onwards to shady data brokers. The IRS would have needed a warrant to obtain this kind of sensitive information from AT&T or Google,” Senator Ron Wyden told Motherboard in a statement after reviewing the contract.

[…]

Venntel sources its location data from gaming, weather, and other innocuous-looking apps. An aide to Senator Ron Wyden, whose office has been investigating the location data industry, previously told Motherboard that officials from Customs and Border Protection (CBP), which has also purchased Venntel products, said they believe Venntel also obtains location information from the real-time bidding that occurs when advertisers push their adverts into users’ browsing sessions.

One of the new documents says Venntel sources the location information from its “advertising analytics network and other sources.” Venntel is a subsidiary of advertising firm Gravy Analytics.

The data is “global,” according to a document obtained from CBP.

[…]

Source: IRS Could Search Warrantless Location Database Over 10,000 Times

GM launches OnStar Insurance Services – uses your driving data to calculate insurance rate

Andrew Rose, president of OnStar Insurance Services commented: “OnStar Insurance will promote safety, security and peace of mind. We aim to be an industry leader, offering insurance in an innovative way.

“GM customers who have subscribed to OnStar and connected services will be eligible to receive discounts, while also receiving fully-integrated services from OnStar Insurance Services.”

The service has been developed to improve the experience for policyholders who have an OnStar Safety & Security plan: its Automatic Crash Response is designed to notify an OnStar Emergency-certified Advisor, who can send for help.

OnStar Insurance Services says it is working with its insurance carrier partners to remove bias from insurance plans by focusing on factors within the customer’s control, such as individual vehicle usage, and by rewarding smart driving habits that benefit road safety.

OnStar Insurance Services plans to provide customers with personalised vehicle care and promote safer driving habits, along with a data-backed analysis of driving behaviour.
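GM has not published how driving behaviour will actually be scored, but usage-based insurance products of this kind typically boil telematics events down to a score that feeds a discount. A purely illustrative sketch in TypeScript; the event types, weights and thresholds are invented, not GM’s:

```typescript
// Illustrative usage-based insurance scoring. Event types, thresholds
// and weights are invented; GM has not published its actual model.

interface TripTelemetry {
  miles: number;
  harshBrakingEvents: number;
  speedingSeconds: number; // time spent above the posted limit
  nightMiles: number;      // miles driven between 11pm and 5am
}

// Score clamped to 0..100; higher is safer.
function drivingScore(trips: TripTelemetry[]): number {
  const totals = trips.reduce(
    (acc, t) => ({
      miles: acc.miles + t.miles,
      braking: acc.braking + t.harshBrakingEvents,
      speeding: acc.speeding + t.speedingSeconds,
      night: acc.night + t.nightMiles,
    }),
    { miles: 0, braking: 0, speeding: 0, night: 0 },
  );
  if (totals.miles === 0) return 100;
  const per100 = 100 / totals.miles; // normalise penalties per 100 miles
  const penalty =
    5 * totals.braking * per100 +
    0.05 * totals.speeding * per100 +
    2 * totals.night * per100;
  return Math.max(0, Math.min(100, Math.round(100 - penalty)));
}

// A premium discount might then be a simple function of the score:
const discount = (score: number): number =>
  score >= 80 ? 0.15 : score >= 60 ? 0.05 : 0;
```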

Source: General Motors launches OnStar Insurance Services – Reinsurance News

What it doesn’t say is whether the data could also be used to raise premiums or deny coverage entirely, how transparent the reward system will be, or what else they will be doing with your data.