The Linkielist

Linking ideas with the world


ProtonMail, Tutanota among authors of letter urging EU to reconsider encryption rules

Encrypted service providers are urging lawmakers to back away from a controversial plan that critics say would undercut effective data protection measures.

ProtonMail, Threema, Tresorit and Tutanota — all European companies that offer some form of encrypted services — issued a joint statement this week declaring that a resolution the European Council adopted on Dec. 14 is ill-advised. That measure calls for “security through encryption and security despite encryption,” which technologists have interpreted as a threat to end-to-end encryption. In recent months governments around the world, including the U.S., U.K., Australia, New Zealand, Canada, India and Japan, have been reigniting conversations about law enforcement officials’ interest in bypassing encryption, as they have sporadically done for years.

In a letter that will be sent to council members on Thursday, the authors write that the council’s stated goal of endorsing encryption, and the council’s argument that law enforcement authorities must rely on accessing electronic evidence “despite encryption,” contradict one another. The advancement of legislation that forces technology companies to guarantee police investigators a way to intercept user messages, for instance, repeatedly has been scrutinized by technology leaders who argue there is no way to stop such a tool from being abused.

The resolution “will threaten the basic rights of millions of Europeans and undermine a global shift towards adopting end-to-end encryption,” say the companies, which offer users either encrypted email, file-sharing or messaging.

“[E]ncryption is an absolute, data is either encrypted or it isn’t, users have privacy or they don’t,” the letter, which was shared with CyberScoop in advance, states. “The desire to give law enforcement more tools to fight crime is obviously understandable. But the proposals are the digital equivalent of giving law enforcement a key to every citizens’ home and might begin a slippery slope towards greater violations of personal privacy.”

[…]

Source: ProtonMail, Tutanota among authors of letter urging EU to reconsider encryption rules

Firefox 85 removes support for Flash and adds protection against supercookies

Mozilla has released Firefox 85, ending support for the Adobe Flash Player plugin and adding ways to block supercookies to enhance user privacy. Mozilla noted in a blog post that supercookies store user identifiers and are much more difficult to delete and block than ordinary cookies. It further noted that the changes it is making through network partitioning in Firefox 85 will “reduce the effectiveness of cache-based supercookies by eliminating a tracker’s ability to use them across websites.”

“Trackers can abuse caches to create supercookies and can use connection identifiers to track users. But by isolating caches and network connections to the website they were created on, we make them useless for cross-site tracking,” Mozilla noted.

It explained that network partitioning works by splitting the Firefox browser cache on a per-website basis, a technical solution that prevents websites from tracking users as they move across the web. Mozilla also noted that removing Flash support had little impact on page load times. The development was first reported by ZDNet.

[…]

Source: Firefox 85 removes support for Flash and adds protection against supercookies – Technology News

Indian government slams Facebook over WhatsApp ‘privacy’ update, wants its own Europe-style opt-out switch

The Indian government has sent a fierce letter to Facebook over its decision to update the privacy rules around its WhatsApp chat service, and asked the antisocial media giant to put a halt to the plans.

In an email from the IT ministry to WhatsApp head Will Cathcart, provided to media outlets, the Indian government notes that the proposed changes “raise grave concerns regarding the implications for the choice and autonomy of Indian citizens.”

In particular, the ministry is incensed that European users will be given a choice to opt out of sharing WhatsApp data with the larger Facebook empire, as well as with businesses using the platform to communicate with customers, while Indian users will not.

“This differential and discriminatory treatment of Indian and European users is attracting serious criticism and betrays a lack of respect for the rights and interest of Indian citizens who form a substantial portion of WhatsApp’s user base,” the letter says. It concludes by asking WhatsApp to “withdraw the proposed changes.”

The reason that Europe is being treated as a special case by Facebook is, of course, the existence of the GDPR privacy rules, which Facebook has repeatedly flouted and over which it faces pan-European legal action.

Source: Indian government slams Facebook over WhatsApp ‘privacy’ update, wants its own Europe-style opt-out switch • The Register

AI upstart stealing facial data told to delete data and algorithms

Everalbum, a consumer photo app maker that shut down on August 31, 2020, and has since relaunched as a facial recognition provider under the name Paravision, on Monday reached a settlement with the FTC over the 2017 introduction of a feature called “Friends” in its discontinued Ever app. The watchdog agency claims the app deployed facial recognition code to organize users’ photos by default, without permission.

According to the FTC, between July 2018 and April 2019, Everalbum told people that it would not employ facial recognition on users’ content without consent. The company allegedly let users in certain regions – Illinois, Texas, Washington, and the EU – make that choice, but automatically activated the feature for those located elsewhere.

The agency further claims that Everalbum’s use of facial recognition went beyond supporting the Friends feature. The company is alleged to have combined users’ faces with facial images from other sources to create four datasets that informed its facial recognition technology, which became the basis of a face detection service for enterprise customers.

The company also is said to have told consumers using its app that it would delete their data if they deactivated their accounts, but didn’t do so until at least October 2019.

The FTC, in announcing the case and its settlement, said Everalbum/Paravision will be required to delete: photos and videos belonging to Ever app users who deactivated their accounts; all face embeddings – vector representations of facial features – from users who did not grant consent; and “any facial recognition models or algorithms developed with Ever users’ photos or videos.”
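The “face embeddings” the order targets are just numeric vectors, and two photos are judged to show the same person when their vectors point in nearly the same direction. A minimal sketch of that comparison, with made-up toy vectors (real embeddings typically have 128 or more dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Compare two embedding vectors; values near 1.0 suggest the same face."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up 4-dimensional embeddings; real systems use 128+ dimensions.
alice_photo_1 = [0.12, -0.45, 0.88, 0.05]
alice_photo_2 = [0.10, -0.43, 0.90, 0.07]
bob_photo = [-0.70, 0.22, -0.10, 0.65]

assert cosine_similarity(alice_photo_1, alice_photo_2) > 0.99   # same person
assert cosine_similarity(alice_photo_1, bob_photo) < 0.5        # different
```

This is why the order covers models as well as photos: the vectors, and any model trained on them, retain the value extracted from the underlying data even after the images are deleted.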

The FTC has not done this in past privacy cases with technology companies. According to FTC Commissioner Rohit Chopra, when Google and YouTube agreed to pay $170m over allegations the companies had collected data from children without parental consent, the FTC settlement “allowed Google and YouTube to profit from its conduct, even after paying a civil penalty.”

Likewise, when the FTC voted to approve a settlement with Facebook over claims it had violated its 2012 privacy settlement agreement, he said, Facebook did not have to give up any of its facial recognition technology or data.

“Commissioners have previously voted to allow data protection law violators to retain algorithms and technologies that derive much of their value from ill-gotten data,” said Chopra in a statement [PDF]. “This is an important course correction.”

[…]

Source: Privacy pilfering project punished by FTC purge penalty: AI upstart told to delete data and algorithms • The Register

NYPD posts surveillance technology impact and use policies and requests comments

Beginning January 11, 2021, draft surveillance technology impact and use policies will be posted on the Department’s website. Members of the public are invited to review the impact and use policies and provide feedback on their contents. The impact and use policies provide details of: 1) the capabilities of the Department’s surveillance technologies, 2) the rules regulating the use of the technologies, 3) protections against unauthorized access of the technologies or related data, 4) surveillance technologies data retention policies, 5) public access to surveillance technologies data, 6) external entity access to surveillance technologies data, 7) Department trainings in the use of surveillance technologies, 8) internal audit and oversight mechanisms of surveillance technologies, 9) health and safety reporting on the surveillance technologies, and 10) potential disparate impacts of the impact and use policies for surveillance technologies.

Source: Draft Policies for Public Comment

WhatsApp delays enforcement of privacy terms by 3 months, following backlash

WhatsApp said on Friday that it won’t enforce the planned update to its data-sharing policy until May 15, weeks after news about the new terms created confusion among its users, exposed the Facebook app to a potential lawsuit, triggered a nationwide investigation and drove tens of millions of its loyal fans to explore alternative messaging apps.

“We’re now moving back the date on which people will be asked to review and accept the terms. No one will have their account suspended or deleted on February 8. We’re also going to do a lot more to clear up the misinformation around how privacy and security works on WhatsApp. We’ll then go to people gradually to review the policy at their own pace before new business options are available on May 15,” the firm said in a blog post.

Source: WhatsApp delays enforcement of privacy terms by 3 months, following backlash | TechCrunch

I’m pretty sure there is no confusion. People just don’t want all their data shared with Facebook when they were promised it wouldn’t be. So they are leaving for Signal and Telegram.

Turkey launches antitrust probe into WhatsApp and Facebook over the new privacy policy. Still can’t export WhatsApp chats.

WhatsApp updated its privacy policy at the turn of the new year. Users were notified via a popup message upon opening the app that their data would be shared with Facebook and other companies come February 8. Due to Facebook’s notorious history with user data and privacy, the update has since garnered criticism, with many people migrating to alternative messaging apps like Signal and Telegram. Microsoft entered the playing field too, recommending that users switch to Skype in place of the Facebook-owned WhatsApp.

In the latest development, Turkey has launched an antitrust probe into Facebook and WhatsApp over the updated privacy policy. Bloomberg reports that:

Turkey’s antitrust board launched an investigation into Facebook Inc. and its messaging service WhatsApp Inc. over new usage terms that have sparked privacy concerns.

[…]

The regulator also said on Monday that it was halting implementation of the new terms, which would result in “more data being collected, processed and used by Facebook,” according to the statement.

Source: Turkey launches antitrust probe into WhatsApp and Facebook over the new privacy policy – Neowin

Every Deleted Parler Post, Many With Users’ Location Data, Has Been Archived. Parler goes down. Still can’t export your WhatsApp history.

In the wake of the violent insurrection at the U.S. Capitol by scores of President Trump’s supporters, a lone researcher began an effort to catalogue the posts of social media users across Parler, a platform founded to provide conservative users a safe haven for uninhibited “free speech” — but which ultimately devolved into a hotbed of far-right conspiracy theories, unchecked racism, and death threats aimed at prominent politicians.

The researcher, who asked to be referred to by her Twitter handle, @donk_enby, began with the goal of archiving every post from January 6, the day of the Capitol riot: what she called a bevy of “very incriminating” evidence. According to the Atlantic Council’s Digital Forensic Research Lab, among other sources, Parler is one of several apps used by the insurrectionists to coordinate their breach of the Capitol, in a plan to overturn the 2020 election results and keep Donald Trump in power.

Five people died in the attempt.

Hoping to create a lasting public record for future researchers to sift through, @donk_enby began by archiving the posts from that day. The scope of the project quickly broadened, however, as it became increasingly clear that Parler was on borrowed time. Apple and Google announced that Parler would be removed from their app stores because it had failed to properly moderate posts that encouraged violence and crime. The final nail in the coffin came Saturday when Amazon announced it was pulling Parler’s plug.

In an email first obtained by BuzzFeed News, Amazon officials told the company they planned to boot it from its cloud hosting service, Amazon Web Services, saying it had witnessed a “steady increase” in violent content across the platform. “It’s clear that Parler does not have an effective process to comply with the AWS terms of service,” the email read.

Operating on little sleep, @donk_enby began the work of archiving all of Parler’s posts, ultimately capturing around 99.9 percent of its content. In a tweet early Sunday, @donk_enby said she was crawling some 1.1 million Parler video URLs. “These are the original, unprocessed, raw files as uploaded to Parler with all associated metadata,” she said. Included in this data tranche, now more than 56 terabytes in size, @donk_enby confirmed that the raw video files include GPS metadata pointing to exact locations of where the videos were taken.

@donk_enby later shared a screenshot showing the GPS position of a particular video, with coordinates in latitude and longitude.
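EXIF-style GPS metadata stores coordinates as degrees, minutes and seconds; converting them to the decimal latitude/longitude seen in such screenshots is straightforward. A sketch with hypothetical coordinates, not values from the actual Parler data:

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    """Convert EXIF-style degrees/minutes/seconds GPS values to decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    # Southern latitudes and western longitudes are negative.
    return -value if ref in ("S", "W") else value

# Hypothetical coordinates near the U.S. Capitol, in EXIF DMS form:
lat = dms_to_decimal(38, 53, 23.0, "N")   # roughly 38.8897
lon = dms_to_decimal(77, 0, 32.0, "W")    # roughly -77.0089
```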

The privacy implications are obvious, but the copious data may also serve as a fertile hunting ground for law enforcement. Federal and local authorities have arrested dozens of suspects in recent days accused of taking part in the Capitol riot, where a Capitol police officer, Brian Sicknick, was fatally wounded after being struck in the head with a fire extinguisher.

[…]

Kirtaner, creator of 420chan — a.k.a. Aubrey Cottle — reported obtaining 6.3 GB of Parler user data from an unsecured AWS server in November. The leak reportedly contained passwords, photos and email addresses from several other companies as well. Parler CEO John Matze later claimed to Business Insider that the data contained only “public information” about users, which had been improperly stored by an email vendor whose contract was subsequently terminated over the leak. (This leak is separate from the debunked claim that Parler was “hacked” in late November, proof of which was determined to be fake.)

In December, Twitter suspended Kirtaner for tweeting, “I’m killing Parler and its fucking glorious,” citing its rules against threatening “violence against an individual or group of people.” Kirtaner’s account remains suspended despite an online campaign urging Twitter’s safety team to reverse its decision. Gregg Housh, an internet activist involved in many early Anonymous campaigns, noted online that the tweet was “not aimed at a person and [was] not actually violent.”

Source: Every Deleted Parler Post, Many With Users’ Location Data, Has Been Archived

ODoH: Cloudflare and Apple design a new privacy-friendly internet protocol for DNS

Engineers at Cloudflare and Apple say they’ve developed a new internet protocol that will shore up one of the biggest holes in internet privacy that many don’t even know exists. Dubbed Oblivious DNS-over-HTTPS, or ODoH for short, the new protocol makes it far more difficult for internet providers to know which websites you visit.

But first, a little bit about how the internet works.

Every time you visit a website, your browser uses a DNS resolver to convert the web address into a machine-readable IP address that locates the web page on the internet. But this process is not encrypted, meaning that every time you load a website the DNS query is sent in the clear. That means the DNS resolver — which might be your internet provider unless you’ve changed it — knows which websites you visit. That’s not great for your privacy, especially since your internet provider can also sell your browsing history to advertisers.
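To make the leak concrete: a classic DNS query is a small unencrypted UDP packet, and the hostname you are looking up sits in it as readable bytes. A minimal sketch that builds such a packet by hand (the RFC 1035 wire format shown is standard, but this is illustrative only; your browser’s resolver code is of course more involved):

```python
import struct

def build_dns_query(hostname: str, txid: int = 0x1234) -> bytes:
    """Build a minimal RFC 1035 DNS query for an A record."""
    # Header: transaction id, flags (recursion desired), 1 question,
    # 0 answer / authority / additional records.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question: the name as length-prefixed labels, then QTYPE=A, QCLASS=IN.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)

packet = build_dns_query("example.com")
# Sent over plain UDP port 53, the hostname is right there in the bytes:
assert b"example" in packet and b"com" in packet
```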

Recent developments like DNS-over-HTTPS (or DoH) have added encryption to DNS queries, making it harder for attackers to hijack DNS queries and point victims to malicious websites instead of the real website you wanted to visit. But that still doesn’t stop the DNS resolvers from seeing which website you’re trying to visit.

Enter ODoH, which builds on previous work by Princeton academics. In simple terms, ODoH decouples DNS queries from the internet user, preventing the DNS resolver from knowing which sites you visit.

Here’s how it works: ODoH wraps a layer of encryption around the DNS query and passes it through a proxy server, which acts as a go-between for the internet user and the website they want to visit. Because the DNS query is encrypted, the proxy can’t see what’s inside, but acts as a shield to prevent the DNS resolver from seeing who sent the query to begin with.

“What ODoH is meant to do is separate the information about who is making the query and what the query is,” said Nick Sullivan, Cloudflare’s head of research.

In other words, ODoH ensures that only the proxy knows the identity of the internet user and that the DNS resolver only knows the website being requested. Sullivan said that page loading times on ODoH are “practically indistinguishable” from DoH and shouldn’t cause any significant changes to browsing speed.
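That separation of knowledge can be sketched with a toy model. Real ODoH seals the query to the resolver’s public key using HPKE; the XOR “cipher” below is a deliberately insecure stand-in, used only to show who can read what:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy reversible "cipher" standing in for HPKE. NOT real cryptography.
    return bytes(a ^ b for a, b in zip(data, key))

class Resolver:
    """Knows WHAT is being asked, but not WHO is asking."""
    def __init__(self):
        self.key = secrets.token_bytes(64)  # stand-in for its HPKE key pair
        self.seen_clients = []
    def resolve(self, sealed: bytes, client_ip: str) -> bytes:
        self.seen_clients.append(client_ip)
        query = xor(sealed, self.key).decode()
        answer = {"example.com": "93.184.216.34"}.get(query, "0.0.0.0")
        return xor(answer.encode(), self.key)  # the answer goes back sealed

class Proxy:
    """Knows WHO is asking, but only ever sees ciphertext."""
    def __init__(self):
        self.seen = []
    def forward(self, client_ip: str, sealed: bytes, resolver) -> bytes:
        self.seen.append((client_ip, sealed))
        # The proxy strips the client's identity before forwarding:
        return resolver.resolve(sealed, client_ip="<proxy>")

resolver, proxy = Resolver(), Proxy()
# The client seals the query with the resolver's key (in real ODoH,
# its public key), so the proxy in the middle cannot read it.
sealed = xor(b"example.com", resolver.key)
sealed_answer = proxy.forward("198.51.100.7", sealed, resolver)
answer = xor(sealed_answer, resolver.key).decode()  # "93.184.216.34"
```

The privacy property falls apart exactly as Sullivan describes: if one entity ran both the proxy and the resolver, it could join `proxy.seen` with the resolver’s state and recover who asked for what.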

A key component of ODoH working properly is ensuring that the proxy and the DNS resolver never “collude,” in that the two are never controlled by the same entity, otherwise the “separation of knowledge is broken,” Sullivan said. That means having to rely on companies offering to run proxies.

Sullivan said a few partner organizations are already running proxies, allowing for early adopters to begin using the technology through Cloudflare’s existing 1.1.1.1 DNS resolver. But most will have to wait until ODoH is baked into browsers and operating systems before it can be used. That could take months or years, depending on how long it takes for ODoH to be certified as a standard by the Internet Engineering Task Force.

Source: Cloudflare and Apple design a new privacy-friendly internet protocol | TechCrunch

WhatsApp Has Shared Your Data With Facebook since 2016, actually.

Since Facebook acquired WhatsApp in 2014, users have wondered and worried about how much data would flow between the two platforms. Many of them experienced a rude awakening this week, as a new in-app notification raised awareness about a step WhatsApp actually took to share more with Facebook back in 2016.

On Monday, WhatsApp updated its terms of use and privacy policy, primarily to expand on its practices around how WhatsApp business users can store their communications. A pop-up has been notifying users that as of February 8, the app’s privacy policy will change and they must accept the terms to keep using the app. As part of that privacy policy refresh, WhatsApp also removed a passage about opting out of sharing certain data with Facebook: “If you are an existing user, you can choose not to have your WhatsApp account information shared with Facebook to improve your Facebook ads and products experiences.”

Some media outlets and confused WhatsApp users understandably assumed that this meant WhatsApp had finally crossed a line, requiring data-sharing with no alternative. But in fact the company says that the privacy policy deletion simply reflects how WhatsApp has shared data with Facebook since 2016 for the vast majority of its now 2 billion-plus users.

When WhatsApp launched a major update to its privacy policy in August 2016, it started sharing user information and metadata with Facebook. At that time, the messaging service offered its billion existing users 30 days to opt out of at least some of the sharing. If you chose to opt out at the time, WhatsApp will continue to honor that choice. The feature is long gone from the app settings, but you can check whether you’re opted out through the “Request account info” function in Settings.

Meanwhile, the billion-plus users WhatsApp has added since 2016, along with anyone who missed that opt-out window, have had their data shared with Facebook all this time. WhatsApp emphasized to WIRED that this week’s privacy policy changes do not actually impact WhatsApp’s existing practices or behavior around sharing data with Facebook.

[…]

None of this has at any point impacted WhatsApp’s marquee feature: end-to-end encryption. Messages, photos, and other content you send and receive on WhatsApp can only be viewed on your smartphone and the devices of the people you choose to message with. WhatsApp and Facebook itself can’t access your communications.

[…]

In practice, this means that WhatsApp shares a lot of intel with Facebook, including account information like your phone number, logs of how long and how often you use WhatsApp, information about how you interact with other users, device identifiers, and other device details like IP address, operating system, browser details, battery health information, app version, mobile network, language and time zone. Transaction and payment data, cookies, and location information are also all fair game to share with Facebook depending on the permissions you grant WhatsApp in the first place.

[…]

Source: WhatsApp Has Shared Your Data With Facebook for Years, Actually | WIRED

If you’re a WhatsApp user, you’ll have to share your personal data with Facebook from next month – and no, you can’t opt out this time

WhatsApp users must agree to share their personal information with Facebook if they want to continue using the messaging service from next month, according to new terms and conditions.

“As part of the Facebook Companies, WhatsApp receives information from, and shares information with, the other Facebook Companies,” its privacy policy, updated this week, stated.

“We may use the information we receive from them, and they may use the information we share with them, to help operate, provide, improve, understand, customize, support, and market our Services and their offerings, including the Facebook Company Products.”

Yes, said information includes your personal information. In other words, WhatsApp users must allow their personal info to be shared with Facebook and its subsidiaries as and when the tech giant decides. Presumably, this is to serve personalized advertising.

If you’re a user today, you have two choices: accept this new arrangement, or stop using the end-to-end encrypted chat app (and use something else, like Signal). The changes are expected to take effect on February 8.

When WhatsApp was acquired by Facebook in 2014, it promised netizens that its instant-messaging app would not collect names, addresses, internet searches, or location data. CEO Jan Koum wrote in a blog post: “Above all else, I want to make sure you understand how deeply I value the principle of private communication. For me, this is very personal. I was born in Ukraine, and grew up in the USSR during the 1980s.

“One of my strongest memories from that time is a phrase I’d frequently hear when my mother was talking on the phone: ‘This is not a phone conversation; I’ll tell you in person.’ The fact that we couldn’t speak freely without the fear that our communications would be monitored by KGB is in part why we moved to the United States when I was a teenager.”

Two years later, however, that vow was eroded by, well, capitalism, and WhatsApp decided it would share its users’ information with Facebook, though only if they consented. That ability to opt out, however, will no longer be available from next month. Koum left in 2018.

That means users who wish to keep using WhatsApp must be prepared to give up personal info such as their names, profile pictures, status updates, phone numbers, contacts lists, and IP addresses, as well as data about their mobile devices, such as model numbers, operating system versions, and network carrier details, to the mothership. If users engage with businesses via the app, order details such as shipping addresses and the amount of money spent can be passed to Facebook, too.

Source: If you’re a WhatsApp user, you’ll have to share your personal data with Facebook from next month – and no, you can’t opt out this time • The Register

Singapore police can now access data from the country’s contact tracing app

With a nearly 80 percent uptake among the country’s population, Singapore’s TraceTogether app is one of the best examples of what a successful centralized contact tracing effort can look like as countries across the world struggle to contain the coronavirus pandemic. To date, more than 4.2 million people in Singapore have downloaded the app or obtained the wearable the government has offered.

In contrast to Apple’s and Google’s Exposure Notifications System — which powers the majority of COVID-19 apps out there, including ones put out by states and countries like California and Germany — Singapore’s TraceTogether app and wearable use the country’s own internally developed BlueTrace protocol. The protocol relies on a centralized reporting structure wherein a user’s entire contact log is uploaded to a server administered by a government health authority. Outside of Singapore, only Australia has so far adopted the protocol.
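The centralized structure can be sketched as follows; the class and field names are illustrative, not the actual BlueTrace implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Encounter:
    temp_id: str      # rotating identifier broadcast over Bluetooth
    timestamp: int
    rssi: int         # signal strength, a rough proximity measure

@dataclass
class Phone:
    log: list = field(default_factory=list)
    def record(self, enc: Encounter):
        self.log.append(enc)   # encounters stay on-device until upload

class HealthAuthority:
    """Centralized model: on a positive test, the WHOLE log is uploaded."""
    def __init__(self):
        self.uploaded_logs = {}
    def receive_upload(self, user: str, log: list):
        # The authority issued every temp_id, so once the log arrives it
        # can re-identify each entry and reconstruct the contact graph.
        self.uploaded_logs[user] = log

phone = Phone()
phone.record(Encounter("tid-042", 1600000000, -60))
phone.record(Encounter("tid-913", 1600000300, -71))

authority = HealthAuthority()
authority.receive_upload("user-a", phone.log)
```

The decentralized Apple/Google design inverts the last step: matching happens on the phone, and the server never holds a per-user contact log that could be repurposed, for example, for criminal investigations.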

In an update the government made to the platform’s privacy policy on Monday, it added a paragraph about how police can use data collected through the platform. “TraceTogether data may be used in circumstances where citizen safety and security is or has been affected,” the new paragraph states. “Authorized Police officers may invoke Criminal Procedure Code (CPC) powers to request users to upload their TraceTogether data for criminal investigations.”

Previous versions of the privacy policy made no mention of the fact police could access any data collected by the app; in fact, the website used to say, “data will only be used for COVID-19 contact tracing.” The government added the paragraph after Singapore’s opposition party asked the Minister of State for Home Affairs if police could use the data for criminal investigations. “We do not preclude the use of TraceTogether data in circumstances where citizens’ safety and security is or has been affected, and this applies to all other data as well,” said Minister Desmond Tan.

What’s happening in Singapore is an example of the exact type of potential privacy nightmare that experts warned might happen with centralized digital contact tracing efforts. Worse, a loss of trust in the privacy of data could push people further away from contact tracing efforts altogether, putting everyone at more risk.

Source: Singapore police can access data from the country’s contact tracing app | Engadget

Access To Big Data Turns Farm Machine Makers Into Tech Firms

The combine harvester, a staple of farmers’ fields since the late 1800s, does much more these days than just vacuum up corn, soybeans and other crops. It also beams back reams of data to its manufacturer.

GPS records the combine’s precise path through the field. Sensors tally the number of crops gathered per acre and the spacing between them. On a sister machine called a planter, algorithms adjust the distribution of seeds based on which parts of the soil have in past years performed best. Another machine, a sprayer, uses algorithms to scan for weeds and zap them with pesticides. All the while sensors record the wear and tear on the machines, so that when the farmer who operates them heads to the local distributor to look for a replacement part, it has already been ordered and is waiting for them.

Farming may be an earthy industry, but much of it now takes place in the cloud. Leading farm machine makers like Moline, Illinois-based John Deere & Co. or Duluth’s AGCO collect data from all around the world thanks to the ability of their bulky machines to extract a huge variety of metrics from farmers’ fields and store them online. The farmers who sit in the driver’s seats of these machines have access to the data that they themselves accumulate, but legal murk obscures the question of whether they actually own that data, and only the machine manufacturer can see all the data from all the machines it has leased or sold.

[…]

Still, farmers have yet to be fully won over. Many worry that by allowing the transfer of their data to manufacturers, it will inadvertently wind up in the hands of neighboring farmers with whom they compete for scarce land, who could then mine their closely guarded information about the number of acres they plow or the types of fertilizers and pesticides they use, thus gaining a competitive edge. Others fear that information about the type of seeds or fertilizer they use will wind up in the hands of the chemicals companies they buy from, allowing those companies to anticipate their product needs and charge them more, said Jonathan Coppess, a professor at the University of Illinois.

Sensitive to the suggestion that they are infringing on privacy, the largest equipment makers say they don’t share farmers’ data with third parties unless farmers give permission. (Farmers frequently agree to share data with, for example, their local distributors and dealers.)

It’s common to hear that farmers are, by nature, highly protective of their land and business, and that this predisposes them to worry about sharing data even when there are more potential benefits than drawbacks. Still, the concerns are at least partly the result of a lack of legal and regulatory standards around the collection of data from smart farming technologies, observers say. Contracts to buy or rent big machines are many pages long and the language unclear, especially since some of the underlying legal concepts regarding the sharing and collecting of agricultural data are still evolving.

As one 2019 paper puts it, “the lack of transparency and clarity around issues such as data ownership, portability, privacy, trust and liability in the commercial relationships governing smart farming are contributing to farmers’ reluctance to engage in the widespread sharing of their farm data that smart farming facilitates. At the heart of the concerns is the lack of trust between the farmers as data contributors, and those third parties who collect, aggregate and share their data.”

[…]

Some farmers may still find themselves surprised to discover the amount of access Deere and others have to their data. Jacob Maurer is an agronomist with RDO Equipment Co., a Deere dealer, who helps farmers understand how to use their data to work their fields more efficiently. He explained that some farmers would be shocked to learn how much information about their fields he can access by simply tapping into Deere’s vast online stores of data and pulling up their details.

[…]

Based on the mountains of data flowing into their databases, equipment makers with sufficient sales of machines around the country may in theory be able to predict, at least to some small but meaningful extent, the prices of various crops by analyzing the data their machines send in — such as “yields” of crops per acre, the amount of fertilizer used, or the average number of seeds of a given crop planted in various regions, all of which would help to anticipate the supply of crops come harvest season.

Were the company then to sell that data to a commodities trader, say, it could likely reap a windfall. Normally, the markets must wait for highly-anticipated government surveys to run their course before having an indication of the future supply of crops. The agronomic data that machine makers collect could offer similar insights but far sooner.

Machine makers don’t deny the obvious value of the data they collect. As AGCO’s Crawford put it: “Anybody that trades grains would love to have their hands on this data.”

Experts occasionally wonder about what companies could do with the data. Mary Kay Thatcher, a former official with the American Farm Bureau, raised just such a concern in an interview with National Public Radio in 2014, when questions about data ownership were swirling after Monsanto began deploying a new “precision planting” tool that required it to have gobs of data.

“They could actually manipulate the market with it. You know, they only have to know the information about what’s actually happening with harvest minutes before somebody else knows it,” Thatcher said in the interview.

“Not saying they will. Just a concern.”

Source: Access To Big Data Turns Farm Machine Makers Into Tech Firms

Firefox to ship ‘network partitioning’ as a new anti-tracking defense

Firefox 85, scheduled to be released next month, in January 2021, will ship with a feature named Network Partitioning as a new form of anti-tracking protection.

The feature is based on “Client-Side Storage Partitioning,” a new standard currently being developed by the World Wide Web Consortium’s Privacy Community Group.

“Network Partitioning is highly technical, but to simplify it somewhat; your browser has many ways it can save data from websites, not just via cookies,” privacy researcher Zach Edwards told ZDNet in an interview this week.

“These other storage mechanisms include the HTTP cache, image cache, favicon cache, font cache, CORS-preflight cache, and a variety of other caches and storage mechanisms that can be used to track people across websites.”

Edwards says all these data storage systems are shared among websites.

The difference is that Network Partitioning will allow Firefox to save resources like the cache, favicons, CSS files, images, and more, on a per-website basis, rather than together, in the same pool.

This makes it harder for websites and third-parties like ad and web analytics companies to track users since they can’t probe for the presence of other sites’ data in this shared pool.
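To make the tracking vector concrete, here is a toy sketch (illustrative only, not Firefox's actual cache code; the URLs and class names are hypothetical) of why a shared cache leaks cross-site visits and how keying entries by the top-level site closes the leak:

```python
# Toy model of cache partitioning. A shared cache keys entries by resource
# URL alone, so a tracker resource embedded on two different sites can probe
# for a cache hit and infer that the user visited the other site. A
# partitioned cache keys entries by (top-level site, resource URL), so each
# site sees only its own copies.

class SharedCache:
    def __init__(self):
        self._store = {}

    def fetch(self, resource_url, top_level_site):
        # top_level_site is ignored: one pool shared by every site
        if resource_url in self._store:
            return "hit"
        self._store[resource_url] = "cached bytes"
        return "miss"

class PartitionedCache:
    def __init__(self):
        self._store = {}

    def fetch(self, resource_url, top_level_site):
        key = (top_level_site, resource_url)  # double-keyed entry
        if key in self._store:
            return "hit"
        self._store[key] = "cached bytes"
        return "miss"

tracker = "https://tracker.example/pixel.png"  # hypothetical tracker resource

shared = SharedCache()
shared.fetch(tracker, "news.example")         # miss: first load anywhere
print(shared.fetch(tracker, "shop.example"))  # "hit": leaks the earlier visit

partitioned = PartitionedCache()
partitioned.fetch(tracker, "news.example")         # miss
print(partitioned.fetch(tracker, "shop.example"))  # "miss": pools are isolated
```

The same double-keying idea generalizes to the other storage mechanisms in Mozilla's list below, from favicons to DNS lookups.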

According to Mozilla, the following network resources will be partitioned starting with Firefox 85:

  • HTTP cache
  • Image cache
  • Favicon cache
  • Connection pooling
  • StyleSheet cache
  • DNS
  • HTTP authentication
  • Alt-Svc
  • Speculative connections
  • Font cache
  • HSTS
  • OCSP
  • Intermediate CA cache
  • TLS client certificates
  • TLS session identifiers
  • Prefetch
  • Preconnect
  • CORS-preflight cache

But while Mozilla will be deploying the broadest user data “partitioning system” to date, the Firefox creator isn’t the first.

Edwards said the first browser maker to do so was Apple, in 2013, when it began partitioning the HTTP cache, and then followed through by partitioning even more user data storage systems years later, as part of its Tracking Prevention feature.

Google also partitioned the HTTP cache last month, with the release of Chrome 86, and the results were felt right away: Google Fonts lost some of its performance advantage because fonts could no longer be stored in a shared HTTP cache.

The Mozilla team expects similar performance issues for sites loaded in Firefox, but it’s willing to take the hit just to improve the privacy of its users.

“Most policy makers and digital strategists are focused on the death of the 3rd party cookie, but there are a wide variety of other fingerprinting techniques and user tracking strategies that need to be broken by browsers,” Edwards also told ZDNet, lauding Mozilla’s move.

PS: Mozilla also said that a side effect of deploying Network Partitioning is that Firefox 85 will finally be better at blocking “supercookies” — identifiers that abuse various shared storage mediums to persist in browsers and allow advertisers to track user movements across the web.

Source: Firefox to ship ‘network partitioning’ as a new anti-tracking defense | ZDNet

Should We Use Search History for Credit Scores? IMF Says Yes

With more services than ever collecting your data, it’s easy to start asking why anyone should care about most of it. This is why. Because people start having ideas like this.

In a new blog post for the International Monetary Fund, four researchers presented their findings from a working paper that examines the current relationship between finance and tech as well as its potential future. Gazing into their crystal ball, the researchers see the possibility of using the data from your browsing, search, and purchase history to create a more accurate mechanism for determining the credit rating of an individual or business. They believe that this approach could result in greater lending to borrowers who would potentially be denied by traditional financial institutions.

At its heart, the paper is trying to wrestle with the dawning notion that the institutional banking system is facing a serious threat from tech companies like Google, Facebook, and Apple. The researchers identify two key areas in which this is true: Tech companies have greater access to soft information, and messaging platforms can take the place of the physical locations that banks rely on for meeting with customers.

[…]

But how would all this data be incorporated into credit ratings? Machine learning, of course. It’s black boxes all the way down.

The researchers acknowledge that there will be privacy and policy concerns related to incorporating this kind of soft-data into credit analysis. And they do little to explain how this might work in practice. The paper isn’t long, and it’s worth a read just to wrap your mind around some of the notions of fintech’s future and why everyone seems to want in on the payments game.

As it is, getting the really fine-grained soft data points would probably require companies like Facebook and Apple to loosen their standards on linking unencrypted information with individual accounts. How they might share information with other institutions would be its own can of worms.

[…]

Yes, the idea of every move you make online feeding into your credit score is creepy. It may not even be possible in the near future. The IMF researchers stress that “governments should follow and carefully support the technological transition in finance. It is important to adjust policies accordingly and stay ahead of the curve.” When’s the last time a government did any of that?

Source: Should We Use Search History for Credit Scores? IMF Says Yes

France fines Google $120M and Amazon $42M for dropping tracking cookies without consent

France’s data protection agency, the CNIL, has slapped Google and Amazon with fines for dropping tracking cookies without consent.

Google has been hit with a total of €100 million ($120 million) for dropping cookies on Google.fr, and Amazon with €35 million (~$42 million) for doing so on the Amazon.fr domain, under the penalty notices issued today.

The regulator carried out investigations of the websites over the past year and found that tracking cookies were automatically dropped when a user visited the domains, in breach of the country’s Data Protection Act.

In Google’s case the CNIL has found three consent violations related to dropping non-essential cookies.

“As this type of cookie cannot be deposited without the user having expressed their consent, the restricted committee considered that the companies had not complied with the requirement provided for by Article 82 of the Data Protection Act regarding the prior collection of consent before the deposit of non-essential cookies,” it writes in the penalty notice [which we’ve translated from French].

Amazon was found to have made two violations, per the CNIL penalty notice.

CNIL also found that the information about the cookies provided to site visitors was inadequate — noting that a banner displayed by Google did not provide specific information about the tracking cookies the Google.fr site had already dropped.

Under local French (and European) law, site users should have been clearly informed before the cookies were dropped and asked for their consent.

In Amazon’s case its French site displayed a banner informing arriving visitors that they agreed to its use of cookies. CNIL said this did not comply with transparency or consent requirements — since it was not clear to users that the tech giant was using cookies for ad tracking. Nor were users given the opportunity to consent.

The law on tracking cookie consent has been clear in Europe for years. But in October 2019 a CJEU ruling further clarified that consent must be obtained prior to storing or accessing non-essential cookies. As we reported at the time, sites that failed to ask for consent to track were risking a big fine under EU privacy laws.

Source: France fines Google $120M and Amazon $42M for dropping tracking cookies without consent | TechCrunch

As if Productivity Score wasn’t creepy enough, Microsoft has patented tech for ‘meeting quality monitoring devices’ – PS is being defanged though

The slightly creepy “Productivity Score” may not be all that’s in store for Microsoft 365 users, judging by a trawl of Redmond’s patents.

One that has popped up recently concerns a “Meeting Insight Computing System“, spotted first by GeekWire, created to give meetings a quality score with a view to improving upcoming get-togethers.

It all sounds innocent enough until you read about the requirement for “quality parameters” to be collected from “meeting quality monitoring devices”, which might give some pause for thought.

Productivity Score relies on metrics captured within Microsoft 365 to assess how productive a company and its workers are. Metrics include the take-up of messaging platforms versus email. And though Microsoft has been quick to insist the motives behind the tech are pure, others have cast more of a jaundiced eye over the technology.

[…]

Meeting Insights would take things further by plugging data from a variety of devices into an algorithm in order to score the meeting. Sampling of environmental data such as air quality and the like is all well and good, but proposed sensors such as “a microphone that may, for instance, detect speech patterns consistent with boredom, fatigue, etc” as well as measuring other metrics, such as how long a person spends speaking, could also provide data to be stirred into the mix.

And if that doesn’t worry attendees, how about some more metrics to measure how focused a person is? Are they taking care of emails, messaging or enjoying a surf of the internet when they should be paying attention to the speaker? Heck, if one is taking data from a user’s computer, one could even consider the physical location of the device.

[…]

Talking to The Reg, one privacy campaigner who asked to remain anonymous said of tools such as Productivity Score and the Meeting Insight Computing System patent: “There is a simple dictum in privacy: you cannot lose data you don’t have. In other words, if you collect it you have to protect it, and that sort of data is risky to start with.

“Who do you trust? The correct answer is ‘no one’.”

Source: As if Productivity Score wasn’t creepy enough, Microsoft has patented tech for ‘meeting quality monitoring devices’ • The Register

Since then, Microsoft has said it will remove user names from the ‘Productivity Score’ feature after a privacy backlash ( Geekwire )

Microsoft says it will make changes in its new Productivity Score feature, including removing the ability for companies to see data about individual users, to address concerns from privacy experts that the tech giant had effectively rolled out a new tool for snooping on workers.

“Going forward, the communications, meetings, content collaboration, teamwork, and mobility measures in Productivity Score will only aggregate data at the organization level—providing a clear measure of organization-level adoption of key features,” wrote Jared Spataro, Microsoft 365 corporate vice president, in a post this morning. “No one in the organization will be able to use Productivity Score to access data about how an individual user is using apps and services in Microsoft 365.”

The company rolled out its new “Productivity Score” feature as part of Microsoft 365 in late October. It gives companies data to understand how workers are using and adopting different forms of technology. It made headlines over the past week as reports surfaced that the tool lets managers see individual user data by default.

As originally rolled out, Productivity Score turned Microsoft 365 into a “full-fledged workplace surveillance tool,” wrote Wolfie Christl of the independent Cracked Labs digital research institute in Vienna, Austria. “Employers/managers can analyze employee activities at the individual level (!), for example, the number of days an employee has been sending emails, using the chat, using ‘mentions’ in emails etc.”

The initial version of the Productivity Score tool allowed companies to see individual user data. (Screenshot via YouTube)

Spataro wrote this morning, “We appreciate the feedback we’ve heard over the last few days and are moving quickly to respond by removing user names entirely from the product. This change will ensure that Productivity Score can’t be used to monitor individual employees.”

Privacy campaigner flags concerns about Microsoft’s creepy Productivity Score now in 365

Microsoft’s Productivity Score has put in a public appearance in Microsoft 365 and attracted the ire of privacy campaigners and activists.

The Register had already noted the vaguely creepy-sounding technology back in May. The goal of it is to use telemetry captured by the Windows behemoth to track the productivity of an organisation through metrics such as a corporate obsession with interminable meetings or just how collaborative employees are being.

The whole thing sounds vaguely disturbing in spite of Microsoft’s insistence that it was for users’ own good.

As more details have emerged, so have concerns over just how granular the level of data capture is.

Vienna-based researcher (and co-creator of Data Dealer) Wolfie Christl suggested that the new feature “turns Microsoft 365 into a full-fledged workplace surveillance tool.”

Christl went on to claim that the software allows employers to dig into employee activities, checking the usage of email versus Teams and looking into email threads with @mentions. “This is so problematic at many levels,” he noted, adding: “Managers evaluating individual-level employee data is a no go,” and that there was the danger that evaluating “productivity” data can shift power from employees to organisations.

Earlier this year we put it to Microsoft corporate vice president Brad Anderson that employees might find themselves under the gimlet gaze of HR thanks to this data.

He told us: “There is no PII [personally identifiable information] data in there… it’s a valid concern, and so we’ve been very careful that as we bring that telemetry back, you know, we bring back what we need, but we stay out of the PII world.”

Microsoft did concede that there could be granularity down to the individual level although exceptions could be configured. Melissa Grant, director of product marketing for Microsoft 365, told us that Microsoft had been asked if it was possible to use the tool to check, for example, that everyone was online and working by 8 but added: “We’re not in the business of monitoring employees.”

Christl’s concerns are not limited to the Productivity Score dashboard itself, but also regarding what is going on behind the scenes in the form of the Microsoft Graph. The People API, for example, is a handy jumping off point into all manner of employee data.

For its part, Microsoft has continued to insist that Productivity Score is not a stick with which to bash employees. In a recent blog on the matter, the company stated:

To be clear, Productivity Score is not designed as a tool for monitoring employee work output and activities. In fact, we safeguard against this type of use by not providing specific information on individualized actions, and instead only analyze user-level data aggregated over a 28-day period, so you can’t see what a specific employee is working on at a given time. Productivity Score was built to help you understand how people are using productivity tools and how well the underlying technology supports them in this.

In an email to The Register, Christl retorted: “The system *does* clearly monitor employee activities. And they call it ‘Productivity Score’, which is perhaps misleading, but will make managers use it in a way managers usually use tools that claim to measure ‘productivity’.”

He added that Microsoft’s own promotional video for the technology showed a list of clearly identifiable users, which corporate veep Jared Spataro said enabled companies to “find your top communicators across activities for the last four weeks.”

We put Christl’s concerns to Microsoft and asked the company if its good intentions extended to the APIs exposed by the Microsoft Graph.

While it has yet to respond to worries about the APIs, it reiterated that the tool was compliant with privacy laws and regulations, telling us: “Productivity Score is an opt-in experience that gives IT administrators insights about technology and infrastructure usage.”

It added: “Insights are intended to help organizations make the most of their technology investments by addressing common pain points like long boot times, inefficient document collaboration, or poor network connectivity. Insights are shown in aggregate over a 28-day period and are provided at the user level so that an IT admin can provide technical support and guidance.”

Source: Privacy campaigner flags concerns about Microsoft’s creepy Productivity Score • The Register

IRS Could Search Warrantless Location Database Over 10,000 Times

The IRS was able to query a database of location data quietly harvested from ordinary smartphone apps over 10,000 times, according to a copy of the contract between IRS and the data provider obtained by Motherboard.

The document provides more insight into what exactly the IRS wanted to do with a tool purchased from Venntel, a government contractor that sells clients access to a database of smartphone movements. The Inspector General is currently investigating the IRS for using the data without a warrant to try to track the location of Americans.

“This contract makes clear that the IRS intended to use Venntel’s spying tool to identify specific smartphone users using data collected by apps and sold onwards to shady data brokers. The IRS would have needed a warrant to obtain this kind of sensitive information from AT&T or Google,” Senator Ron Wyden told Motherboard in a statement after reviewing the contract.

[…]

Venntel sources its location data from gaming, weather, and other innocuous-looking apps. An aide in the office of Senator Ron Wyden, which has been investigating the location data industry, previously told Motherboard that officials from Customs and Border Protection (CBP), which has also purchased Venntel products, believe Venntel also obtains location information from the real-time bidding that occurs when advertisers push their adverts into users’ browsing sessions.

One of the new documents says Venntel sources the location information from its “advertising analytics network and other sources.” Venntel is a subsidiary of advertising firm Gravy Analytics.

The data is “global,” according to a document obtained from CBP.

[…]

Source: IRS Could Search Warrantless Location Database Over 10,000 Times

GM launches OnStar Insurance Services – uses your driving data to calculate insurance rate

Andrew Rose, president of OnStar Insurance Services commented: “OnStar Insurance will promote safety, security and peace of mind. We aim to be an industry leader, offering insurance in an innovative way.

“GM customers who have subscribed to OnStar and connected services will be eligible to receive discounts, while also receiving fully-integrated services from OnStar Insurance Services.”

The service has been developed to improve the experience for policyholders who have an OnStar Safety & Security plan, as Automatic Crash Response has been designed to notify an OnStar Emergency-certified Advisor who can send for help.

The service is currently working with its insurance carrier partners to remove bias from insurance plans by focusing on factors within the customer’s control, such as individual vehicle usage, and by rewarding smart driving habits that benefit road safety.

OnStar Insurance Services plans to provide customers with personalised vehicle care and promote safer driving habits, along with a data-backed analysis of driving behaviour.

Source: General Motors launches OnStar Insurance Services – Reinsurance News

What it doesn’t say is whether it could raise premiums or deny coverage entirely, how transparent the reward system will be, or what else GM will be doing with your data.

Australia’s spy agencies caught collecting COVID-19 app data

Australia’s intelligence agencies have been caught “incidentally” collecting data from the country’s COVIDSafe contact-tracing app during the first six months of its launch, a government watchdog has found.

The report, published Monday by the Australian government’s inspector general for the intelligence community, which oversees the government’s spy and eavesdropping agencies, said the app data was scooped up “in the course of the lawful collection of other data.”

But the watchdog said that there was “no evidence” that any agency “decrypted, accessed or used any COVID app data.”

Incidental collection is a common term spies use to describe data that was not deliberately targeted but was collected as part of a wider collection effort. This kind of collection isn’t accidental, but rather a consequence of, for example, spy agencies tapping into fiber optic cables, which carry an enormous firehose of data. An Australian government spokesperson told one outlet, which first reported the news, that incidental collection can also happen as a result of the “execution of warrants.”

The report did not say when the incidental collection stopped, but noted that the agencies were “taking active steps to ensure compliance” with the law, and that the data would be “deleted as soon as practicable,” without setting a firm date.

For some, fears that a government spy agency could access COVID-19 contact-tracing data were the worst possible outcome.

[…]

Source: Australia’s spy agencies caught collecting COVID-19 app data | TechCrunch

Amazon’s ad-hoc Ring, Echo mesh network can mooch off your neighbors’ Wi-Fi if needed – and it’s opt-out

Amazon is close to launching Sidewalk – its ad-hoc wireless network for smart-home devices that taps into people’s Wi-Fi – and it is pretty much an opt-out affair.

The gist of Sidewalk is this: nearby Amazon gadgets, regardless of who owns them, can automatically organize themselves into their own private wireless network mesh, communicating primarily using Bluetooth Low Energy over short distances, and 900MHz LoRa over longer ranges.

At least one device in a mesh will likely be connected to the internet via someone’s Wi-Fi, and so, every gadget in the mesh can reach the ‘net via that bridging device. This means all the gadgets within a mesh can be remotely controlled via an app or digital assistant, either through their owners’ internet-connected Wi-Fi or by going through a suitable bridge in the mesh. If your internet goes down, your Amazon home security gizmo should still be reachable, and send out alerts, via the mesh.

It also means if your neighbor loses broadband connectivity, their devices in the Sidewalk mesh can still work over the ‘net by routing through your Sidewalk bridging device and using your home ISP connection.

[…]

Amazon Echoes, Ring Floodlight Cams, and Ring Spotlight Cams will be the first Sidewalk bridging devices as well as Sidewalk endpoints. The internet giant hopes to encourage third-party manufacturers to produce equipment that is also Sidewalk compatible, extending meshes everywhere.

Crucially, it appears Sidewalk is opt-out for those who already have the hardware, and will be opt-in for those buying new gear.

[…]

If you already have, say, an Amazon Ring, it will soon get a software update that automatically enables Sidewalk connectivity, and you’ll get an email explaining how to switch that off. When powering up a new gizmo, you’ll at least get the chance to opt in or out.

[…]

We’re told Sidewalk will only sip your internet connection rather than hog it, limiting itself to half a gigabyte a month. This policy appears to live in hope that people aren’t on stingy monthly data caps.

[…]

Just don’t forget that Ring and the police, in the US at least, have a rather cosy relationship. While Amazon stresses that Ring owners are in control of the footage recorded by their camera-fitted doorbells, homeowners are often pressured into turning their equipment into surveillance systems for the cops.

Source: Amazon’s ad-hoc Ring, Echo mesh network can mooch off your neighbors’ Wi-Fi if needed – and it’s opt-out • The Register

The ones who brought you Let’s Encrypt, bring you: Tools for gathering anonymized app usage metrics from netizens

The Internet Security Research Group (ISRG) has a plan to allow companies to collect information about how people are using their products while protecting the privacy of those generating the data.

Today, the California-based non-profit, which operates Let’s Encrypt, introduced Prio Services, a way to gather online product metrics without compromising the personal information of product users.

“Applications such as web browsers, mobile applications, and websites generate metrics,” said Josh Aas, founder and executive director of ISRG, and Tim Geoghegan, site reliability engineer, in an announcement. “Normally they would just send all of the metrics back to the application developer, but with Prio, applications split the metrics into two anonymized and encrypted shares and upload each share to different processors that do not share data with each other.”

Prio is described in a 2017 research paper [PDF] as “a privacy-preserving system for the collection of aggregate statistics.” The system was developed by Henry Corrigan-Gibbs, then a Stanford doctoral student and currently an MIT assistant professor, and Dan Boneh, a professor of computer science and electrical engineering at Stanford.

Prio implements a cryptographic approach called secret-shared non-interactive proofs (SNIPs). According to its creators, it handles data only 5.7x slower than systems with no privacy protection. That’s considerably better than the competition: client-generated non-interactive zero-knowledge proofs of correctness (NIZKs) are 267x slower than unprotected data processing and privacy methods based on succinct non-interactive arguments of knowledge (SNARKs) clock in at three orders of magnitude slower.
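The core secret-sharing step behind Prio can be illustrated with a toy additive-sharing sketch (the real system also attaches SNIP proofs that each submission is well-formed, which are omitted here; the modulus and metric values below are illustrative choices, not taken from the paper):

```python
# Toy additive secret sharing, the building block Prio-style aggregation
# rests on: each client splits its metric into two random-looking shares,
# and no single processor can recover an individual value.
import secrets

MOD = 2**61 - 1  # illustrative prime modulus; real deployments pick their own field

def split(value):
    """Split a metric into two additive shares; each share alone is uniform noise."""
    share_a = secrets.randbelow(MOD)
    share_b = (value - share_a) % MOD
    return share_a, share_b

# Each client splits its metric and uploads one share to each processor.
metrics = [3, 1, 4, 1, 5]  # hypothetical per-user counts
shares_a, shares_b = zip(*(split(m) for m in metrics))

# Each processor sums only the shares it received, learning nothing about
# any individual value; combining the two sums reveals only the aggregate.
agg_a = sum(shares_a) % MOD
agg_b = sum(shares_b) % MOD
total = (agg_a + agg_b) % MOD
print(total)  # 14, the sum of the metrics
```

The privacy property is exactly the one ISRG describes: the two processors "do not share data with each other," so each one sees only uniformly random shares, yet their combined sums yield the true aggregate statistic.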

“With Prio, you can get both: the aggregate statistics needed to improve an application or service and maintain the privacy of the people who are providing that data,” said Boneh in a statement. “This system offers a robust solution to two growing demands in our tech-driven economy.”

In 2018 Mozilla began testing Prio to gather Firefox telemetry data and found the cryptographic scheme compelling enough to make it the basis of its Firefox Origin Telemetry service.

[…]

Source: The ones who brought you Let’s Encrypt, bring you: Tools for gathering anonymized app usage metrics from netizens • The Register

Google Will Make It a bit Easier to Turn Off Smart Features which track you, Slightly Harder for Regulators to Break Up Google

Soon, Google will present you with a clear choice to disable smart features, like Google Assistant reminders to pay your bills and predictive text in Gmail. Whether you like the Gmail mind-reader function that autofills “all the best” and “reaching out,” or have long dreaded the arrival of the machine staring back from the void: it’s your world, Google’s just living in it. According to Google.

We’ve always been able to disable these functions if we bothered hunting through account settings. But “in the coming weeks” Google will show a new blanket setting to “turn off smart features,” which will disable features like Smart Compose and Smart Reply in apps like Gmail; the second half of the same prompt controls whether additional Google products — like Maps or Assistant, for example — are allowed to be personalized based on data from Gmail, Meet, and Chat.

Google writes in its blog post about the new-ish settings that humans are not looking at your emails to enable smart features, and Google ads are “not based on your personal data in Gmail,” something CEO Sundar Pichai has likewise said time and again. Google claims to have stopped that practice in 2017, although the following year the Wall Street Journal reported that third-party app developers had freely perused inboxes with little oversight. (When asked whether this is still a problem, the spokesperson pointed us to Google’s 2018 effort to tighten security.)

A Google spokesperson emphasized that the company only uses email contents for security purposes like filtering spam and phishing attempts.

These personalization changes aren’t so much about tightening security as they are another informed-consent defense which Google can use to repel the current regulatory siege being waged against it by lawmakers. Google has expanded incognito mode to Maps and added auto-deletion of data in Location History, Web & App Activity, and YouTube (though only after a period of a few months).

Inquiries in the U.S. and EU have found that Google’s privacy settings have historically presented the appearance of privacy, rather than privacy itself. After a 2018 AP article exposed the extent of Google’s location data harvesting, an investigation found that turning location off in Android was no guarantee that Google wouldn’t collect location data (though Google has denied this.) Plaintiffs in a $5 billion class-action lawsuit filed this summer alleged that “incognito mode” in Chrome didn’t prevent Google from capturing and sharing their browsing history. And last year, French regulators fined Google nearly $57 million for violating the General Data Protection Regulation (GDPR) by allegedly burying privacy controls beneath five or six layers of settings. (When asked, the spokesperson said Google has no additional comment on these cases.)

So this is nice, and also Google’s announcement reads as a letter to regulators. “This new setting is designed to reduce the work of understanding and managing [a choice over how data is processed], in view of what we’ve learned from user experience research and regulators’ emphasis on comprehensible, actionable user choices over data.”

Source: Google Will Make It Easier to Turn Off Smart Features

Apple hits back at European activist lawsuit against unauthorised tracking installs – says it doesn’t use it… but 3rd parties do

The group, led by campaigner Max Schrems, filed complaints with data protection watchdogs in Germany and Spain alleging that the tracking tool illegally enabled the $2 trillion U.S. tech giant to store users’ data without their consent.

Apple directly rebutted the claims filed by Noyb, the digital rights group founded by Schrems, saying they were “factually inaccurate and we look forward to making that clear to privacy regulators should they examine the complaint”.

Schrems is a prominent figure in Europe’s digital rights movement that has resisted intrusive data-gathering by Silicon Valley’s tech platforms. He has fought two cases against Facebook, winning landmark judgments that forced the social network to change how it handles user data.

Noyb’s complaints were brought against Apple’s use of a tracking code, known as the Identifier for Advertisers (IDFA), that is automatically generated on every iPhone when it is set up.

The code, stored on the device, makes it possible to track a user’s online behaviour and consumption preferences – vital in allowing companies to send targeted adverts.

“Apple places codes that are comparable to a cookie in its phones without any consent by the user. This is a clear breach of European Union privacy laws,” Noyb lawyer Stefano Rossetti said.

Rossetti referred to the EU’s e-Privacy Directive, which requires a user’s consent before such information is stored on or accessed from a device.

Apple said in response that it “does not access or use the IDFA on a user’s device for any purpose”.

It said its aim was to protect the privacy of its users and that the latest release of its iOS 14 operating system gave users greater control over whether apps could link with third parties for the purposes of targeted advertising.

Source: Apple hits back at European activist complaints against tracking tool | Reuters

The complaint against Apple is that the IDFA is set at all without the user’s consent. The point is not whether Apple itself accesses it; the point is that unspecified third parties (advertisers, hackers, governments, etc.) can.